Highly Scalable Monitoring System on Chip for Multi-Stream Auto-Adaptable Vision System
Ali Isavudeen, Nicolas Ngan, Eva Dokládalová, M. Akil
Proceedings of the International Conference on Research in Adaptive and Convergent Systems, 2017-09-20. DOI: 10.1145/3129676.3129721 (https://doi.org/10.1145/3129676.3129721)
Abstract
The integration of multiple, technologically heterogeneous sensors (infrared, color, etc.) in vision systems is becoming widespread. The objective is to benefit from multimodal perception to improve the quality and robustness of challenging applications such as advanced driver assistance, 3-D vision, inspection systems, or military observation equipment. However, the multiplication of heterogeneous processing pipelines makes the design of efficient computing resources for multi-sensor systems a very arduous task. In addition to latency-critical applications and a limited power budget, the designer often has to consider sensor parameters that vary dynamically, as well as the number of sensors active at any given moment. To optimize computing resource management, we draw inspiration from self-aware architectures. We propose an original on-chip monitor, complemented by an observation and command network-on-chip, allowing supervision of system resources and their on-the-fly adaptation. We evaluate the proposed monitoring solution through an FPGA implementation and estimate its cost in terms of area occupation and latency. Finally, we show that the proposed solution guarantees processing of 1080p-resolution frames at more than 60 fps.
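For context, the claim of 1080p at more than 60 fps implies a sustained throughput of roughly 124 Mpixels/s per stream. The short sketch below is not from the paper; it is a back-of-envelope check assuming active pixels only (no blanking), just to make the arithmetic behind the figure explicit.

    # Back-of-envelope throughput check for the 1080p @ 60 fps claim.
    # Assumes active pixels only (no horizontal/vertical blanking); a
    # standard 1080p60 video link including blanking runs at a 148.5 MHz
    # pixel clock, so the real-time budget is somewhat tighter than this.
    WIDTH, HEIGHT, FPS = 1920, 1080, 60
    pixels_per_frame = WIDTH * HEIGHT              # 2,073,600 pixels
    pixels_per_second = pixels_per_frame * FPS     # 124,416,000 pixels/s
    print(f"Required sustained throughput: {pixels_per_second / 1e6:.1f} Mpixels/s")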