
Proceedings of the 9th International Conference on Distributed Smart Cameras: Latest Publications

Real-time distributed video coding simulator for 1K-pixel visual sensor
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2802651
Jan Hanca, N. Deligiannis, A. Munteanu
This demonstrator illustrates the performance of our feedback-channel-free distributed video coding system for extremely low-resolution visual sensors. The demonstrator includes a setup in which a low-power sensor capturing 30 x 30-pixel video data is connected to a laptop PC. The video sequence is encoded, decoded and displayed on the computer screen in real time for side-by-side comparison between the original input and the reconstructed data. A software environment allows the user to adjust all the control parameters of the video codec and to evaluate the influence of changes on the visual quality. The objective performance of the coding system can be monitored in terms of bits per pixel, decoding delays, decoding speed and decoding failures.
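As a rough illustration of the objective metrics the abstract lists, the following minimal sketch (not the authors' code; the `decode` callback and the encoded-frame payloads are hypothetical) aggregates bits per pixel, decoding speed and failure count for a stream of 30 x 30-pixel frames:

```python
import time

FRAME_PIXELS = 30 * 30  # resolution of the low-power sensor in the demonstrator

def monitor_stream(encoded_frames, decode):
    """Aggregate bits per pixel, decoding speed and decoding failures."""
    total_bits, failures = 0, 0
    start = time.time()
    for payload in encoded_frames:            # payload: bytes of one encoded frame
        total_bits += 8 * len(payload)
        if decode(payload) is None:           # hypothetical decoder signals failure with None
            failures += 1
    elapsed = max(time.time() - start, 1e-9)
    n = max(len(encoded_frames), 1)
    return {
        "bits_per_pixel": total_bits / (n * FRAME_PIXELS),
        "decoded_frames_per_second": n / elapsed,
        "decoding_failures": failures,
    }
```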
Citations: 1
A novel hybrid architecture for real-time omnidirectional image reconstruction
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2802647
Selman Ergünay, Vladan Popovic, Kerem Seyid, Y. Leblebici
The Panoptic camera is an omnidirectional multi-aperture visual system realized by mounting multiple imaging sensors on a hemispherical frame. It is a spherical light-field camera system that records light information from any direction around its center. The omnidirectional light-field reconstruction algorithm and its centralized and distributed real-time hardware implementations were previously presented by the authors. In this work, we analyze the advantages and disadvantages of the previous approaches and propose a novel high-performance hybrid architecture, based on a tree-based network topology, for real-time omnidirectional image reconstruction. The hybrid architecture increases the scalability of Panoptic camera systems while utilizing fewer resources. Furthermore, the tree-based structure allows implementing further signal processing applications, such as omnidirectional feature extraction, which were not possible in the centralized and distributed implementations.
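As a loose illustration of the tree-based aggregation idea (a sketch only, not the proposed hardware architecture; the combine function and the per-camera values are hypothetical), contributions from individual cameras can be merged pairwise up a balanced binary tree instead of at a single central node:

```python
def tree_reduce(contributions, combine=lambda a, b: a + b):
    """Combine per-camera values with a balanced binary reduction tree."""
    level = list(contributions)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(combine(level[i], level[i + 1]))   # one tree node
        if len(level) % 2:                                 # odd element passes through
            nxt.append(level[-1])
        level = nxt
    return level[0]

# e.g. weighted pixel contributions from 8 cameras of the hemispherical array
print(tree_reduce([0.1, 0.3, 0.05, 0.2, 0.0, 0.15, 0.1, 0.1]))
```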
Citations: 1
Reliable multi-object tracking dealing with occlusions for a smart camera
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789119
Aziz Dziri, M. Duranton, R. Chapuis
In this paper, a multi-object tracking system designed for a low-cost embedded smart camera is proposed. Object tracking is a key step in video-surveillance applications. Because of the number of cameras needed to cover a large area, surveillance applications are constrained by the cost of each node, the power efficiency of the system, the robustness of the tracking algorithm and the need for real-time processing. They require a reliable multi-object tracking algorithm that can run in real time on lightweight computing architectures. In this paper, we propose a tracking pipeline designed for a fixed smart camera that can handle occlusions between objects. We show that the proposed pipeline achieves real-time processing on a Raspberry Pi board equipped with the RaspiCam camera. The tracking quality of the proposed pipeline is evaluated on the publicly available PETS2009 and CAVIAR datasets.
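The following is a minimal, hypothetical data-association step (not the paper's pipeline; the distance threshold and miss budget are assumptions) showing one common way tracks survive short occlusions: when no detection matches a track, it is kept alive for a few frames instead of being deleted.

```python
import math

class Track:
    """A tracked object: an id, its last known centroid and a miss counter."""
    def __init__(self, tid, centroid):
        self.tid, self.centroid, self.missed = tid, centroid, 0

def associate(tracks, detections, max_dist=50.0, max_missed=10):
    """Match detections to tracks by nearest centroid; keep unmatched tracks alive."""
    unmatched = list(detections)
    for tr in tracks:
        if unmatched:
            best = min(unmatched, key=lambda det: math.dist(tr.centroid, det))
            if math.dist(tr.centroid, best) <= max_dist:
                tr.centroid, tr.missed = best, 0
                unmatched.remove(best)
                continue
        tr.missed += 1                       # possibly occluded: hold the track
    tracks = [t for t in tracks if t.missed <= max_missed]
    next_id = max((t.tid for t in tracks), default=-1) + 1
    for det in unmatched:                    # unexplained detections start new tracks
        tracks.append(Track(next_id, det))
        next_id += 1
    return tracks
```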
Citations: 2
Hardware-oriented feature extraction based on compressive sensing
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2802657
Marco Trevisi, R. Carmona-Galán, Á. Rodríguez-Vázquez
Feature extraction is used to reduce the amount of resources required to describe a large set of data. A given feature can be represented by a matrix having the same size as the original image but with relevant values only at some specific points. We can therefore consider these sets to be sparse. Under this premise, many algorithms have been proposed to extract features from compressive samples; none of them, however, is easily described in hardware. We try to bridge the gap between compressive sensing and hardware design by presenting a sparsifying dictionary that allows compressive sensing reconstruction algorithms to recover features. The idea is to use this work as a starting point for the design of a smart imager capable of compressive feature extraction. To prove this concept, we have devised a simulation using Harris corner detection and applied a standard reconstruction method, the NESTA algorithm, to retrieve corners instead of a full image.
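As a loose illustration (a NumPy/SciPy sketch under our own assumptions, not the authors' NESTA-based pipeline), the Harris response named in the abstract is exactly the kind of image-sized but sparse feature map that the sparsifying dictionary is meant to recover from compressive samples:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def harris_response(img, k=0.04, win=3):
    """Harris corner measure R = det(M) - k*trace(M)^2 per pixel."""
    img = img.astype(float)
    iy, ix = np.gradient(img)                    # image gradients
    sxx = uniform_filter(ix * ix, win)           # structure tensor, smoothed
    syy = uniform_filter(iy * iy, win)
    sxy = uniform_filter(ix * iy, win)
    det = sxx * syy - sxy ** 2
    trace = sxx + syy
    return det - k * trace ** 2                  # large positive values indicate corners
```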
Citations: 0
A cost-benefit analysis of an ad-hoc road asset data collection system using fleet-vehicles
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789146
Dana Pordel, L. Petersson
Keeping inventories of road assets up to date is an important activity for road authorities and mapping companies. The information needs to be accurate, as it impacts safety compliance, maintenance, and the ability to efficiently route cars through cities using GPS navigation devices. Such inventories are live documents and need to be updated when additions or other changes occur. Currently, authorities and mapping companies survey the roads for changes using dedicated vehicles, although due to excessive costs they are usually not able to do this more often than every few years. Recent research suggests that the overall costs of a mapping/inventory system can be significantly reduced by using an ad-hoc system of low-cost automatic installations in fleet vehicles such as taxis. This paper proposes a method for performing a cost-benefit analysis of such a system and then applies it to the specific case of the taxi fleet of Beijing. In particular, the analysis considers the random patterns with which taxis travel over time to estimate coverage, cost as a function of the number of installations, and benefit as a function of surveying frequency. Since the additional benefit of a higher surveying frequency declines while the total cost of the system increases with the number of installations, the optimal number of installations that maximises the profit can be computed.
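A toy sketch of the optimisation described in the last sentence (the benefit and cost functions below are made-up placeholders, not the paper's model): with diminishing-returns benefit and linearly growing cost, the optimal number of installations is simply the argmax of the profit curve.

```python
import math

def benefit(n, b_max=100_000.0, rate=0.05):
    """Hypothetical benefit: saturates as added coverage/survey frequency declines."""
    return b_max * (1.0 - math.exp(-rate * n))

def cost(n, per_unit=150.0):
    """Hypothetical cost: grows linearly with the number of installations."""
    return per_unit * n

def optimal_installations(n_max=2000):
    """Number of installations maximising profit = benefit - cost."""
    return max(range(n_max + 1), key=lambda n: benefit(n) - cost(n))

print(optimal_installations())   # argmax of the toy profit curve (about 70 here)
```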
Citations: 2
A passive RGBD sensor for accurate and real-time depth sensing self-contained into an FPGA
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789148
S. Mattoccia, Matteo Poggi
In this paper we describe the strategy adopted to design, from scratch, an embedded RGBD sensor for accurate and dense depth perception on a low-cost FPGA. The device infers dense depth maps at more than 30 Hz using a state-of-the-art stereo vision processing pipeline mapped entirely into the FPGA, without buffering partial results in external memory. The strategy outlined in this paper enables accurate depth computation with low latency and a simple hardware design. On the other hand, it poses major constraints on the computing structure of the algorithms that fit this simplified architecture, and we therefore discuss the solutions devised to overcome these issues. We report experimental results on practical application scenarios in which the proposed RGBD sensor provides accurate, real-time depth sensing suited to the embedded vision domain.
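For readers unfamiliar with the underlying matching principle, here is a software-only sketch of basic SAD block matching (an illustration under assumed window and disparity parameters, not the paper's FPGA pipeline or its cost aggregation):

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, win=5):
    """Per-pixel disparity by minimising the sum of absolute differences (SAD)."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.uint8)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(int)
            costs = [
                np.abs(patch - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(int)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = int(np.argmin(costs))   # best-matching disparity for this pixel
    return disp
```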
Citations: 26
Simulation environment for a vision-system-on-chip with integrated processing
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789133
Peter Reichel, Christoph Hoppe, Jens Döge, Nico Peter
Imagers with programmable, highly parallel signal processing execute computationally intensive processing steps directly on the sensor, thereby allowing early reduction of the data to relevant features. For the purposes of architectural exploration during development of a novel Vision-System-on-Chip (VSoC), it has been modelled at system level. Aside from the integrated control unit with multiple independent control flows, the model also covers digital and analogue signal processing. Due to its high simulation speed and compatibility with the real system, especially regarding the programs to be executed, the resulting simulation model is well suited for use during application development. By providing the ability to purposefully introduce parameter deviations or defects at various points of the analogue processing, it becomes possible to study their influence on image processing algorithms executed within the VSoC.
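To illustrate the kind of parameter-deviation injection the abstract mentions (a hypothetical sketch, not the actual VSoC model; the deviation types and magnitudes are assumptions), per-pixel gain mismatch and stuck pixels can be added to a simulated readout before running an image-processing algorithm on it:

```python
import numpy as np

def simulate_readout(scene, gain_sigma=0.02, defect_rate=0.001, rng=None):
    """Add per-pixel gain mismatch and stuck pixels to a normalised scene in [0, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    gain = rng.normal(1.0, gain_sigma, size=scene.shape)   # fixed-pattern gain error
    readout = scene * gain
    defects = rng.random(scene.shape) < defect_rate        # randomly stuck ("defect") pixels
    readout[defects] = 0.0
    return np.clip(readout, 0.0, 1.0)

# e.g. run the same edge detector on `scene` and on simulate_readout(scene)
# and compare the results to study the influence of the injected deviations
```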
Citations: 3
The eyes of things project
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2802648
Noelia Vállez, José Luis Espinosa-Aranda, O. Déniz-Suárez, Daniel Aguado-Araujo, Gloria Bueno García, Carlos Sanchez-Bueno
The Eyes of Things (EoT) EU H2020 project envisages a computer vision platform that can be used both standalone and embedded into more complex artifacts, particularly for wearable applications, robotics, home products, surveillance, etc. The core hardware will be based on a System on Chip (SoC) designed for maximum performance of always-demanding vision applications while keeping energy consumption at a minimum. This will allow "always on" and truly mobile vision processing. This demo presents the first prototype applications developed within EoT. First, example vision processing applications will be shown. Additionally, an RTSP server implemented in the device, which can capture and stream images, will be demonstrated. Finally, connectivity will be shown using a minimal MQTT broker specifically implemented for the device.
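Purely as an illustration of the MQTT-based connectivity mentioned above (not EoT firmware; the broker address and topic are made up, and paho-mqtt is just a standard client library used here for the example), a client could publish a small status message to such a broker like this:

```python
import paho.mqtt.publish as publish

publish.single(
    topic="eot/demo/status",          # hypothetical topic
    payload="camera online",
    hostname="eot-device.local",      # hypothetical broker address
    port=1883,
)
```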
Citations: 1
Multi-view gait recognition on curved trajectories
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789122
D. López-Fernández, F. J. Madrid-Cuevas, Ángel Carmona Poyato, R. Muñoz-Salinas, R. Carnicer
Appearance changes due to viewing-angle changes cause difficulties for most gait recognition methods. In this paper, we propose a new approach for multi-view recognition that allows recognizing people walking on curved paths. The recognition is based on 3D angular analysis of the movement of the walking human. A coarse-to-fine gait signature represents local variations in the angular measurements over time. A Support Vector Machine is used for classification, and a sliding temporal window with a majority-vote policy is used to smooth and reinforce the classification results. The proposed approach has been experimentally validated on the publicly available Kyushu University 4D Gait Database. The results show that this new approach achieves promising results on the problem of gait recognition on curved paths.
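A minimal sketch of the majority-vote smoothing step described in the abstract (the per-frame labels would come from the SVM classifier; the window size here is only illustrative):

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=15):
    """Replace each frame's label with the majority label of a sliding window."""
    buf, smoothed = deque(maxlen=window), []
    for label in frame_labels:
        buf.append(label)
        smoothed.append(Counter(buf).most_common(1)[0][0])  # majority vote over the window
    return smoothed

print(smooth_predictions(["A", "A", "B", "A", "C", "A", "A"], window=3))
```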
Citations: 5
People tracking with multi-camera system
Pub Date: 2015-09-08, DOI: 10.1145/2789116.2789141
J. Dias, P. Jorge
This paper presents a method for tracking people using multiple cameras. The system is implemented with a two-level processing strategy. At the low level, object trajectories are detected in each camera's image sequence (track detection); this procedure involves active-region extraction and matching. At the high level, all the trajectories extracted from the multi-camera system are related in order to create a global view (track matching), which is accomplished by homography transformations between image planes. The total set of detected trajectories and their relations is represented by a graph. Experimental results are obtained on recorded data sets and the PETS2001 sequence.
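As a small illustration of the homography step (the matrix values and the trajectory are made up; this is not the paper's implementation), a trajectory detected in one camera's image plane can be mapped into a common view before track matching:

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 image points through a 3x3 homography H."""
    pts = np.hstack([points, np.ones((len(points), 1))])   # to homogeneous coordinates
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # back to Euclidean coordinates

H = np.array([[1.02, 0.01, 5.0],     # hypothetical camera-to-common-plane homography
              [0.00, 0.98, -3.0],
              [0.0001, 0.0, 1.0]])
trajectory = np.array([[120.0, 240.0], [125.0, 238.0], [131.0, 236.0]])
print(apply_homography(H, trajectory))
```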
Citations: 3