
Proceedings of the 9th International Conference on Distributed Smart Cameras: Latest Publications

Camera calibration parameters for oriented person re-identification
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789138
Alfredo Gardel Vicente, Jorge García, I. B. Muñoz, F. Espinosa, T. Chateau
Person re-identification is a challenging task when there are strong appearance changes across the different viewpoints of a person captured by a distributed camera network. To better address this issue, a multi-view oriented model of the person has been proposed. In this paper, we analyze the camera calibration parameters required for oriented person re-identification and propose a method to retrieve those values for capturing person perspectives at different known orientations with respect to the camera. Usually, individual camera calibration parameters are not available for a large distributed camera network. A self-calibration method based on short-term trackers of multiple persons is proposed. Only two extrinsic camera calibration parameters are required. Experimental results based on the processing of different public datasets demonstrate the effectiveness of our approach.
{"title":"Camera calibration parameters for oriented person re-identification","authors":"Alfredo Gardel Vicente, Jorge García, I. B. Muñoz, F. Espinosa, T. Chateau","doi":"10.1145/2789116.2789138","DOIUrl":"https://doi.org/10.1145/2789116.2789138","url":null,"abstract":"Person re-identification is a challenging task when there exist strong appearance changes for different viewpoints of the person captured by a distributed camera network. To better solve this issue a multi-view oriented model of the person has been proposed. In this paper, we analyze the camera calibration parameters required to be used in the oriented people re-identification and propose a method to retrieve those values to be used in the capture of people perspectives with different known orientations respect to the camera. Usually, individual camera calibration parameters on a large distributed camera network are not available. A self-calibration method through the usage of short-term trackers of multiple persons is proposed. Only two extrinsic camera calibration parameters are required. Experimental results based on the processing of different public datasets demonstrate the effectiveness of our approach.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121215492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Quasar - a new programming framework for real-time image/video processing on GPU and CPU
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2802654
B. Goossens, Jonas De Vylder, S. Donné, W. Philips
In this demonstration, we present a new programming framework, Quasar, for heterogeneous programming on CPU and single/multi-GPU. Our programming framework consists of a high-level language aimed at relieving the programmer from the hardware-related implementation issues that commonly occur in CPU/GPU programming, allowing the programmer to focus on the specification, design, testing and improvement of the algorithms. We will demonstrate a real-time multi-camera processing application using our integrated development environment (IDE). The IDE offers various image/video processing-related debugging functions and performance profiling features.
{"title":"Quasar - a new programming framework for real-time image/video processing on GPU and CPU","authors":"B. Goossens, Jonas De Vylder, S. Donné, W. Philips","doi":"10.1145/2789116.2802654","DOIUrl":"https://doi.org/10.1145/2789116.2802654","url":null,"abstract":"In this demonstration, we present a new programming framework, Quasar, for heterogeneous programming on CPU and single/multi-GPU. Our programming framework consists of a high-level language that is aimed at relieving the programmer from hardware-related implementation issues that commonly occur in CPU/GPU programming, allowing the programmer to focus on the specification, the design, testing and the improvement of the algorithms. We will demonstrate a real-time multi-camera processing application using our integrated development environment (IDE). The IDE offers various image/video processing-related debugging functions and performance profiling features.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124958501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
A hybrid pose tracking approach for handheld augmented reality
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789128
Juan Li, Maarten Slembrouck, Francis Deboeverie, A. Bernardos, J. Besada, P. Veelaert, H. Aghajan, W. Philips, J. Casar
With the rapid advances in mobile computing, handheld Augmented Reality is drawing increasing attention. Pose tracking of handheld devices is of fundamental importance for registering virtual information with the real world, and it remains a crucial challenge. In this paper, we present a low-cost, accurate and robust approach combining fiducial tracking and inertial sensors for handheld pose tracking. Two LEDs are used as fiducial markers to indicate the position of the handheld device. They are detected by an adaptive thresholding method that is robust to illumination changes, and then tracked by a Kalman filter. By incorporating the inclination information provided by the on-device accelerometer, the 6 degree-of-freedom (DoF) pose is estimated. Handheld devices are freed from computer vision processing, leaving most computing power available for applications. When one LED is occluded, the system is still able to recover the 6-DoF pose. Performance of the proposed tracking approach is evaluated by comparison with ground truth data generated by the state-of-the-art commercial motion tracking system OptiTrack. Experimental results show that the proposed system achieves an accuracy of 1.77 cm in position estimation and 4.15 degrees in orientation estimation.
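The Kalman-filter tracking step described in the abstract can be sketched as follows. This is a minimal constant-velocity illustration, not the authors' implementation; the state layout `[x, y, vx, vy]`, the class name, and the noise parameters are all assumptions:

```python
import numpy as np

# Minimal constant-velocity Kalman filter for tracking a 2D LED
# centroid in image coordinates. State: [x, y, vx, vy].
class LedKalmanTracker:
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])          # state estimate
        self.P = np.eye(4) * 10.0                      # state covariance
        self.F = np.array([[1, 0, dt, 0],              # constant-velocity model
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],               # we only observe (x, y)
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * q                         # process noise
        self.R = np.eye(2) * r                         # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, zx, zy):
        z = np.array([zx, zy])
        y = z - self.H @ self.x                        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

The `predict` step is what bridges frames in which one LED is occluded: the motion model keeps extrapolating the position until a new detection arrives.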
{"title":"A hybrid pose tracking approach for handheld augmented reality","authors":"Juan Li, Maarten Slembrouck, Francis Deboeverie, A. Bernardos, J. Besada, P. Veelaert, H. Aghajan, W. Philips, J. Casar","doi":"10.1145/2789116.2789128","DOIUrl":"https://doi.org/10.1145/2789116.2789128","url":null,"abstract":"With the rapid advances in mobile computing, handheld Augmented Reality draws increasing attention. Pose tracking of handheld devices is of fundamental importance to register virtual information with the real world and is still a crucial challenge. In this paper, we present a low-cost, accurate and robust approach combining fiducial tracking and inertial sensors for handheld pose tracking. Two LEDs are used as fiducial markers to indicate the position of the handheld device. They are detected by an adaptive thresholding method which is robust to illumination changes, and then tracked by a Kalman filter. By combining inclination information provided by the on-device accelerometer, 6 degree-of-freedom (DoF) pose is estimated. Handheld devices are freed from computer vision processing, leaving most computing power available for applications. When one LED is occluded, the system is still able to recover the 6-DoF pose. Performance evaluation of the proposed tracking approach is carried out by comparing with the ground truth data generated by the state-of-the-art commercial motion tracking system OptiTrack. 
Experimental results show that the proposed system has achieved an accuracy of 1.77 cm in position estimation and 4.15 degrees in orientation estimation.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133199630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Multi-camera head pose estimation using an ensemble of exemplars
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789123
Scott Spurlock, Peter Malmgren, Hui Wu, Richard Souvenir
We present a method for head pose estimation for moving targets in multi-camera environments. Our approach utilizes an ensemble of exemplar classifiers for joint head detection and pose estimation and provides finer-grained predictions than previous approaches. We incorporate dynamic camera selection, which allows a variable number of cameras to be selected at each time step and provides a tunable trade-off between accuracy and speed. On a benchmark dataset for multi-camera head pose estimation, our method predicts head pan angle with a mean absolute error of ~ 8° for different moving targets.
{"title":"Multi-camera head pose estimation using an ensemble of exemplars","authors":"Scott Spurlock, Peter Malmgren, Hui Wu, Richard Souvenir","doi":"10.1145/2789116.2789123","DOIUrl":"https://doi.org/10.1145/2789116.2789123","url":null,"abstract":"We present a method for head pose estimation for moving targets in multi-camera environments. Our approach utilizes an ensemble of exemplar classifiers for joint head detection and pose estimation and provides finer-grained predictions than previous approaches. We incorporate dynamic camera selection, which allows a variable number of cameras to be selected at each time step and provides a tunable trade-off between accuracy and speed. On a benchmark dataset for multi-camera head pose estimation, our method predicts head pan angle with a mean absolute error of ~ 8° for different moving targets.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129338297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Video-based activity level recognition for assisted living using motion features
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789140
Sandipan Pal, G. Abhayaratne
Activities of daily living of the elderly are often monitored using passive sensor networks. With the reduction in camera prices, there is growing interest in video-based approaches that provide a smart, safe and independent living environment for the elderly. In this paper, activity level, in the context of tracking the movement pattern of an individual, is explored as a metric for monitoring the daily living of the elderly. Activity level can be an effective indicator of how busy an individual is, obtained by modelling motion features over time. The novel framework uses two different variants of motion features captured from two camera angles and classifies them into different activity levels using neural networks. A new dataset for assisted living research, the Sheffield Activities of Daily Living (SADL) dataset, is used, in which each activity is simulated by 6 subjects and captured under two different illumination conditions within a simulated assisted living environment. The experiments show that the overall detection rate using either a single-camera or a dual-camera setup is above 80%.
{"title":"Video-based activity level recognition for assisted living using motion features","authors":"Sandipan Pal, G. Abhayaratne","doi":"10.1145/2789116.2789140","DOIUrl":"https://doi.org/10.1145/2789116.2789140","url":null,"abstract":"Activities of daily living of the elderly is often monitored using passive sensor networks. With the reduction of camera prices, there is a growing interest of video-based approaches to provide a smart, safe and independent living environment for the elderly. In this paper, activity level in context of tracking the movement pattern of an individual as a metric to monitor the daily living of the elderly is explored. Activity levels can be an effective indicator that would denote the amount of busyness of an individual by modelling motion features over time. The novel framework uses two different variants of the motion features captured from two camera angles and classifies them into different activity levels using neural networks. A new dataset for assisted living research called the Sheffield Activities of Daily Living (SADL) dataset is used where each activity is simulated by 6 subjects and is captured under two different illumination conditions within a simulated assisted living environment. The experiments show that the overall detection rate using a single camera setup and a dual camera setup is above 80%.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122832058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
Abnormal work cycle detection based on dissimilarity measurement of trajectories
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789142
Xingzhe Xie, Dimitri Van Cauwelaert, Maarten Slembrouck, Karel Bauters, Johannes Cottyn, D. V. Haerenborgh, H. Aghajan, P. Veelaert, W. Philips
This paper proposes a method for detecting abnormal executed work cycles of factory workers using their tracks obtained from a multi-camera network. The method allows analyzing both the spatial and the temporal dissimilarity between pairwise tracks. The main novelty of the method is calculating the spatial dissimilarity between pairwise tracks by aligning them using Dynamic Time Warping (DTW) based on coordinate distance, and, in particular, the velocity and dwell-time dissimilarity using a different track alignment based on velocity difference. These dissimilarity measurements are used to cluster the executed work cycles and detect abnormalities. The experimental results show that our algorithm outperforms other methods in clustering the tracks because of the use of temporal dissimilarity.
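The DTW alignment underlying the spatial dissimilarity can be sketched as follows. This is a naive O(nm) illustration with Euclidean point costs; the paper's velocity-based alignment variant is not shown, and the function name is hypothetical:

```python
import numpy as np

def dtw_distance(track_a, track_b):
    """Dynamic Time Warping dissimilarity between two 2D tracks
    (arrays of shape (n, 2)), using Euclidean point distance as
    the local cost."""
    a, b = np.asarray(track_a, float), np.asarray(track_b, float)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)   # cumulative cost matrix
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # stretch track_a
                                 D[i, j - 1],      # stretch track_b
                                 D[i - 1, j - 1])  # match both points
    return D[n, m]
```

Because DTW warps the time axis, two work cycles executed at different speeds along the same spatial path still come out as similar, which is exactly why a separate velocity-based alignment is needed to expose temporal differences.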
{"title":"Abnormal work cycle detection based on dissimilarity measurement of trajectories","authors":"Xingzhe Xie, Dimitri Van Cauwelaert, Maarten Slembrouck, Karel Bauters, Johannes Cottyn, D. V. Haerenborgh, H. Aghajan, P. Veelaert, W. Philips","doi":"10.1145/2789116.2789142","DOIUrl":"https://doi.org/10.1145/2789116.2789142","url":null,"abstract":"This paper proposes a method for detecting the abnormalities of the executed work cycles for the factory workers using their tracks obtained in a multi-camera network. The method allows analyzing both spatial and temporal dissimilarity between the pairwise tracks. The main novelty of the methods is calculating spatial dissimilarity between pair-wise tracks by aligning them using Dynamic Time Warping (DTW) based on coordinate distance, and specially the velocity and dwell time dissimilarity using a different track alignment based on velocity difference. These dissimilarity measurements are used to cluster the executed work cycles and detect abnormalities. The experimental results show that our algorithm outperforms other methods on clustering the tracks because of the use of temporal dissimilarity.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117335083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Parallel image gradient extraction core for FPGA-based smart cameras
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789139
Luca Maggiani, C. Bourrasset, F. Berry, J. Sérot, M. Petracca, C. Salvadori
One of the biggest efforts in designing pervasive Smart Camera Networks (SCNs) is the implementation of complex and computationally intensive computer vision algorithms on resource-constrained embedded devices. For low-level processing, FPGA devices are excellent candidates because they support massive, fine-grained data parallelism with high data throughput. However, while FPGAs offer a way to meet the stringent constraints of real-time execution, their exploitation often requires significant algorithmic reformulation. In this paper, we propose a reformulation of a kernel-based gradient computation module especially suited to FPGA implementation. The resulting algorithm operates on the fly, without the need for video buffers, and delivers a constant throughput. It has been tested and used as the first stage of an application performing extraction of Histograms of Oriented Gradients (HOG). Evaluation shows that its performance and low memory requirement perfectly match low-cost, memory-constrained embedded devices.
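The low-level gradient stage that feeds HOG can be illustrated in software as follows. This is a NumPy sketch using central-difference [-1, 0, 1] kernels; the paper's actual contribution is a streaming FPGA core, which this reference computation does not model:

```python
import numpy as np

def image_gradient(frame):
    """Per-pixel gradient magnitude and unsigned orientation with
    central differences ([-1, 0, 1] kernels), the low-level step
    that typically feeds HOG cell histograms."""
    img = np.asarray(frame, dtype=float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]         # horizontal derivative
    gy[1:-1, :] = img[2:, :] - img[:-2, :]         # vertical derivative
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    return mag, ang
```

Since each output pixel depends only on a 3x3 neighbourhood, a hardware version can compute it from a two-line buffer as pixels stream in, which is what makes a constant-throughput, buffer-free FPGA implementation feasible.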
{"title":"Parallel image gradient extraction core for FPGA-based smart cameras","authors":"Luca Maggiani, C. Bourrasset, F. Berry, J. Sérot, M. Petracca, C. Salvadori","doi":"10.1145/2789116.2789139","DOIUrl":"https://doi.org/10.1145/2789116.2789139","url":null,"abstract":"One of the biggest efforts in designing pervasive Smart Camera Networks (SCNs) is the implementation of complex and computationally intensive computer vision algorithms on resource constrained embedded devices. For low-level processing FPGA devices are excellent candidates because they support massive and fine grain data parallelism with high data throughput. However, if FPGAs offers a way to meet the stringent constraints of real-time execution, their exploitation often require significant algorithmic reformulations. In this paper, we propose a reformulation of a kernel-based gradient computation module specially suited to FPGA implementations. This resulting algorithm operates on-the-fly, without the need of video buffers and delivers a constant throughput. It has been tested and used as the first stage of an application performing extraction of Histograms of Oriented Gradients (HOG). Evaluation shows that its performance and low memory requirement perfectly matches low cost and memory constrained embedded devices.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122896860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A new 360-degree immersive game controller
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2802652
Juan Li, B. Goossens, Maarten Slembrouck, Francis Deboeverie, P. Veelaert, H. Aghajan, W. Philips, J. Casar
In this demo we present a novel approach that enables mobile devices to be used as six degree-of-freedom (DoF) video game controllers. Our approach uses a combination of built-in accelerometers and a multi-camera system to detect the position and orientation of a mobile device in 3D space. The sensor fusion approach is low-cost, accurate, fast and robust. The proposed system allows users to control games with physical movements instead of button presses as in traditional game controllers. Thus, the proposed game controller provides a more immersive gaming experience, letting users feel that they are the players in the game rather than merely controlling the players. Compared to other accelerometer-based game controllers, the proposed system also detects the yaw angle, allowing the controller to work as a pointing device. Another strength of this design is the ability to provide 360-degree gaming experiences.
{"title":"A new 360-degree immersive game controller","authors":"Juan Li, B. Goossens, Maarten Slembrouck, Francis Deboeverie, P. Veelaert, H. Aghajan, W. Philips, J. Casar","doi":"10.1145/2789116.2802652","DOIUrl":"https://doi.org/10.1145/2789116.2802652","url":null,"abstract":"In this demo we present a novel approach that enables to use mobile devices as six degree-of-freedom (DoF) video game controllers. Our approach uses a combination of built-in accelerometers and a multi-camera system to detect the position and orientation of a mobile device in 3D space. The sensor fusion approach is low-cost, accurate, fast and robust. The proposed system allows users to control games with physical movements instead of button-presses as in traditional game controllers. Thus the proposed game controller provides a more immersive gaming experience, letting the users feel that they are the players in the game instead of controlling the players. Compared to other accelerometer-based game controllers, the proposed system also detects the yaw angle, allowing the controller to work as a pointing device. Another featured strength of this design is the ability to provide 360-degree gaming experiences.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114983953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
STC-CAM1, IR-visual based smart camera system
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2802649
Muhammad Imran, M. O’nils, Victor Kardeby, H. Munir
Safety-critical applications require robust, real-time surveillance. For such applications, a vision sensor alone can give false positive results because of poor lighting conditions, occlusion, or adverse weather conditions. In this work, a visual sensor is complemented by an infrared thermal sensor, which makes the system more resilient in unfavorable situations. In the proposed camera architecture, the initial data-intensive tasks are performed locally on the sensor node, and compressed data is then transmitted to a client device where the remaining vision tasks are performed. The proposed camera architecture is demonstrated as a proof of concept; it offers a generic architecture with better surveillance while performing only low-complexity computations on resource-constrained devices.
{"title":"STC-CAM1, IR-visual based smart camera system","authors":"Muhammad Imran, M. O’nils, Victor Kardeby, H. Munir","doi":"10.1145/2789116.2802649","DOIUrl":"https://doi.org/10.1145/2789116.2802649","url":null,"abstract":"Safety-critical applications require robust and real-time surveillance. For such applications, a vision sensor alone can give false positive results because of poor lighting conditions, occlusion, or different weather conditions. In this work, a visual sensor is complemented by an infrared thermal sensor which makes the system more resilient in unfavorable situations. In the proposed camera architecture, initial data intensive tasks are performed locally on the sensor node and then compressed data is transmitted to a client device where remaining vision tasks are performed. The proposed camera architecture is demonstrated as a proof-of-concept and it offers a generic architecture with better surveillance while only performing low complexity computations on the resource constrained devices.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125969054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Person re-identification via efficient inference in fully connected CRF
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789134
Jiuqing Wan, Menglin Xing
In this paper, we address the person re-identification problem, i.e., retrieving instances from a gallery that were generated by the same person as the given probe image. This is very challenging because a person's appearance usually undergoes significant variations due to changes in illumination, camera angle and view, background clutter, and occlusion across the camera network. In this paper, we assume that the matched gallery images should not only be similar to the probe, but also be similar to each other under a suitable metric. We express this assumption with a fully connected CRF model in which each node corresponds to a gallery image and every pair of nodes is connected by an edge. A label variable is associated with each node to indicate whether the corresponding image is from the target person. We define a unary potential for each node using existing feature calculation and matching techniques, reflecting the similarity between the probe and the gallery image, and define a pairwise potential for each edge in terms of a weighted combination of Gaussian kernels, which encode the appearance similarity between pairs of gallery images. The specific form of the pairwise potential allows us to exploit an efficient inference algorithm to calculate the marginal distribution of each label variable in this densely connected CRF. We show the superiority of our method by applying it to public datasets and comparing with the state of the art.
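Mean-field inference in a fully connected binary CRF with Gaussian-kernel pairwise potentials can be sketched as follows. This is a naive O(n²) illustration of the idea only; the paper relies on an efficient inference algorithm that avoids the quadratic kernel computation, and the function name, feature layout and parameters here are hypothetical:

```python
import numpy as np

def dense_crf_meanfield(unary, feats, w=1.0, sigma=1.0, iters=10):
    """Naive mean-field inference for a fully connected binary CRF.
    unary: (n, 2) negative log-likelihoods for labels {0, 1}
           (0 = not target person, 1 = target person).
    feats: (n, d) appearance features; the pairwise affinity is a
           Gaussian kernel k(i, j) = w * exp(-||f_i - f_j||^2 / (2 sigma^2)).
    Returns (n, 2) approximate marginal label distributions."""
    n = len(unary)
    # Precompute the Gaussian kernel matrix (zero diagonal: no self-message).
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    K = w * np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(K, 0.0)

    Q = np.exp(-unary)
    Q /= Q.sum(axis=1, keepdims=True)          # initialize with unaries
    for _ in range(iters):
        # Potts-style message: similar-looking nodes are penalized
        # for taking different labels.
        msg = K @ Q                            # expected agreeing neighbor mass
        E = unary + (msg.sum(axis=1, keepdims=True) - msg)
        Q = np.exp(-E)
        Q /= Q.sum(axis=1, keepdims=True)
    return Q
```

The update captures the abstract's assumption directly: a gallery image that looks like a confidently matched one is pulled toward the same "target person" label, even if its own unary evidence is weak.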
{"title":"Person re-identification via efficient inference in fully connected CRF","authors":"Jiuqing Wan, Menglin Xing","doi":"10.1145/2789116.2789134","DOIUrl":"https://doi.org/10.1145/2789116.2789134","url":null,"abstract":"In this paper, we address the problem of person re-identification problem, i.e., retrieving instances from gallery which are generated by the same person as the given probe image. This is very challenging because the person's appearance usually undergoes significant variations due to changes in illumination, camera angle and view, background clutter, and occlusion over the camera network. In this paper, we assume that the matched gallery images should not only be similar to the probe, but also be similar to each other, under suitable metric. We express this assumption with a fully connected CRF model in which each node corresponds to a gallery and every pair of nodes are connected by an edge. A label variable is associated with each node to indicate whether the corresponding image is from target person. We define unary potential for each node using existing feature calculation and matching techniques, which reflect the similarity between probe and gallery image, and define pairwise potential for each edge in terms of a weighed combination of Gaussian kernels, which encode appearance similarity between pair of gallery images. The specific form of pairwise potential allows us to exploit an efficient inference algorithm to calculate the marginal distribution of each label variable for this dense connected CRF. 
We show the superiority of our method by applying it to public datasets and comparing with the state of the art.","PeriodicalId":113163,"journal":{"name":"Proceedings of the 9th International Conference on Distributed Smart Cameras","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122254019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1