
Proceedings of the 9th International Conference on Distributed Smart Cameras: Latest Publications

Robust and reliable step counting by mobile phone cameras
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789120
Koray Ozcan, Senem Velipasalar
Wearable sensors are being widely used to monitor daily human activities and vital signs. Accelerometer-based step counters are commonly available, especially after being integrated into smartphones and smart watches. Moreover, accelerometer data is also used to measure step length and frequency for indoor positioning systems. Yet, accelerometer-based algorithms are prone to over-counting, since they also count other routine movements, including movements of the phone, as steps. In addition, when users walk very slowly, or when they stop and start walking again, accelerometer-based counting becomes unreliable. Since accurate step detection is very important for indoor positioning systems, more precise alternatives are needed for step detection and counting. In this paper, we present a robust and reliable method for counting footsteps using videos captured with a Samsung Galaxy® S4 smartphone. The performance of the proposed method is compared with existing accelerometer-based step counters. Experiments have been performed with different subjects carrying five mobile devices simultaneously, including smartphones and watches, at different locations on their body. The results show that camera-based step counting has the lowest average error rate across different users, and is more reliable compared to accelerometer-based counters. In addition, the results show the high sensitivity of accelerometer-based step counters to the location of the device and the high variance in their performance across different users.
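The abstract does not spell out the video-based detector itself, so the sketch below is only a rough illustration of turning per-frame motion into a step count; the frame-differencing signal, the peak rule, and all parameter values are assumptions, not the authors' method.

```python
import numpy as np

def motion_signal(frames):
    """Mean absolute difference between consecutive frames (a crude motion cue)."""
    frames = np.asarray(frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def count_steps(signal, threshold=None, min_gap=5):
    """Count local maxima of the motion signal as steps.

    threshold: minimum peak height (defaults to mean + one standard deviation).
    min_gap:   minimum spacing in samples between two counted peaks, so a single
               step spread over a few frames is not counted twice.
    """
    if threshold is None:
        threshold = signal.mean() + signal.std()
    steps, last_peak = 0, -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1] and i - last_peak >= min_gap):
            steps += 1
            last_peak = i
    return steps

# Toy usage: synthetic 48x64 frames whose brightness oscillates like a walking bounce.
frames = [np.full((48, 64), 128 + 40 * np.sin(2 * np.pi * k / 30)) for k in range(300)]
print(count_steps(motion_signal(frames)))   # estimated step count for the synthetic clip
```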
Citations: 19
Compute-efficient eye state detection: algorithm, dataset and evaluations
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789144
Supriya Sathyanarayana, R. Satzoda, T. Srikanthan, S. Sathyanarayana
Eye state can be used as an important cue to monitor the wellness of a patient. In this paper, we propose a computationally efficient eye state detection technique in the context of patient monitoring. The proposed method uses weighted accumulations of intensity and gradients, along with a color thresholding on a reduced set of pixels, to extract the various features of the eye, which in turn are used for inferring the eye state. Additionally, we present a dataset of 2500 images that was created for evaluating the proposed technique. The method was shown to effectively differentiate open, closed and half-closed eye states with an accuracy of 91.3% when evaluated on the dataset. The computational cost of the proposed technique is evaluated and is shown to achieve about 67% savings with respect to the state of the art.
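The paper's exact weights and thresholds are not given here; the following minimal sketch only illustrates the accumulate-and-threshold flavour of such an approach, with the score weights, thresholds and function names being assumptions.

```python
import numpy as np

def eye_state(eye_roi, open_thresh=0.12, half_thresh=0.05):
    """Classify an eye ROI as 'open', 'half-closed' or 'closed'.

    eye_roi: 2-D uint8 grayscale patch centred on the eye.
    The score combines a dark-pixel ratio (visible iris/pupil) with the
    vertical-gradient energy (eyelid/iris edges); both are crude stand-ins
    for the paper's weighted accumulations of intensity and gradients.
    """
    roi = eye_roi.astype(float) / 255.0
    dark_ratio = np.mean(roi < 0.35)               # iris/pupil pixels
    grad_energy = np.abs(np.diff(roi, axis=0)).mean()
    score = 0.7 * dark_ratio + 0.3 * grad_energy
    if score >= open_thresh:
        return "open"
    if score >= half_thresh:
        return "half-closed"
    return "closed"

# Toy usage: an "open" eye has a dark disc in the middle, a "closed" eye is uniform skin tone.
open_eye = np.full((24, 48), 200, dtype=np.uint8)
yy, xx = np.ogrid[:24, :48]
open_eye[(yy - 12) ** 2 + (xx - 24) ** 2 < 64] = 40   # dark iris disc
closed_eye = np.full((24, 48), 200, dtype=np.uint8)
print(eye_state(open_eye), eye_state(closed_eye))
```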
Citations: 2
Cooperative features extraction in visual sensor networks: a game-theoretic approach
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789124
A. Redondi, L. Baroffio, M. Cesana, M. Tagliasacchi
Visual Sensor Networks consist of several camera nodes that perform analysis tasks, such as object recognition. In many cases camera nodes have overlapping fields of view. Such overlap is typically leveraged in two different ways: (i) to improve the accuracy/quality of the visual analysis task by exploiting multi-view information or (ii) to reduce the consumed energy by applying temporal scheduling techniques among the multiple cameras. In this work, we propose a game-theoretic framework based on the Nash Bargaining Solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the energy consumed in the analysis process by exploiting the redundancy in their reciprocal fields of view. Experimental results confirm that the proposed scheme is able to improve the network lifetime, with a negligible loss in terms of visual analysis accuracy.
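To give a flavour of how a Nash Bargaining Solution can split feature-extraction work over an overlapping view, here is a minimal sketch; the utility definition (expected lifetime), the disagreement point, and all numbers are assumptions for illustration, not the paper's model.

```python
import numpy as np

def nash_bargaining_split(energy, idle_cost, overlap_cost, steps=1001):
    """Share of the overlapping view assigned to camera 1 under the NBS.

    energy:       (e1, e2) residual batteries of the two cameras.
    idle_cost:    (c1, c2) per-frame energy each camera spends on its own,
                  non-overlapping area regardless of cooperation.
    overlap_cost: energy per frame needed to extract features from the whole
                  overlapping area; cooperation lets the cameras split it.
    Utility of a camera is its expected lifetime (energy / per-frame drain);
    the disagreement point is "no cooperation", i.e. each camera processes
    the full overlap by itself.
    """
    (e1, e2), (c1, c2) = energy, idle_cost
    d1 = e1 / (c1 + overlap_cost)                 # lifetime without cooperation
    d2 = e2 / (c2 + overlap_cost)
    best_alpha, best_product = None, -np.inf
    for alpha in np.linspace(0.0, 1.0, steps):
        u1 = e1 / (c1 + alpha * overlap_cost)
        u2 = e2 / (c2 + (1.0 - alpha) * overlap_cost)
        if u1 >= d1 and u2 >= d2:                 # individual rationality
            product = (u1 - d1) * (u2 - d2)       # Nash product
            if product > best_product:
                best_alpha, best_product = alpha, product
    return best_alpha

print(nash_bargaining_split(energy=(80.0, 40.0), idle_cost=(10.0, 25.0), overlap_cost=20.0))
```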
Citations: 2
Mask and maskless face classification system to detect breach protocols in the operating room
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2802655
Adrian Nieto-Rodríguez, M. Mucientes, V. Brea
This live demo allows ICDSC participants to interact with a system that classifies faces into two categories: faces with and without surgical masks. The system assigns a per-person ID through tracking in order to trigger only one alarm for a maskless face across several frames in a video. The tracking system also decreases the false positive rate. The system reaches 5 fps with several faces in VGA images on a conventional laptop. The output of our system provides confidence measures for the mask and maskless face detections, image samples of the faces, and the number of frames for which faces have been detected or tracked. This information is very useful for offline tests of the system. Our demo is the result of a project in cooperation with an IT company to identify breach protocols in the operating room.
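The "one alarm per tracked person" behaviour described above can be sketched as a small deduplication layer on top of the tracker's output; the class and parameter names below are hypothetical, not the demo's actual code.

```python
class MaskAlarm:
    """Raise at most one alarm per tracked person ID for a maskless face.

    A detection only fires after it has been seen maskless for `min_frames`
    consecutive frames, which also helps suppress single-frame false positives.
    """

    def __init__(self, min_frames=3):
        self.min_frames = min_frames
        self.maskless_streak = {}   # track_id -> consecutive maskless frames
        self.alarmed = set()        # track_ids that already triggered an alarm

    def update(self, detections):
        """detections: iterable of (track_id, has_mask) for the current frame."""
        alarms = []
        for track_id, has_mask in detections:
            if has_mask:
                self.maskless_streak[track_id] = 0
                continue
            streak = self.maskless_streak.get(track_id, 0) + 1
            self.maskless_streak[track_id] = streak
            if streak >= self.min_frames and track_id not in self.alarmed:
                self.alarmed.add(track_id)
                alarms.append(track_id)
        return alarms

alarm = MaskAlarm(min_frames=2)
for frame in [[(1, False), (2, True)], [(1, False), (2, True)], [(1, False), (2, False)]]:
    print(alarm.update(frame))   # alarms once for ID 1, then stays silent for it
```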
Citations: 10
Detection of visitors in elderly care using a low-resolution visual sensor network
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789137
Mohamed Y. Eldib, Francis Deboeverie, D. V. Haerenborgh, W. Philips, H. Aghajan
Loneliness is a common condition associated with aging and comes with extreme health consequences including decline in physical and mental health, increased mortality and poor living conditions. Detecting and assisting lonely persons is therefore important, especially in the home environment. Current studies usually analyse the Activities of Daily Living (ADL) with a focus on persons living alone, e.g., to detect health deterioration. However, this type of data analysis relies on the assumption that a single person is being analysed, and ADL data analysis becomes less reliable without assessing socialization in seniors for health state assessment and intervention. In this paper, we propose a network of cheap low-resolution visual sensors for the detection of visitors. The visitor analysis starts with visual feature extraction based on foreground/background detection and morphological operations to track the motion patterns in each visual sensor. Then, we utilize the features of the visual sensors to build a Hidden Markov Model (HMM) for the actual detection. Finally, a rule-based classifier is used to compute the number and the duration of visits. We evaluate our framework on a real-life dataset of ten months. The results show a promising visit detection performance when compared to ground truth.
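The final rule-based stage (turning per-frame labels into a visit count and visit durations) can be pictured with the run-length sketch below; the merging and minimum-duration thresholds are assumptions chosen for illustration, not the paper's rules.

```python
def summarize_visits(visitor_present, fps=1.0, min_visit_s=60.0, max_gap_s=30.0):
    """Turn a per-frame visitor/no-visitor sequence into a visit count and durations.

    visitor_present: sequence of booleans, one per frame (e.g. the HMM's output).
    Runs of 'present' separated by gaps shorter than max_gap_s are merged,
    and merged runs shorter than min_visit_s are discarded as noise.
    """
    # 1. Collect raw runs of consecutive 'present' frames as (start, end) indices.
    runs, start = [], None
    for i, present in enumerate(visitor_present):
        if present and start is None:
            start = i
        elif not present and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(visitor_present)))

    # 2. Merge runs separated by short gaps, then drop short visits.
    merged = []
    for run in runs:
        if merged and (run[0] - merged[-1][1]) / fps <= max_gap_s:
            merged[-1] = (merged[-1][0], run[1])
        else:
            merged.append(run)
    durations = [(e - s) / fps for s, e in merged if (e - s) / fps >= min_visit_s]
    return len(durations), durations

# Toy usage at 1 frame/s: a 2-minute visit with a brief 10 s dropout, plus a 20 s false blip.
seq = [False] * 50 + [True] * 70 + [False] * 10 + [True] * 50 + [False] * 100 + [True] * 20 + [False] * 30
print(summarize_visits(seq))   # -> (1, [130.0])
```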
Citations: 13
Low complexity FPGA based background subtraction technique for thermal imagery
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789121
Muhammad Imran, M. O’nils, H. Munir, Benny Thörnberg
Embedded smart camera systems are gaining popularity for a number of real-world surveillance applications. However, challenges remain when employing vision algorithms in outdoor environments, such as variation in illumination, shadows, occlusion, and weather conditions. For safety-critical surveillance applications, the visual sensors can be complemented with beyond-visual-range sensors. This in turn requires analysis, development and modification of existing imaging techniques. In this work, a low-complexity background modelling and subtraction technique is proposed for thermal imagery. The proposed technique has been implemented on Field Programmable Gate Arrays (FPGAs) after in-depth analysis of different sets of images characterized by poor signal-to-noise ratio challenges, e.g. motion of high-frequency background objects, temperature variation and camera jitter. The proposed technique dynamically updates the background at the pixel level and requires storage of only a single frame, as opposed to existing techniques. The comparison of this approach with two other approaches shows that it performs better in different environmental conditions. The proposed technique has been modelled in Register Transfer Logic (RTL), and implementation on the latest FPGAs shows that the design requires less than 1 percent of the logic resources and 47 percent of the block RAMs, and consumes 91 mW on an Artix-7 100T FPGA.
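A software analogue of a pixel-level background model that stores only a single background frame is the running-average scheme sketched below; the learning rate, threshold and selective-update rule are assumptions that illustrate the single-stored-frame idea rather than the paper's exact update logic.

```python
import numpy as np

class RunningAverageBackground:
    """Pixel-level background model that stores a single background frame.

    Each pixel's background estimate is updated with a small learning rate,
    so slow temperature drift is absorbed while fast-moving warm objects are
    flagged as foreground.  This mirrors the single-frame-storage property
    described in the abstract, not the paper's exact update rule.
    """

    def __init__(self, first_frame, learning_rate=0.02, threshold=12.0):
        self.background = first_frame.astype(np.float32)
        self.learning_rate = learning_rate
        self.threshold = threshold

    def apply(self, frame):
        frame = frame.astype(np.float32)
        foreground = np.abs(frame - self.background) > self.threshold
        # Update the background only where the pixel was classified as background,
        # so foreground objects are not absorbed into the model too quickly.
        alpha = np.where(foreground, 0.0, self.learning_rate)
        self.background += alpha * (frame - self.background)
        return foreground

# Toy thermal-like sequence: a flat scene, then a warm blob enters.
scene = np.full((60, 80), 90, dtype=np.uint8)
model = RunningAverageBackground(scene)
warm = scene.copy()
warm[20:30, 30:45] = 160
print(model.apply(warm).sum())   # number of pixels flagged as foreground (the blob)
```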
Citations: 1
Open-source and flexible framework for visual sensor networks
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2802650
L. Bondi, L. Baroffio, M. Cesana, A. Redondi, M. Tagliasacchi
We present an open-source and flexible framework for building VSN applications on top of low-cost and low-power Linux-operated minicomputers. The framework comprises software modules for the different types of nodes in the network (cameras, relays, cooperators and sinks), in addition to a graphical user interface for controlling the network remotely. The flexibility of the framework makes it easy to implement application scenarios characterized by different parameters, such as the wireless communication technology (e.g., 802.11, 802.15.4) or the type of data to be transmitted to the sink (image/video or feature-based data). To demonstrate the flexibility of the proposed framework, two representative applications are showcased: object recognition and parking lot monitoring.
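The configurable dimensions mentioned above (node role, radio technology, payload type) can be pictured as a per-node configuration; the sketch below is purely hypothetical and does not reflect the framework's actual API or class names.

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    CAMERA = "camera"
    RELAY = "relay"
    COOPERATOR = "cooperator"
    SINK = "sink"

class Radio(Enum):
    WIFI_80211 = "802.11"
    ZIGBEE_802154 = "802.15.4"

class Payload(Enum):
    IMAGE = "image"          # send pixels to the sink for centralized analysis
    FEATURES = "features"    # analyze locally and send feature-based data only

@dataclass
class NodeConfig:
    node_id: int
    role: Role
    radio: Radio
    payload: Payload
    sink_address: str

def describe(cfg: NodeConfig) -> str:
    return (f"node {cfg.node_id}: {cfg.role.value} over {cfg.radio.value}, "
            f"sending {cfg.payload.value} to {cfg.sink_address}")

# One camera streaming features over 802.15.4 and one relay forwarding to the sink.
print(describe(NodeConfig(1, Role.CAMERA, Radio.ZIGBEE_802154, Payload.FEATURES, "10.0.0.1")))
print(describe(NodeConfig(2, Role.RELAY, Radio.WIFI_80211, Payload.FEATURES, "10.0.0.1")))
```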
Citations: 6
The advantages and limitations of high level synthesis for FPGA based image processing
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789145
D. Bailey
High level synthesis (HLS) tools can provide significant benefits for implementing image processing algorithms on FPGAs. The higher level (usually C based) representation enables algorithms to be expressed more easily, significantly reducing development times. The higher level also makes design space exploration easier, making it easier to optimise the trade-off between resources and processing speed. However, one danger of using HLS is simply porting existing image processing algorithms onto an FPGA platform. Often, better parallel or pipelined algorithms may be designed which are better suited to the FPGA architecture. Examples will be given ranging from image filtering to connected components analysis to efficient memory management for 2-D frequency-domain filtering.
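One classic example of the restructuring alluded to above is replacing random access to a whole stored frame with a streaming window fed from line buffers. The sketch below is not HLS code; it is a Python illustration of that streaming data flow, with function names and the 3x3 mean filter chosen as assumptions.

```python
import numpy as np
from collections import deque

def streaming_3x3_mean(rows):
    """3x3 mean filter computed in streaming order with two line buffers.

    `rows` is an iterator over image rows.  Instead of holding the whole frame
    (what a naive port of software code would do), only the two previous rows
    are buffered, which matches the line-buffer structure of an FPGA pipeline.
    Border rows/columns are skipped for brevity.
    """
    buffers = deque(maxlen=2)              # the two most recent previous rows
    for row in rows:
        row = np.asarray(row, dtype=float)
        if len(buffers) == 2:
            window = np.stack([buffers[0], buffers[1], row])   # 3 x W strip
            out = (window[:, :-2] + window[:, 1:-1] + window[:, 2:]).sum(axis=0) / 9.0
            yield out                       # filtered values for the middle row
        buffers.append(row)

image = np.arange(36, dtype=float).reshape(6, 6)
filtered = np.stack(list(streaming_3x3_mean(image)))
print(filtered.shape)   # (4, 4): interior of the 6x6 image
```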
Citations: 29
Real-time multi-people tracking by greedy likelihood maximization
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789125
Nyan Bo Bo, Francis Deboeverie, P. Veelaert, W. Philips
Unlike tracking rigid targets, the task of tracking multiple people is very challenging because the appearance and the shape of a person vary depending on the target's location and orientation. This paper presents a new approach to track multiple people with high accuracy using a calibrated monocular camera. Our approach recursively updates the positions of all persons based on the observed foreground image and the previously known location of each person. This is done by maximizing the likelihood of observing the foreground image given the positions of all persons. Since the computational complexity of our approach is low, it can run in real time on smart cameras. When a network of multiple smart cameras overseeing the scene is available, local position estimates from the smart cameras can be fused to produce more accurate joint position estimates. The performance evaluation of our approach on very challenging video sequences from public datasets shows that our tracker achieves high accuracy. When compared to other state-of-the-art tracking systems, our method outperforms them in terms of Multiple Object Tracking Accuracy (MOTA).
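The greedy likelihood-maximization idea can be sketched as follows: render the foreground expected for a set of person positions, score it against the observed foreground mask, and move one person at a time to the best nearby position. The rectangular person model, the Bernoulli likelihood and all parameters below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def render(positions, shape, box=(20, 10)):
    """Binary image of the foreground expected if persons stand at `positions`."""
    expected = np.zeros(shape, dtype=bool)
    h, w = box
    for (r, c) in positions:
        expected[max(r, 0):r + h, max(c, 0):c + w] = True
    return expected

def log_likelihood(foreground, expected, p_fg=0.9, p_bg=0.05):
    """Bernoulli log-likelihood of the observed foreground mask given the expected one."""
    p = np.where(expected, p_fg, p_bg)
    return np.sum(np.where(foreground, np.log(p), np.log(1.0 - p)))

def greedy_update(foreground, prev_positions, search=4, sweeps=2):
    """Greedily move one person at a time to the position that maximizes the likelihood."""
    positions = [tuple(p) for p in prev_positions]
    for _ in range(sweeps):
        for i, (r0, c0) in enumerate(positions):
            best, best_ll = positions[i], -np.inf
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    candidate = positions[:i] + [(r0 + dr, c0 + dc)] + positions[i + 1:]
                    ll = log_likelihood(foreground, render(candidate, foreground.shape))
                    if ll > best_ll:
                        best, best_ll = (r0 + dr, c0 + dc), ll
            positions[i] = best
    return positions

# Two people moved a little between frames; the tracker recovers the new positions.
truth = [(10, 12), (30, 50)]
foreground = render(truth, (64, 80))
print(greedy_update(foreground, prev_positions=[(8, 10), (32, 53)]))   # -> [(10, 12), (30, 50)]
```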
Citations: 6
Using dominant sets for data association in multi-camera tracking
Pub Date : 2015-09-08 DOI: 10.1145/2789116.2789126
A. Hamid, Surafel Melaku Lakew, M. Pelillo, A. Prati
This paper presents a novel approach to solve data association in multi-camera multi-target object tracking. The main novelty is the first known use of the dominant set framework for intra-camera and inter-camera data association. Thanks to the properties of dominant sets, we can treat data association as a global clustering of the detections (people or other targets) obtained over the whole sequence of frames from all the cameras. In order to handle occlusions, splitting and merging of targets, an efficient out-of-sample extension to dominant sets has been introduced to perform data association between different cameras (inter-camera data association). Experiments carried out on the PETS '09 public dataset showed promising performance in terms of accuracy (precision and recall, as well as MOTA) when compared with the state of the art.
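Dominant sets are commonly extracted from an affinity matrix with replicator dynamics; the sketch below shows that standard extraction step on a toy affinity matrix between detections (the affinity values and cutoff are assumptions, and the paper's intra/inter-camera pipeline and out-of-sample extension are not reproduced here).

```python
import numpy as np

def dominant_set(affinity, iterations=200, cutoff=1e-4):
    """Extract one dominant set from a symmetric, non-negative affinity matrix.

    Runs replicator dynamics x <- x * (A x) / (x' A x) from the barycenter;
    entries of the converged x above `cutoff` belong to the dominant set.
    """
    n = affinity.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iterations):
        ax = affinity @ x
        denom = x @ ax
        if denom <= 0:
            break
        x = x * ax / denom
    return np.flatnonzero(x > cutoff), x

# Toy affinity between 6 detections: items 0-2 are mutually similar (same person
# seen by different cameras), items 3-5 are another group, with weak cross links.
A = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.1, 0.0],
    [0.9, 0.0, 0.9, 0.0, 0.1, 0.1],
    [0.8, 0.9, 0.0, 0.1, 0.0, 0.1],
    [0.1, 0.0, 0.1, 0.0, 0.7, 0.8],
    [0.1, 0.1, 0.0, 0.7, 0.0, 0.7],
    [0.0, 0.1, 0.1, 0.8, 0.7, 0.0],
])
members, support = dominant_set(A)
print(members)   # indices of the first cluster, here the tightly linked group 0-2
```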
Citations: 1