Robust and reliable step counting by mobile phone cameras
Koray Ozcan, Senem Velipasalar
Wearable sensors are widely used to monitor daily human activities and vital signs. Accelerometer-based step counters are commonly available, especially since their integration into smartphones and smartwatches, and accelerometer data is also used to measure step length and frequency for indoor positioning systems. Yet accelerometer-based algorithms are prone to over-counting, since they register other routine movements, including movements of the phone itself, as steps. In addition, when users walk very slowly, or stop and start walking again, accelerometer-based counting becomes unreliable. Since accurate step detection is essential for indoor positioning systems, more precise alternatives are needed for step detection and counting. In this paper, we present a robust and reliable method for counting footsteps using videos captured with a Samsung Galaxy® S4 smartphone. The performance of the proposed method is compared with existing accelerometer-based step counters. Experiments were performed with different subjects simultaneously carrying five mobile devices, including smartphones and smartwatches, at different locations on their body. The results show that camera-based step counting has the lowest average error rate across users and is more reliable than accelerometer-based counters. The results also show that accelerometer-based step counters are highly sensitive to device location and vary widely in performance across users.
DOI: https://doi.org/10.1145/2789116.2789120
Compute-efficient eye state detection: algorithm, dataset and evaluations
Supriya Sathyanarayana, R. Satzoda, T. Srikanthan, S. Sathyanarayana
Eye state can be used as an important cue to monitor the wellness of a patient. In this paper, we propose a computationally efficient eye state detection technique in the context of patient monitoring. The proposed method uses weighted accumulations of intensity and gradients, along with color thresholding on a reduced set of pixels, to extract the various features of the eye, which in turn are used to infer the eye state. Additionally, we present a dataset of 2500 images created for evaluating the proposed technique. On this dataset, the method effectively differentiates open, closed and half-closed eye states with an accuracy of 91.3%. The computational cost of the proposed technique is evaluated and shown to achieve about 67% savings with respect to the state of the art.
DOI: https://doi.org/10.1145/2789116.2789144
Cooperative features extraction in visual sensor networks: a game-theoretic approach
A. Redondi, L. Baroffio, M. Cesana, M. Tagliasacchi
Visual sensor networks consist of several camera nodes that perform analysis tasks such as object recognition. In many cases camera nodes have overlapping fields of view. Such overlap is typically leveraged in two different ways: (i) to improve the accuracy/quality of the visual analysis task by exploiting multi-view information, or (ii) to reduce the consumed energy by applying temporal scheduling techniques among the multiple cameras. In this work, we propose a game-theoretic framework based on the Nash Bargaining Solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the energy consumed in the analysis process by exploiting the redundancy in their reciprocal fields of view. Experimental results confirm that the proposed scheme improves the network lifetime with a negligible loss in visual analysis accuracy.
DOI: https://doi.org/10.1145/2789116.2789124
Mask and maskless face classification system to detect breach protocols in the operating room
Adrian Nieto-Rodríguez, M. Mucientes, V. Brea
This live demo allows ICDSC participants to interact with a system that classifies faces into two categories: faces with and without surgical masks. The system assigns a per-person ID through tracking in order to trigger only one alarm for a maskless face across several frames of a video; the tracking also decreases the false positive rate. The system reaches 5 fps with several faces in VGA images on a conventional laptop. The output provides confidence measures for the mask and maskless face detections, image samples of the faces, and the number of frames for which faces have been detected or tracked; this information is very useful for offline tests of the system. Our demo is the result of a project in cooperation with an IT company to identify protocol breaches in the operating room.
DOI: https://doi.org/10.1145/2789116.2802655
Detection of visitors in elderly care using a low-resolution visual sensor network
Mohamed Y. Eldib, Francis Deboeverie, D. V. Haerenborgh, W. Philips, H. Aghajan
Loneliness is a common condition associated with aging and carries severe health consequences, including decline in physical and mental health, increased mortality and poor living conditions. Detecting and assisting lonely persons is therefore important, especially in the home environment. Current studies analyse Activities of Daily Living (ADL), usually focusing on persons living alone, e.g., to detect health deterioration. However, this type of analysis relies on the assumption that a single person is being observed, and ADL analysis becomes less reliable when socialization among seniors is not assessed as part of health state assessment and intervention. In this paper, we propose a network of cheap low-resolution visual sensors for the detection of visitors. The visitor analysis starts with visual feature extraction based on foreground/background detection and morphological operations to track the motion patterns seen by each visual sensor. We then use the features from the visual sensors to build a Hidden Markov Model (HMM) for the actual detection, and finally a rule-based classifier computes the number and duration of visits. We evaluate our framework on a real-life dataset spanning ten months. The results show promising visit detection performance when compared to ground truth.
DOI: https://doi.org/10.1145/2789116.2789137
Low complexity FPGA based background subtraction technique for thermal imagery
Muhammad Imran, M. O’nils, H. Munir, Benny Thörnberg
Embedded smart camera systems are gaining popularity for a number of real-world surveillance applications. However, challenges such as variation in illumination, shadows, occlusion, and weather conditions remain when deploying vision algorithms in outdoor environments. For safety-critical surveillance applications, visual sensors can be complemented with beyond-visual-range sensors, which in turn requires analysis, development and modification of existing imaging techniques. In this work, a low-complexity background modelling and subtraction technique is proposed for thermal imagery. The technique was implemented on Field Programmable Gate Arrays (FPGAs) after in-depth analysis of different sets of images characterizing poor signal-to-noise-ratio challenges, e.g., motion of high-frequency background objects, temperature variation and camera jitter. The proposed technique dynamically updates the background at the pixel level and requires storage of only a single frame, in contrast to existing techniques. Comparison with two other approaches shows that our approach performs better under different environmental conditions. The proposed technique has been modelled at the Register Transfer Level (RTL); implementation results on an Artix-7 100T FPGA show that the design requires less than 1% of the logic resources and 47% of the block RAMs, and consumes 91 mW.
DOI: https://doi.org/10.1145/2789116.2789121
Open-source and flexible framework for visual sensor networks
L. Bondi, L. Baroffio, M. Cesana, A. Redondi, M. Tagliasacchi
We present an open-source and flexible framework for building VSN applications on top of low-cost, low-power Linux-operated minicomputers. The framework comprises software modules for the different types of nodes in the network (cameras, relays, cooperators and sinks), in addition to a graphical user interface for controlling the network remotely. The framework's flexibility makes it easy to implement application scenarios characterized by different parameters, such as the wireless communication technology (e.g., 802.11, 802.15.4) or the type of data transmitted to the sink (image/video or feature-based data). To demonstrate this flexibility, two representative applications are showcased: object recognition and parking lot monitoring.
DOI: https://doi.org/10.1145/2789116.2802650
The advantages and limitations of high level synthesis for FPGA based image processing
D. Bailey
High-level synthesis (HLS) tools can provide significant benefits for implementing image processing algorithms on FPGAs. The higher-level (usually C-based) representation enables algorithms to be expressed more easily, significantly reducing development times. It also simplifies design space exploration, making it easier to optimise the trade-off between resources and processing speed. However, one danger of using HLS is simply porting existing image processing algorithms onto an FPGA platform; often, better parallel or pipelined algorithms may be designed that are better suited to the FPGA architecture. Examples will be given ranging from image filtering to connected components analysis to efficient memory management for 2-D frequency-domain filtering.
DOI: https://doi.org/10.1145/2789116.2789145
Real-time multi-people tracking by greedy likelihood maximization
Nyan Bo Bo, Francis Deboeverie, P. Veelaert, W. Philips
Unlike tracking rigid targets, tracking multiple people is very challenging because the appearance and shape of a person vary with the target's location and orientation. This paper presents a new approach to tracking multiple people with high accuracy using a calibrated monocular camera. Our approach recursively updates the positions of all persons based on the observed foreground image and each person's previously known location. This is done by maximizing the likelihood of observing the foreground image given the positions of all persons. Since the computational complexity of our approach is low, it can run in real time on smart cameras. When a network of multiple smart cameras overseeing the scene is available, local position estimates from the smart cameras can be fused to produce more accurate joint position estimates. Performance evaluation on very challenging video sequences from public datasets shows that our tracker achieves high accuracy and outperforms other state-of-the-art tracking systems in terms of Multiple Object Tracking Accuracy (MOTA).
DOI: https://doi.org/10.1145/2789116.2789125
Using dominant sets for data association in multi-camera tracking
A. Hamid, Surafel Melaku Lakew, M. Pelillo, A. Prati
This paper presents a novel approach to data association in multi-camera, multi-target tracking. The main novelty is the first known use of the dominant sets framework for intra-camera and inter-camera data association. Thanks to the properties of dominant sets, we can treat data association as a global clustering of the detections (people or other targets) obtained over the whole sequence of frames from all cameras. To handle occlusions and the splitting and merging of targets, an efficient out-of-sample extension of dominant sets is introduced to perform data association between different cameras (inter-camera data association). Experiments on the public PETS '09 dataset show promising accuracy (precision and recall, as well as MOTA) compared with the state of the art.
DOI: https://doi.org/10.1145/2789116.2789126