A procedural template for the qualification of imaging as a biomarker, using volumetric CT as an example
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466324
A. Buckler
The proliferation of data from new ways of understanding biology, together with increasing interest in personalized treatments for smaller patient segments, requires new capabilities for assessing therapy response. While advances in imaging technology over the last decade may present an opportunity to meet these needs, the deployment of qualified imaging biomarkers lags the apparent capabilities these advances allow. The lack of consensus methods and of the qualification evidence needed for large-scale multi-center trials, and in turn of the standardization that enables them, is widely acknowledged to be the limiting factor. The current fragmentation in imaging vendor offerings, coupled with the independent activities of individual biopharmaceutical companies and their CROs, may stand in the way of the greater opportunity that could be realized were these efforts drawn together. An integrative activity in which stakeholders collaborate on the methodology and work of qualifying mature candidate biomarkers, while encouraging both innovative development of new biomarkers with the promise of effective qualification as they mature and innovative therapy development that can rely on cost-effective qualification of biomarkers, may provide a more productive overall structure for the collective industries. This report updates the status of a cross-stakeholder effort to qualify imaging biomarkers, using volumetric CT as an example that establishes a procedural template applicable to other biomarkers. A preliminary report of the Quantitative Imaging Biomarkers Alliance (QIBA) activity was presented at the DIA meeting in October 2008 [1]. The clinical context in lung cancer and a methodology for approaching the qualification of volumetric CT as a biomarker of response have been reported [2,3]. The long-term goal of the committee is to qualify the quantification of anatomical structures with x-ray computed tomography (CT) as biomarkers. The group selected solid tumors of the chest in subjects with lung cancer as its first case in point. The rationale for selecting lung cancer included the fact that the systems engineering analysis, groundwork, profile claims documents, and roadmaps for biomarker qualification in this specific setting can serve as a general paradigm for eventually qualifying other imaging biomarkers as well. This report addresses how this procedural template is applied and how it may serve as a methodology for other quantitative imaging biomarkers.
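As a concrete illustration of the underlying measurand, the sketch below computes tumor volume from a binary CT segmentation mask and the signed percent change between two time points. This is a minimal example of my own, not a QIBA-profile-conformant implementation; the masks and voxel spacings are hypothetical.

```python
# Illustrative sketch (not from the paper): a volumetric CT response
# biomarker as percent change in tumor volume between two time points.
import numpy as np

def tumor_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary segmentation mask given voxel spacing (z, y, x) in mm."""
    voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_volume

def percent_volume_change(baseline: float, followup: float) -> float:
    """Signed percent change relative to baseline, the usual response metric."""
    return 100.0 * (followup - baseline) / baseline

# Hypothetical example: random masks standing in for real lesion segmentations.
rng = np.random.default_rng(0)
baseline_mask = rng.random((40, 64, 64)) < 0.05
followup_mask = rng.random((40, 64, 64)) < 0.04
v0 = tumor_volume_mm3(baseline_mask, (1.25, 0.7, 0.7))
v1 = tumor_volume_mm3(followup_mask, (1.25, 0.7, 0.7))
print(f"baseline {v0:.0f} mm^3, follow-up {v1:.0f} mm^3, "
      f"change {percent_volume_change(v0, v1):+.1f}%")
```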
{"title":"A procedural template for the qualification of imaging as a biomarker, using volumetric CT as an example","authors":"A. Buckler","doi":"10.1109/AIPR.2009.5466324","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466324","url":null,"abstract":"Proliferation of data forthcoming from new ways to understand biology as well as the increasing interest in personalized treatments for smaller patient segments in turn requires new capabilities for the assessment of therapy response. While advances in imaging technology over the last decade may present opportunity to meet these needs, deployment of qualified imaging biomarkers lags the apparent technology capabilities allowed by these advances. The lack of consensus methods and qualification evidence needed for large-scale multi-center trials, and in turn the standardization that allows them, are widely acknowledged to be the limiting factor. The current fragmentation in imaging vendor offerings, coupled by the independent activities of individual biopharmaceutical companies and their CROs, may stand in the way of the greater opportunity were these efforts to be drawn together. An integrative activity wherein stakeholders collaborate on the methodology and activity of qualifying mature candidate biomarkers, while encouraging innovative development of new biomarkers with the promise for effective qualification as they mature on the one hand, and innovative therapy development with the ability to rely on cost-effective qualification of biomarkers on the other, may provide a more productive overall structure for the collective industries. This report updates the status of a cross-stakeholder effort to qualify imaging biomarkers, using Volumetric CT as an example that establishes a procedural template that can be applied to other biomarkers. A preliminary report of the Quantitative Imaging Biomarkers Alliance (QIBA) activity was presented at the DIA meeting in October 2008 [1]. The clinical context in Lung Cancer and a methodology for approaching the qualification of volumetric CT as a biomarker of response has been reported [2,3]. The long-term goal of the committee is to qualify the quantification of anatomical structures with x-ray computed tomography (CT) as biomarkers. The group selected solid tumors of the chest in subjects with lung cancer as its first case-in-point. The rationale for selecting lung cancer included the fact that the systems engineering analysis, groundwork, profile claims documents, and roadmaps for biomarker qualification in this specific setting can serve as a general paradigm for eventually qualifying other imaging biomarkers as well. This report addresses the question of how this procedural template is applied and how it may be used for other quantitative imaging biomarkers as a methodology.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134300663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive coherence conditioning
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466283
R. Bonneau
Recently there has been much interest in the design of systems that manage signal and noise environments adaptively, with resource strategies optimized for detection performance. These approaches are particularly important in scenarios where the noise environment can change and thereby affect the amount of resources necessary for detection and estimation. A common way to manage these tradeoffs uses a min-max estimation strategy: handle the worst-case signal and noise distribution and set resources and detection thresholds accordingly. In many of these approaches, however, the number of resources needed to achieve the min-max bound for the worst-case probability is difficult to gauge. We propose an approach that treats resource allocation as a problem in sparse approximation. The idea is to measure the current probability distribution and adapt so as to stay within the worst-case bound while using the minimum number of resources necessary.
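The following sketch is a much-simplified stand-in for the idea of adapting resources to the measured distribution while respecting the worst-case (min-max) allocation. It uses a plain Gaussian mean detector rather than the paper's sparse-approximation machinery, and all parameters are invented.

```python
# Hedged sketch: allocate the number of sensing resources (samples) from the
# *measured* noise level instead of the worst case, never exceeding the
# worst-case (min-max) allocation.
import numpy as np
from statistics import NormalDist

def samples_needed(signal_amp, sigma, pfa=1e-3, pd=0.95):
    """Smallest N so that averaging N samples meets (pfa, pd) for a known
    signal in Gaussian noise: sqrt(N) * A / sigma >= z(1-pfa) - z(1-pd)."""
    z = NormalDist().inv_cdf
    deflection = z(1 - pfa) - z(1 - pd)
    return int(np.ceil((deflection * sigma / signal_amp) ** 2))

# Worst-case (min-max) allocation assumes the largest plausible noise level.
SIGMA_WORST = 3.0
n_worst = samples_needed(signal_amp=1.0, sigma=SIGMA_WORST)

# Adaptive allocation: estimate the current noise level from recent
# signal-free data and allocate only what that estimate requires.
rng = np.random.default_rng(1)
noise_window = rng.normal(0.0, 1.2, size=500)   # hypothetical quiet period
sigma_hat = np.std(noise_window, ddof=1)
n_adaptive = min(samples_needed(1.0, sigma_hat), n_worst)
print(f"worst-case N = {n_worst}, adaptive N = {n_adaptive}")
```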
{"title":"Adaptive coherence conditioning","authors":"R. Bonneau","doi":"10.1109/AIPR.2009.5466283","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466283","url":null,"abstract":"Recently there has been much interest in design of systems to manage signal and noise environments adaptively with resource strategies that are optimized for detection performance. These approaches are particularly important for scenarios where the noise environment can change and therefore affect the amount of resources necessary for detection and estimation. A common way to manage these tradeoffs uses a min-max estimation strategy to handle the worst case signal and noise distribution and set resources and detection thresholds accordingly. In many of these approaches however, the difficulty of setting the number of resources to achieve the min-max bound for the worst case probability are difficult to gauge. We propose an approach that considers resource allocation as a problem in sparse approximation. The idea is to measure the current probability distribution and adapt to stay within the worst case bound while using the minimum number of resources necessary.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129545229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing the surgeon's reality: Smart visualization of bolus time of arrival and blood flow anomalies from time lapse series for safety and speed of cerebrovascular surgery
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466313
Andrew Copeland, R. Mangoubi, Mukund Desai, S. Mitter, A. Malek
A noise-adaptive CUSUM-based algorithm for determining the arrival time of contrast at each spatial location in a 2D time sequence of angiographic images is presented. We employ a new group-wise registration algorithm to remove the effect of patient motion during the acquisition process. Using the registered images, the proposed arrival-time estimator provides accurate results without relying on a priori knowledge of the shape of the time series at each location, or even on the time series at each location having the same shape under translation.
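As a rough illustration of the core detector (not the authors' noise-adaptive variant, and without the group-wise registration step), here is a basic one-sided CUSUM applied to a single pixel's simulated time-intensity curve; the drift and threshold values are arbitrary.

```python
# Hedged sketch: one-sided CUSUM change detection on one pixel's intensity
# time series, as a stand-in for per-location contrast arrival estimation.
import numpy as np

def cusum_arrival_time(series, drift=0.5, threshold=8.0):
    """Return the first index where the cumulative excess over the baseline
    mean (in noise-sigma units) exceeds `threshold`, or -1 if none."""
    baseline = series[:10]                      # assume a contrast-free prefix
    mu, sigma = baseline.mean(), baseline.std(ddof=1) + 1e-12
    score, s = (series - mu) / sigma - drift, 0.0
    for t, z in enumerate(score):
        s = max(0.0, s + z)                     # one-sided CUSUM recursion
        if s > threshold:
            return t
    return -1

# Hypothetical pixel: noise, then a contrast bolus arriving at frame 30.
rng = np.random.default_rng(2)
x = rng.normal(100.0, 2.0, size=80)
x[30:] += 15.0 * (1 - np.exp(-np.arange(50) / 5.0))   # smooth uptake curve
print("estimated arrival frame:", cusum_arrival_time(x))
```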
{"title":"Enhancing the surgeons reality : Smart visualization of bolus time of arrival and blood flow anomalies from time lapse series for safety and speed of cerebrovascular surgery","authors":"Andrew Copeland, R. Mangoubi, Mukund Desai, S. Mitter, A. Malek","doi":"10.1109/AIPR.2009.5466313","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466313","url":null,"abstract":"A noise adaptive Cusum-based algorithm for determining the arrival times of contrast at each spatial location in a 2D time sequence of angiographic images is presented. We employ a new group-wise registration algorithm to remove the effect of patient motions during the acquisition process. By using the registered image the proposed arrival time provides accurate results without relying on a priori knowledge of the shape of the time series at each location or even on the time series at each location having the same shape under translation.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121326400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Retinal venous caliber abnormality: Detection and analysis using matrix edge fields-based simultaneous smoothing and segmentation
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466311
Mukund Desai, R. Mangoubi, J. Danko, L. Aiello, L. Aiello, Jennifer K. Sun, J. Cavallerano
We present a novel approach for detecting and analyzing retinal venous caliber abnormalities (VCAB). We 1) use the noise-adaptive matrix edge field variational energy functional formulation for simultaneous smoothing and segmentation, and 2) analyze its output, the edge field, to demonstrate the ability to recognize the deformations. This contribution is one step toward a wider vision of establishing an automated, low-cost, easy-to-use classification and decision support system for rapid, accurate, and consistent retinal health monitoring and lesion detection and classification.
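Since the matrix edge field formulation is involved, the sketch below shows only a scalar simplification in the Ambrosio-Tortorelli spirit: an image u is smoothed while an edge field v in [0, 1] drops toward 0 along boundaries. It is a toy reconstruction under periodic-boundary and step-size assumptions, not the authors' noise-adaptive matrix formulation.

```python
# Hedged sketch: scalar edge-field smoothing/segmentation (Ambrosio-Tortorelli
# style), a simplified cousin of the paper's matrix edge field approach.
import numpy as np

def forward_diff(a, axis):
    return np.roll(a, -1, axis=axis) - a

def backward_div(px, py):
    # adjoint of the forward difference (periodic boundaries for brevity)
    return (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))

def smooth_and_segment(f, alpha=0.02, beta=1.0, eps=0.05, tau=0.1, iters=200):
    """Alternate a closed-form pointwise edge-field update with explicit
    gradient steps on the smoothed image u."""
    u = f.copy()
    for _ in range(iters):
        ux, uy = forward_diff(u, 1), forward_diff(u, 0)
        v = alpha / (alpha + 4.0 * eps * beta * (ux**2 + uy**2))
        u -= tau * (2.0 * (u - f)
                    - 2.0 * beta * backward_div(v**2 * ux, v**2 * uy))
    return u, v

# Toy demo: a noisy disk; v drops toward 0 along the disk boundary.
rng = np.random.default_rng(7)
yy, xx = np.mgrid[:64, :64]
img = ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2).astype(float)
img += rng.normal(0, 0.05, img.shape)
u, v = smooth_and_segment(img)
print(f"edge pixels (v < 0.5): {(v < 0.5).sum()}")
```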
{"title":"Retinal venous caliber abnormality: Detection and analysis using matrix edge fields-based simultaneous smoothing and segmentation","authors":"Mukund Desai, R. Mangoubi, J. Danko, L. Aiello, L. Aiello, Jennifer K. Sun, J. Cavallerano","doi":"10.1109/AIPR.2009.5466311","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466311","url":null,"abstract":"We present a novel approach for detecting and analyzing Retinal Venous Caliber Abnormalities (VCAB). We use 1) the noise adaptive Matrix Edge Field variational energy functional formulation for simultaneous smoothing and segmentation, and 2) analyze its output, the edge field, to demonstrate the ability to recognize the deformations. This contribution is one step towards a wider vision of establishing an automated, low cost, easy to use classification and decision support system for rapid, accurate, and consistent retinal heath monitoring and lesion detection and classification.1","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"680 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116108282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applications of 3D shape analysis and retrieval
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466293
A. Godil
With recent advances in 3D modeling and scanning technologies, large numbers of 3D models are being created and stored in databases. This has created an impetus to develop effective 3D shape analysis and retrieval algorithms for these domains, and the field has become an active area of research in the 3D community. In this paper we survey several applications where 3D shape analysis and retrieval have been applied effectively. The main applications discussed are 3D human shape analysis, CAD/CAM, structural bioinformatics, and other applications.
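As one concrete example of a retrieval descriptor of the kind such surveys cover, the sketch below computes the classic D2 shape distribution of Osada et al. (a histogram of distances between randomly sampled points) and compares two toy point clouds; the descriptor choice and parameters are mine, not taken from this paper.

```python
# Hedged sketch: D2 shape distribution descriptor for 3D shape retrieval.
import numpy as np

def d2_descriptor(points, n_pairs=20000, bins=64, rng=None):
    """Normalized histogram of distances between randomly sampled point pairs.
    Binning over each shape's own max distance makes it roughly scale-invariant."""
    rng = rng or np.random.default_rng(0)
    i = rng.integers(0, len(points), n_pairs)
    j = rng.integers(0, len(points), n_pairs)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, d.max()))
    return hist / hist.sum()

def descriptor_distance(h1, h2):
    return float(np.abs(h1 - h2).sum())   # L1 distance between histograms

# Hypothetical toy query: a sphere point cloud versus a cube point cloud.
rng = np.random.default_rng(3)
sphere = rng.normal(size=(2000, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
cube = rng.uniform(-1, 1, size=(2000, 3))
print("sphere-vs-cube D2 distance:",
      descriptor_distance(d2_descriptor(sphere), d2_descriptor(cube)))
```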
{"title":"Applications of 3D shape analysis and retrieval","authors":"A. Godil","doi":"10.1109/AIPR.2009.5466293","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466293","url":null,"abstract":"With recent advances in 3D modeling and scanning technologies, large number of 3D models are created and stored in different databases. This has created an impetus to develop effective 3D shape analysis and 3D shape retrieval algorithms for these domains. This has made the field of 3D shape analysis and retrieval become an active area of research in the 3D community. In this paper we will survey few applications where 3D shape analysis and retrieval has been applied effectively. The main applications we have discussed are: 3D human shape analysis; CAD/CAM applications; structural bioinformatics; and other applications.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117132738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational model of cortical neuronal receptive fields for self-motion perception
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466295
Chen-Ping Yu, C. Duffy, W. Page, R. Gaborski
Biologically inspired approaches are an alternative to conventional engineering approaches when developing complex algorithms for intelligent systems. In this paper, we present a novel approach to the computational modeling of primate cortical neurons in the dorsal medial superior temporal area (MSTd), whose primary function is detecting self-motion from optic flow stimuli. Our approach is based on a spatially distributed mixture of Gaussians. Each biological neuron was modeled using a genetic algorithm to determine the parameters of the mixture of Gaussians, resulting in firing-rate responses that accurately match the observed responses of the corresponding biological neurons. We also discuss applying the trained models to machine vision as part of a simple dorsal stream processing model for self-motion detection, with applications to motion analysis and unmanned vehicle navigation.
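A minimal sketch of the fitting idea, under invented stimuli and GA settings: each candidate is a parameter vector for a two-component spatial mixture of Gaussians, and a simple truncation-selection genetic algorithm minimizes the mean squared error against simulated firing rates. The paper's actual receptive-field model and GA are surely richer.

```python
# Hedged sketch: GA fit of a spatial mixture-of-Gaussians response map to
# (simulated) neuron firing rates. All data and settings are invented.
import numpy as np
rng = np.random.default_rng(4)

def response(params, xy):
    """Sum of K isotropic Gaussians; params = K rows of (x0, y0, sigma, gain)."""
    out = np.zeros(len(xy))
    for x0, y0, sigma, gain in params.reshape(-1, 4):
        d2 = (xy[:, 0] - x0) ** 2 + (xy[:, 1] - y0) ** 2
        out += gain * np.exp(-d2 / (2.0 * sigma ** 2 + 1e-9))
    return out

# Synthetic "recorded" neuron: two hot spots sampled on a stimulus grid.
gx, gy = np.meshgrid(np.linspace(-1, 1, 12), np.linspace(-1, 1, 12))
xy = np.column_stack([gx.ravel(), gy.ravel()])
true = np.array([[-0.4, 0.2, 0.3, 1.0], [0.5, -0.3, 0.2, 0.6]]).ravel()
rates = response(true, xy) + rng.normal(0, 0.02, len(xy))

def fitness(p):  # negative mean squared error against the recorded rates
    return -np.mean((response(p, xy) - rates) ** 2)

pop = rng.uniform(-1, 1, size=(80, 8))
pop[:, 2::4] = np.abs(pop[:, 2::4]) + 0.05        # keep sigmas positive
for gen in range(150):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]          # truncation selection
    children = elite[rng.integers(0, 20, 60)] + rng.normal(0, 0.05, (60, 8))
    children[:, 2::4] = np.abs(children[:, 2::4]) + 1e-3
    pop = np.vstack([elite, children])
print("best-fit MSE:", -max(fitness(p) for p in pop))
```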
{"title":"Computational model of cortical neuronal receptive fields for self-motion perception","authors":"Chen-Ping Yu, C. Duffy, W. Page, R. Gaborski","doi":"10.1109/AIPR.2009.5466295","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466295","url":null,"abstract":"Biologically inspired approaches are an alternative to conventional engineering approaches when developing complex algorithms for intelligent systems. In this paper, we present a novel approach to the computational modeling of primate cortical neurons in the dorsal medial superior temporal area (MSTd). Our approach is based-on a spatially distributed mixture of Gaussians, where MST's primary function is detecting self-motion from optic flow stimulus. Each biological neuron was modeled using a genetic algorithm to determine the parameters of the mixture of Gaussians, resulting in firing rate responses that accurately match the observed responses of the corresponding biological neurons. We also present the possibility of applying the trained models to machine vision as part of a simple dorsal stream processing model for self-motion detection, which has applications to motion analysis and unmanned vehicle navigation.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129970066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passive vision: The global webcam imaging network
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466314
Nathan Jacobs, Richard Souvenir, Robert Pless
The web hosts an enormous collection of live cameras that image parks, roads, cities, beaches, mountains, buildings, and parking lots. A wide variety of problems could effectively use this massively distributed, scalable, and already existing camera network. To move toward this goal, this paper discusses ongoing research with the AMOS (Archive of Many Outdoor Scenes) database, which includes images from 1000 cameras captured every half hour over the last 3 years. In particular, we offer (1) algorithms for geo-locating and calibrating these cameras from image data alone, (2) a set of tools to annotate parts of the scene in view (e.g., ground plane, roads, sky, trees), and (3) advances in algorithms to automatically infer weather information (e.g., wind speed, vapor pressure) from image data alone.
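To give the flavor of geolocating a camera from image data alone, the sketch below estimates longitude from the UTC time of the daily brightness peak, a proxy for local solar noon (15 degrees of longitude per hour of offset). This is one simple idea in that spirit, not the paper's algorithm, and the brightness series is synthetic.

```python
# Hedged sketch: webcam longitude from the UTC time of daily brightness peak.
import numpy as np

def longitude_from_solar_noon(timestamps_utc_hours, brightness):
    """Average the UTC hour of each day's peak brightness, then map the
    offset from 12:00 UTC to degrees east at 15 degrees per hour."""
    t = np.asarray(timestamps_utc_hours)
    day = (t // 24).astype(int)
    noons = [t[day == d][np.argmax(brightness[day == d])] % 24
             for d in np.unique(day)]
    return 15.0 * (12.0 - float(np.mean(noons)))

# Fake a camera at longitude -75 deg (solar noon near 17:00 UTC), 30 days,
# one frame every half hour as in the AMOS archive.
rng = np.random.default_rng(5)
t = np.arange(0, 24 * 30, 0.5)
noon_utc = 17.0
bright = np.clip(np.cos((t % 24 - noon_utc) / 24 * 2 * np.pi), 0, None)
bright += rng.normal(0, 0.05, len(t))
print(f"estimated longitude: {longitude_from_solar_noon(t, bright):+.1f} deg")
```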
{"title":"Passive vision: The global webcam imaging network","authors":"Nathan Jacobs, Richard Souvenir, Robert Pless","doi":"10.1109/AIPR.2009.5466314","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466314","url":null,"abstract":"The web has an enormous collection of live cameras that image parks, roads, cities, beaches, mountains, buildings, parking lots. There are a wide variety of problems that could effectively use this massively distributed, scalable, and already existing camera network. To move towards this goal, this paper discusses ongoing research with the AMOS (Archive of Many Outdoor Scenes) database, which includes images from 1000 cameras captured every half hour over the last 3 years. In particular, we offer (1) algorithms for geo-locating and calibrating these cameras just from image data, (2) a set of tools to annotate parts of the scene in view (e.g. ground plane, roads, sky, trees), and (3) advances in algorithms to automatically infer weather information (e.g. wind-speed, vapor pressure) from image data alone.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130384271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote detection of humans and animals
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466303
D. Tahmoush, J. Silvious
Detecting humans and distinguishing them from natural fauna is an important problem in border security applications. In particular, it is important to detect and classify people walking in remote locations and to transmit detections back over extended periods at low cost and with minimal maintenance. Our simulation and measurement work has been relatively successful in providing a qualitative guide to improving our analysis, and has produced a reasonable model for studying signatures using radar micro-Doppler. This paper presents data on humans and animals at multiple angles and directions of motion, as well as features that can be extracted from radar data for classifying subjects as animal versus human.
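One commonly cited micro-Doppler feature for this task is the cadence (limb-swing) frequency; the sketch below extracts it from a simulated radar return via a short-time spectral centroid, then applies a toy threshold rule. The simulation and decision boundary are illustrative assumptions, not the paper's measured pipeline.

```python
# Hedged sketch: cadence-frequency feature from a simulated micro-Doppler
# return (STFT -> per-frame spectral centroid -> dominant oscillation rate).
import numpy as np

def cadence_frequency(signal, fs, win=128, hop=32):
    """Rate (Hz) at which the Doppler spectral centroid oscillates."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(win, 1 / fs)
    centroid = (spec * freqs).sum(1) / (spec.sum(1) + 1e-12)
    env = centroid - centroid.mean()
    env_spec = np.abs(np.fft.rfft(env))
    env_freqs = np.fft.rfftfreq(len(env), hop / fs)
    return env_freqs[1:][np.argmax(env_spec[1:])]   # skip the DC bin

# Simulated return: 100 Hz body Doppler plus limb modulation at 2 Hz cadence.
fs = 1000.0
t = np.arange(0, 8, 1 / fs)
sig = np.cos(2 * np.pi * 100 * t + 30 * np.sin(2 * np.pi * 2.0 * t))
cad = cadence_frequency(sig, fs)
print(f"cadence {cad:.2f} Hz ->",
      "human-like gait" if 1.5 < cad < 2.5 else "other")
```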
{"title":"Remote detection of humans and animals","authors":"D. Tahmoush, J. Silvious","doi":"10.1109/AIPR.2009.5466303","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466303","url":null,"abstract":"Detecting humans and distinguishing them from natural fauna is an important issue in border security applications. In particular, it is important to detect and classify people who are walking in remote locations and transmit back detections over extended periods at a low cost and with minimal maintenance. Our simulation and measurement work has been relatively successful in providing a qualitative guide to improving our analysis, and has produced a reasonable model for studying signatures using radar micro-Doppler. This paper presents data on humans and animals at multiple angles and directions of motion, as well as features that can be extracted from radar data for the classification as animal versus human.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131798584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards real-time hardware gamma correction for dynamic contrast enhancement
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466305
J. Scott, M. Pusateri
Making the transition between digital video imagery acquired by a focal plane array and imagery useful to a human operator is not a simple process. The focal plane array "sees" the world in a fundamentally different way than the human eye, and gamma correction has historically been used to help bridge the gap. Gamma correction is a non-linear mapping of intensity from input to output in which the parameter gamma can be adjusted to improve the imagery's visual appeal. In analog video systems, gamma correction is performed with analog circuitry and adjusted manually. With a digital video stream, gamma correction can be implemented as mathematical operations in a digital circuit; in addition to manual control, it can also be adjusted automatically to compensate for changes in the scene.
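A minimal sketch of how this is typically realized digitally: gamma correction as a precomputed integer lookup table (one memory read per pixel, hence hardware-friendly), plus a simple automatic rule that picks gamma so the scene median maps near mid-scale. The auto-gamma rule is an assumption of mine, not the paper's method.

```python
# Hedged sketch: LUT-based gamma correction with a simple auto-gamma rule.
import numpy as np

def gamma_lut(gamma, bits_in=10, bits_out=8):
    """Precomputed LUT: out = max_out * (in / max_in) ** gamma."""
    x = np.arange(2 ** bits_in) / (2 ** bits_in - 1)
    return np.round((2 ** bits_out - 1) * x ** gamma).astype(np.uint8)

def auto_gamma(frame, bits_in=10):
    """Pick gamma so the median code maps near mid-scale: m ** g = 0.5."""
    m = np.clip(np.median(frame) / (2 ** bits_in - 1), 1e-3, 1 - 1e-3)
    return float(np.clip(np.log(0.5) / np.log(m), 0.25, 4.0))

# Hypothetical dark 10-bit sensor frame; LUT indexing is one memory read per
# pixel, which is why this maps well onto FPGA hardware.
rng = np.random.default_rng(6)
frame = rng.gamma(2.0, 60.0, size=(480, 640)).astype(np.uint16).clip(0, 1023)
g = auto_gamma(frame)
corrected = gamma_lut(g)[frame]
print(f"auto gamma = {g:.2f}, output median = {np.median(corrected):.0f}")
```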
{"title":"Towards real-time hardware gamma correction for dynamic contrast enhancement","authors":"J. Scott, M. Pusateri","doi":"10.1109/AIPR.2009.5466305","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466305","url":null,"abstract":"Making the transition between digital video imagery acquired by a focal plane array and imagery useful to a human operator is not a simple process. The focal plane array “sees” the world in a fundamentally different way than the human eye. Gamma correction has been historically used to help bridge the gap. The gamma correction process is a non-linear mapping of intensity from input to output where the parameter gamma can be adjusted to improve the imagery's visual appeal. In analog video systems, gamma correction is performed with analog circuitry and is adjusted manually. With a digital video stream, gamma correction can be provided using mathematical operations in a digital circuit. In addition to manual control, gamma correction can also be automatically adjusted to compensate for changes in the scene.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131645999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image-based querying of urban photos and videos
Pub Date: 2009-10-01 | DOI: 10.1109/AIPR.2009.5466321
P. Cho, Soonmin Bae, F. Durand
We extend recent automated computer vision algorithms to reconstruct the global three-dimensional structure of photos and videos shot at fixed points in outdoor city environments. Mosaics of digital stills and embedded videos are georegistered by matching a few of their 2D features with 3D counterparts in aerial ladar imagery. Once image planes are aligned with world maps, abstract urban knowledge can propagate from the latter into the former. We project geotagged annotations from a 3D map into a 2D video stream and demonstrate that they track buildings and streets in a clip with significant panning motion. We also present an interactive tool that enables users to select city features of interest in video frames and retrieve their geocoordinates and ranges. Implications of this work for future augmented reality systems based on mobile smart phones are discussed.
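The basic operation behind propagating map annotations into video is a pinhole projection of a geotagged 3D point through the recovered camera pose; the sketch below shows that step with made-up intrinsics and pose, not the paper's georegistration pipeline.

```python
# Hedged sketch: project a geotagged 3D map point into a video frame with a
# standard pinhole camera model. Intrinsics K and pose (R, t) are invented.
import numpy as np

def project(point_world, R, t, K):
    """World point -> pixel (u, v) and range, for camera x_cam = R @ X + t."""
    x_cam = R @ point_world + t
    if x_cam[2] <= 0:
        return None, None                       # behind the camera
    uvw = K @ x_cam
    return uvw[:2] / uvw[2], float(np.linalg.norm(x_cam))

# Hypothetical calibrated camera: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)                   # camera at the world origin
building_corner = np.array([12.0, -3.0, 80.0])  # annotated 3D map point (m)
pix, range_m = project(building_corner, R, t, K)
print(f"annotation lands at pixel {pix.round(1)}, range {range_m:.1f} m")
```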
{"title":"Image-based querying of urban photos and videos","authors":"P. Cho, Soonmin Bae, F. Durand","doi":"10.1109/AIPR.2009.5466321","DOIUrl":"https://doi.org/10.1109/AIPR.2009.5466321","url":null,"abstract":"We extend recent automated computer vision algorithms to reconstruct the global three-dimensional structures for photos and videos shot at fixed points in outdoor city environments. Mosaics of digital stills and embedded videos are georegistered by matching a few of their 2D features with 3D counterparts in aerial ladar imagery. Once image planes are aligned with world maps, abstract urban knowledge can propagate from the latter into the former. We project geotagged annotations from a 3D map into a 2D video stream and demonstrate their tracking buildings and streets in a clip with significant panning motion. We also present an interactive tool which enables users to select city features of interest in video frames and retrieve their geocoordinates and ranges. Implications of this work for future augmented reality systems based upon mobile smart phones are discussed.","PeriodicalId":266025,"journal":{"name":"2009 IEEE Applied Imagery Pattern Recognition Workshop (AIPR 2009)","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115267287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}