Multi-fiber reconstruction from DW-MRI using a continuous mixture of von Mises-Fisher distributions
Pub Date: 2008-07-15 | DOI: 10.1109/CVPRW.2008.4562991
Ritwik K. Kumar, Angelos Barmpoutis, B. Vemuri, P. Carney, T. Mareci
In this paper we propose a method for reconstructing the Diffusion Weighted Magnetic Resonance (DW-MR) signal at each lattice point using a novel continuous mixture of von Mises-Fisher distribution functions. Unlike most existing methods, this model neither assumes a fixed functional form for the MR signal attenuation (e.g., a 2nd- or 4th-order tensor) nor arbitrarily fixes important mixture parameters such as the number of components. We show that this continuous mixture has a closed-form expression and leads to a linear system which can be easily solved. Through extensive experimentation with synthetic data we show that this technique outperforms various other state-of-the-art techniques in resolving fiber crossings. Finally, we demonstrate the effectiveness of this method using real DW-MRI data from a rat brain and optic chiasm.
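The abstract's key computational claim is that reconstruction reduces to a linear system. A minimal sketch of that structure, using a discretized stand-in rather than the paper's closed-form continuous mixture: von Mises-Fisher kernels are evaluated at the diffusion gradient directions, and nonnegative mixture weights are recovered by constrained least squares (the direction grid, the kappa value, and the NNLS solver are illustrative assumptions, not the paper's method).

```python
import numpy as np
from scipy.optimize import nnls

def vmf(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere S^2."""
    c = kappa / (4.0 * np.pi * np.sinh(kappa))
    return c * np.exp(kappa * np.dot(mu, x))

def fit_weights(signal, gradients, directions, kappa=20.0):
    """signal: measured attenuations (n,); gradients: (n, 3) unit gradient
    directions; directions: (m, 3) candidate fiber orientations that
    discretize the mixing density."""
    A = np.array([[vmf(g, mu, kappa) for mu in directions]
                  for g in gradients])       # (n, m) basis matrix
    w, _ = nnls(A, signal)                   # nonnegative mixture weights
    return w
```

Peaks of `w` over the direction grid then indicate distinct fiber populations, which is how a crossing would be resolved in this discretized reading.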
{"title":"Multi-fiber reconstruction from DW-MRI using a continuous mixture of von Mises-Fisher distributions","authors":"Ritwik K. Kumar, Angelos Barmpoutis, B. Vemuri, P. Carney, T. Mareci","doi":"10.1109/CVPRW.2008.4562991","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562991","url":null,"abstract":"In this paper we propose a method for reconstructing the Diffusion Weighted Magnetic Resonance (DW-MR) signal at each lattice point using a novel continuous mixture of von Mises-Fisher distribution functions. Unlike most existing methods, neither does this model assume a fixed functional form for the MR signal attenuation (e.g. 2nd or 4th order tensor) nor does it arbitrarily fix important mixture parameters like the number of components. We show that this continuous mixture has a closed form expression and leads to a linear system which can be easily solved. Through extensive experimentation with synthetic data we show that this technique outperforms various other state-of-the-art techniques in resolving fiber crossings. Finally, we demonstrate the effectiveness of this method using real DW-MRI data from rat brain and optic chiasm.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133412094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effective image database search via dimensionality reduction
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562957
A. Dahl, H. Aanæs
This paper further investigates image search using the bag-of-words image representation. This approach has shown promising results for large-scale image collections, making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image, and it is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that, we substantially reduce the dimensionality of the features by the use of PCA and the addition of color. Building the visual vocabulary is typically done using k-means; we instead investigate a clustering algorithm based on the leader-follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary. In the query step, features from the query image are assigned to the visual vocabulary. The dimensionality reduction enables us to do exact feature labeling using a kD-tree, instead of the approximate approaches normally used. Despite reducing the dimensionality to between 6 and 15 dimensions, we obtain improved results compared to the traditional bag-of-words approach based on 128-dimensional SIFT features and k-means clustering.
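The abstract names leader-follower clustering but not its exact update rule. A minimal sketch of the generic LF principle, where the distance threshold and learning rate are illustrative assumptions:

```python
import numpy as np

def leader_follower(features, threshold, lr=0.05):
    """Leader-follower clustering: assign each feature to the nearest
    existing cluster centre if it lies within `threshold`, nudging that
    centre toward the feature; otherwise open a new cluster. The number
    of clusters is thus data-driven rather than fixed in advance."""
    centres = [features[0].astype(float).copy()]
    for f in features[1:]:
        d = np.linalg.norm(np.asarray(centres) - f, axis=1)
        j = int(np.argmin(d))
        if d[j] < threshold:
            centres[j] += lr * (f - centres[j])   # follower update
        else:
            centres.append(f.astype(float).copy())  # new leader
    return np.asarray(centres)
```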
{"title":"Effective image database search via dimensionality reduction","authors":"A. Dahl, H. Aanæs","doi":"10.1109/CVPRW.2008.4562957","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562957","url":null,"abstract":"Image search using the bag-of-words image representation is investigated further in this paper. This approach has shown promising results for large scale image collections making it relevant for Internet applications. The steps involved in the bag-of-words approach are feature extraction, vocabulary building, and searching with a query image. It is important to keep the computational cost low through all steps. In this paper we focus on the efficiency of the technique. To do that we substantially reduce the dimensionality of the features by the use of PCA and addition of color. Building of the visual vocabulary is typically done using k-means. We investigate a clustering algorithm based on the leader follower principle (LF-clustering), in which the number of clusters is not fixed. The adaptive nature of LF-clustering is shown to improve the quality of the visual vocabulary using this. In the query step, features from the query image are assigned to the visual vocabulary. The dimensionality reduction enables us to do exact feature labeling using kD-tree, instead of approximate approaches normally used. Despite the dimensionality reduction to between 6 and 15 dimensions we obtain improved results compared to the traditional bag-of-words approach based on 128 dimensional SIFT feature and k-means clustering.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124422969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interleaved pixel lookup for embedded computer vision
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563152
Kota Yamaguchi, Yoshihiro Watanabe, T. Komuro, M. Ishikawa
This paper describes an in-depth investigation and implementation of interleaved memory for pixel lookup operations in computer vision. Pixel lookup, the mapping between coordinates and pixels, is a common operation in computer vision, but is also a potential bottleneck due to formidable bandwidth requirements for real-time operation. We focus on accelerating pixel lookup by parallelizing memory banks through interleaving. The key to applying interleaving to pixel lookup is 2D block data partitioning together with support for unaligned access. With this optimization, pixel lookup operations can output a block of pixels at once without major overhead for unaligned access. An example implementation of our optimized interleaved memory for affine motion tracking shows that pixel lookup can achieve 12.8 Gbps for random lookups of 4×4 blocks of 8-bit pixels at 100 MHz. Interleaving can be a cost-effective solution for fast pixel lookup in embedded computer vision.
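The core of such a scheme is the address arithmetic: with a BX×BY grid of banks, any 4×4 block, aligned or not, touches each bank exactly once, so all 16 pixels can be fetched in parallel. A sketch of that mapping (the bank grid and image width are illustrative assumptions, not the paper's exact layout):

```python
BX, BY = 4, 4     # bank grid (assumed): 16 parallel banks
WIDTH = 640       # image width in pixels (assumed)

def bank_of(x, y):
    """Bank holding pixel (x, y) under 2D block interleaving."""
    return (y % BY) * BX + (x % BX)

def local_address(x, y):
    """Word address of pixel (x, y) inside its bank."""
    return (y // BY) * (WIDTH // BX) + (x // BX)
```

Because the residues (x+i) mod 4 and (y+j) mod 4 each take all four values exactly once over a 4×4 block, every bank contributes one pixel per access; at 8 bits per pixel and one access per cycle this matches the reported 16 pixels × 8 bits × 100 MHz = 12.8 Gbps.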
{"title":"Interleaved pixel lookup for embedded computer vision","authors":"Kota Yamaguchi, Yoshihiro Watanabe, T. Komuro, M. Ishikawa","doi":"10.1109/CVPRW.2008.4563152","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563152","url":null,"abstract":"This paper describes an in-depth investigation and implementation of interleaved memory for pixel lookup operations in computer vision. Pixel lookup, mapping between coordinates and pixels, is a common operation in computer vision, but is also a potential bottleneck due to formidable bandwidth requirements for real-time operation. We focus on the acceleration of pixel lookup operations through parallelizing memory banks by interleaving. The key to applying interleaving for pixel lookup is 2D block data partitioning and support for unaligned access. With this optimization of interleaving, pixel lookup operations can output a block of pixels at once without major overhead for unaligned access. An example implementation of our optimized interleaved memory for affine motion tracking shows that the pixel lookup operations can achieve 12.8 Gbps for random lookup of a 4x4 size block of 8-bit pixels under 100 MHz operation. Interleaving can be a cost-effective solution for fast pixel lookup in embedded computer vision.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114644999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time estimation of human attention field in LWIR and color surveillance videos
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563059
A. Leykin, R. Hammoud
Knowing the visual attention field of a monitored subject is of great value for many applications, including surveillance and marketing. This paper proposes to first track people's bodies and then estimate the visual attention field for each person using head pose information. The proposed head pose technique aims at estimating the yaw angle only. The method is shown to operate on monocular color camera sequences and is further refined with data from a thermal sensor. In typical monocular tracking sequences the resolution of the head is very low, parts of the head are occluded, and the face is often invisible to the camera. We propose a method of combining a skin color detector with the direction of motion in a probabilistic way, and we show how a head profile obtained from the thermal sequence can be used to further improve the result.
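The abstract says the skin-colour and motion-direction cues are combined probabilistically but gives no model. One plausible reading, with both likelihoods and their concentration parameters purely illustrative, is a product of two circular (von Mises-style) likelihoods over yaw:

```python
import numpy as np

yaw_bins = np.deg2rad(np.arange(0, 360, 10))   # candidate yaw angles

def von_mises_like(theta, mu, kappa):
    """Unnormalized circular likelihood centred at mu."""
    return np.exp(kappa * np.cos(theta - mu))

def fuse_yaw(motion_dir, skin_ratio):
    """motion_dir: walking direction in radians; skin_ratio: fraction of
    skin-coloured pixels in the head region. Convention (assumed):
    yaw = pi means the face points toward the camera."""
    p_motion = von_mises_like(yaw_bins, motion_dir, kappa=2.0)
    p_skin = von_mises_like(yaw_bins, np.pi, kappa=4.0 * skin_ratio)
    return yaw_bins[int(np.argmax(p_motion * p_skin))]
```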
{"title":"Real-time estimation of human attention field in LWIR and color surveillance videos","authors":"A. Leykin, R. Hammoud","doi":"10.1109/CVPRW.2008.4563059","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563059","url":null,"abstract":"Knowing the visual attention field of a monitored subject is of great value for many applications including surveillance and marketing. This paper proposes first to track peoplepsilas bodies, and then estimates visual attention field for each human using head pose information. The proposed head pose technique aims at estimating the yaw angle only. The method is shown to operate on monocular color camera sequences and is further refined with the data from a thermal sensor. In typical monocular tracking sequences the resolution of the head is very low and parts of the head are occluded with the face often invisible to the camera. We propose a method of combining a skin color detector with the direction of motion in a probabilistic way. We show how head profile obtained from the thermal sequence can be used to further improve the result.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"128 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115831059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ToF-sensors: New dimensions for realism and interactivity
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563159
A. Kolb, E. Barth, R. Koch
A growing number of applications depend on accurate and fast 3D scene analysis. Examples are object recognition, collision prevention, 3D modeling, mixed reality, and gesture recognition. The estimation of a range map by image analysis or laser scan techniques is still a time-consuming and expensive part of such systems. A lower-priced, fast, and robust alternative for distance measurement is the time-of-flight (ToF) camera. Recently, significant improvements have been made toward low-cost and compact ToF devices that have the potential to revolutionize many fields of research, including computer vision, computer graphics, and human-computer interaction (HCI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become "ubiquitous geometry devices" for gaming, web-conferencing, and numerous other applications. This paper gives an account of some recent developments in ToF technology and discusses applications of this technology for vision, graphics, and HCI.
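For readers new to the sensor principle: a continuous-wave ToF pixel measures the phase shift between emitted and reflected modulated light, typically from four phase samples, and depth follows directly from the phase. A textbook sketch (the 20 MHz modulation frequency is an assumption; devices differ):

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
F_MOD = 20e6        # modulation frequency (assumed; 20 MHz is typical)

def tof_depth(a0, a1, a2, a3):
    """Depth from the standard four-phase (four-bucket) samples of a
    continuous-wave ToF pixel, taken at 0, 90, 180, and 270 degrees."""
    phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)
```

At 20 MHz the measurement is unambiguous up to c / (2 · F_MOD) = 7.5 m, one reason modulation frequency is a key design parameter for these cameras.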
{"title":"ToF-sensors: New dimensions for realism and interactivity","authors":"A. Kolb, E. Barth, R. Koch","doi":"10.1109/CVPRW.2008.4563159","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563159","url":null,"abstract":"A growing number of applications depend on accurate and fast 3D scene analysis. Examples are object recognition, collision prevention, 3D modeling, mixed reality, and gesture recognition. The estimation of a range map by image analysis or laser scan techniques is still a time- consuming and expensive part of such systems. A lower-priced, fast and robust alternative for distance measurements are time-of-flight (ToF) cameras. Recently, significant improvements have been made in order to achieve low-cost and compact ToF-devices, that have the potential to revolutionize many fields of research, including computer vision, computer graphics and human computer interaction (HCI). These technologies are starting to have an impact on research and commercial applications. The upcoming generation of ToF sensors, however, will be even more powerful and will have the potential to become \"ubiquitous geometry devices\" for gaming, web-conferencing, and numerous other applications. This paper will give an account of some recent developments in ToF-technology and will discuss applications of this technology for vision, graphics, and HCI.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124262077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D model search and pose estimation from single images using VIP features
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563037
Changchang Wu, F. Fraundorfer, Jan-Michael Frahm, M. Pollefeys
This paper describes a method to efficiently search for 3D models in a city-scale database and to compute camera poses from single query images. The proposed method matches SIFT features (from a single image) to viewpoint-invariant patches (VIP) from a 3D model by warping the SIFT features approximately into the orthographic frame of the VIP features. This significantly increases the number of feature correspondences, which results in reliable and robust pose estimation. We also present a 3D model search tool that uses a visual-word-based search scheme to efficiently retrieve 3D models from large databases using individual query images. Together, the 3D model search and the pose estimation form a highly scalable and efficient city-scale localization system. The performance of both components is demonstrated on urban image data.
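The abstract says only that retrieval is "visual word based"; a minimal sketch of such a scheme under the common tf-idf bag-of-words assumption (the weighting choice is ours, not confirmed by the paper):

```python
import numpy as np
from collections import Counter

def build_index(db_words, vocab_size):
    """db_words: one list of visual-word ids per 3D model."""
    df = np.zeros(vocab_size)
    for words in db_words:
        for w in set(words):
            df[w] += 1
    idf = np.log(len(db_words) / np.maximum(df, 1))
    vecs = []
    for words in db_words:
        v = np.zeros(vocab_size)
        for w, n in Counter(words).items():
            v[w] = n * idf[w]
        vecs.append(v / (np.linalg.norm(v) + 1e-12))
    return np.asarray(vecs), idf

def query(q_words, vecs, idf, vocab_size):
    """Return model indices ranked by cosine similarity, best first."""
    v = np.zeros(vocab_size)
    for w, n in Counter(q_words).items():
        v[w] = n * idf[w]
    v /= np.linalg.norm(v) + 1e-12
    return np.argsort(-(vecs @ v))
```

The top-ranked models would then be verified by the VIP-based pose estimation the abstract describes.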
{"title":"3D model search and pose estimation from single images using VIP features","authors":"Changchang Wu, F. Fraundorfer, Jan-Michael Frahm, M. Pollefeys","doi":"10.1109/CVPRW.2008.4563037","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563037","url":null,"abstract":"This paper describes a method to efficiently search for 3D models in a city-scale database and to compute the camera poses from single query images. The proposed method matches SIFT features (from a single image) to viewpoint invariant patches (VIP) from a 3D model by warping the SIFT features approximately into the orthographic frame of the VIP features. This significantly increases the number of feature correspondences which results in a reliable and robust pose estimation. We also present a 3D model search tool that uses a visual word based search scheme to efficiently retrieve 3D models from large databases using individual query images. Together the 3D model search and the pose estimation represent a highly scalable and efficient city-scale localization system. The performance of the 3D model search and pose estimation is demonstrated on urban image data.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122722622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3-D gesture-based scene navigation in medical imaging applications using Time-of-Flight cameras
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563162
S. Soutschek, J. Penne, J. Hornegger, J. Kornhuber
For many applications, and particularly for medical intra-operative applications, the exploration of and navigation through 3-D image data provided by sensors like ToF (time-of-flight) cameras, MUSTOF (multisensor time-of-flight) endoscopes, or CT (computed tomography) [8] requires a user interface that avoids physical interaction with an input device. We therefore propose a touchless user interface based on gestures classified from the data provided by a ToF camera. Reasonable and necessary user interactions are described, and a suitable set of gestures is introduced for them. A user interface is then proposed which interprets the current gesture and performs the assigned functionality. To evaluate the quality of the developed user interface, we considered classification rate, real-time applicability, usability, intuitiveness, and training time. The results of our evaluation show that our system, which achieves a classification rate of 94.3% at a frame rate of 11 frames per second, satisfactorily addresses all of these quality requirements.
{"title":"3-D gesture-based scene navigation in medical imaging applications using Time-of-Flight cameras","authors":"S. Soutschek, J. Penne, J. Hornegger, J. Kornhuber","doi":"10.1109/CVPRW.2008.4563162","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563162","url":null,"abstract":"For a lot of applications, and particularly for medical intra-operative applications, the exploration of and navigation through 3-D image data provided by sensors like ToF (time-of-flight) cameras, MUSTOF (multisensor-time-of-flight) endoscopes or CT (computed tomography) [8], requires a user-interface which avoids physical interaction with an input device. Thus, we process a touchless user-interface based on gestures classified by the data provided by a ToF camera. Reasonable and necessary user interactions are described. For those interactions a suitable set of gestures is introduced. A user-interface is then proposed, which interprets the current gesture and performs the assigned functionality. For evaluating the quality of the developed user-interface we considered the aspects of classification rate, real-time applicability, usability, intuitiveness and training time. The results of our evaluation show that our system, which provides a classification rate of 94.3% at a framerate of 11 frames per second, satisfactorily addresses all these quality requirements.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114275672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel quality measure for information hiding in images
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562985
KA Navas, M. Aravind, M. Sasikumar
Objective quality assessment has been widely used in image processing for decades, and many researchers have studied objective quality assessment methods based on the human visual system (HVS). This paper presents a new measure that quantifies the perceptual degradation produced in an image using certain subjectively evaluated weighting functions. Experimental analysis carried out on different sets of images, for different levels of data hiding and under different attacks, shows that the new measure agrees closely with the subjective analysis measure.
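The paper's weighting functions are derived from subjective experiments and are not given in the abstract; the sketch below shows only the generic shape of an HVS-weighted degradation measure, with the weight map left as an input:

```python
import numpy as np

def weighted_distortion(orig, marked, w):
    """Generic HVS-weighted degradation: per-pixel squared error between
    the original and the data-hidden image, modulated by a perceptual
    weight map w in [0, 1] (the paper's subjectively evaluated weighting
    functions would supply w; here it is any caller-provided array)."""
    err = (orig.astype(float) - marked.astype(float)) ** 2
    return float(np.sum(w * err) / np.sum(w))
```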
{"title":"A novel quality measure for information hiding in images","authors":"KA Navas, M. Aravind, M. Sasikumar, Assitant","doi":"10.1109/CVPRW.2008.4562985","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562985","url":null,"abstract":"Objective quality assessment has been widely used in image processing for decades and many researchers have been studying the objective quality assessment method based on human visual system (HVS). This paper presents a new measure which denotes the perceptual degradation produced in an image using certain subjectively evaluated weighing functions. Experimental analysis when carried out on different sets of images for different levels of data hiding and under different attacks shows that this new measure shows a high degree of acceptance with the subjective analysis measure.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114499918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Can similar scenes help surface layout estimation?
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4562951
S. Divvala, Alexei A. Efros, M. Hebert
We describe a preliminary investigation of utilising large amounts of unlabelled image data to help in the estimation of rough scene layout. We take the single-view geometry estimation system of Hoiem et al. (2007) as the baseline and ask whether its performance can be improved by considering a set of similar scenes gathered from the Web. The two complementary approaches being considered are 1) improving surface classification by using average geometry estimated from the matches, and 2) improving surface segmentation by injecting segments generated from the average of the matched images. The system is evaluated on the labelled 300-image dataset of Hoiem et al. and shows promising results.
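A minimal sketch of the first approach, assuming per-pixel class-confidence maps for the query and its Web matches and a simple convex blend (the blend weight alpha is an illustrative assumption, not the paper's scheme):

```python
import numpy as np

def fuse_layout(single_view_conf, match_confs, alpha=0.5):
    """single_view_conf and each element of match_confs: arrays of shape
    (H, W, n_classes) of surface-label confidences. Average the matches'
    geometry and blend it with the single-view classifier's output."""
    avg_geometry = np.mean(np.stack(match_confs), axis=0)
    fused = (1 - alpha) * single_view_conf + alpha * avg_geometry
    return fused / fused.sum(axis=2, keepdims=True)   # renormalise
```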
{"title":"Can similar scenes help surface layout estimation?","authors":"S. Divvala, Alexei A. Efros, M. Hebert","doi":"10.1109/CVPRW.2008.4562951","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4562951","url":null,"abstract":"We describe a preliminary investigation of utilising large amounts of unlabelled image data to help in the estimation of rough scene layout. We take the single-view geometry estimation system of Hoiem et al (2207) as the baseline and see if it is possible to improve its performance by considering a set of similar scenes gathered from the Web. The two complimentary approaches being considered are 1) improving surface classification by using average geometry estimated from the matches, and 2) improving surface segmentation by injecting segments generated from the average of the matched images. The system is evaluated using the labelled 300-image dataset of Hoiem et al. and shows promising results.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116831449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A statistical framework for the registration of 3D knee implant components to single-plane X-ray images
Pub Date: 2008-06-23 | DOI: 10.1109/CVPRW.2008.4563004
Jeroen Hermans, J. Bellemans, F. Maes, D. Vandermeulen, P. Suetens
Registration of 3D knee implant components to single-plane X-ray image sequences provides insight into implanted knee kinematics. In this paper a maximum likelihood approach is proposed to align the pose-related occluding contour of an object with edge segments extracted from a single-plane X-ray image. This leads to an expectation-maximization algorithm which simultaneously determines the object's pose, estimates point correspondences, and rejects outlier points from the registration process. For (nearly) planar-symmetrical objects, the method is extended to simultaneously estimate the two symmetrical object poses that both align the corresponding occluding contours with 2D edge information. The algorithm's capacity to generate accurate pose estimates, and the necessity of determining both symmetrical poses when aligning (nearly) planar-symmetrical objects, are demonstrated in the context of automated registration of knee implant components to simulated and real single-plane X-ray images.
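The abstract describes an EM loop that jointly estimates soft point correspondences and an outlier class. A sketch of what that E-step typically looks like under a Gaussian edge-noise model (sigma and the uniform outlier density are assumptions; the M-step, not shown, would re-estimate the pose by correspondence-weighted least squares):

```python
import numpy as np

def e_step(contour_pts, edge_pts, sigma=2.0, outlier_density=1e-4):
    """Soft correspondences between projected occluding-contour points
    (M, 2) and extracted edge points (N, 2), with a uniform outlier
    class that lets edge points opt out of the registration."""
    d2 = ((edge_pts[:, None, :] - contour_pts[None, :, :]) ** 2).sum(-1)
    lik = np.exp(-d2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    denom = lik.sum(axis=1, keepdims=True) + outlier_density
    resp = lik / denom                    # P(edge i matches contour j)
    p_outlier = outlier_density / denom.ravel()
    return resp, p_outlier
```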
{"title":"A statistical framework for the registration of 3D knee implant components to single-plane X-ray images","authors":"Jeroen Hermans, J. Bellemans, F. Maes, D. Vandermeulen, P. Suetens","doi":"10.1109/CVPRW.2008.4563004","DOIUrl":"https://doi.org/10.1109/CVPRW.2008.4563004","url":null,"abstract":"Registration of 3D knee implant components to single-plane X-ray image sequences provides insight into implanted knee kinematics. In this paper a maximum likelihood approach is proposed to align the pose-related occluding contour of an object with edge segments extracted from a single-plane X-ray image. This leads to an expectation maximization algorithm which simultaneously determines the objectpsilas pose, estimates point correspondences and rejects outlier points from the registration process. Considering (nearly) planar-symmetrical objects, the method is extended in order to simultaneously estimate two symmetrical object poses which both align the corresponding occluding contours with 2D edge information. The algorithmpsilas capacity to generate accurate pose estimates and the necessity of determining both symmetrical poses when aligning (nearly) planar-symmetrical objects will be demonstrated in the context of automated registration of knee implant components to simulated and real single-plane X-ray images.","PeriodicalId":102206,"journal":{"name":"2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117078202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}