Pub Date: 2011-01-01. Epub Date: 2011-07-25. DOI: 10.3109/10929088.2011.597566
Thiago Oliveira-Santos, Bernd Klaeser, Thilo Weitzel, Thomas Krause, Lutz-Peter Nolte, Matthias Peterhans, Stefan Weber
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets: one dataset is used for target definition according to metabolism, while the other is used for instrument guidance according to anatomical structures. No navigation system is currently available that can handle such data and perform PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue, such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are rarely found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of the PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking clinical protocols and requirements into consideration, and the system is thus operable by a single person, even during the transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. Overall average errors of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation phase from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error reflected in the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
{"title":"A navigation system for percutaneous needle interventions based on PET/CT images: design, workflow and error analysis of soft tissue and bone punctures.","authors":"Thiago Oliveira-Santos, Bernd Klaeser, Thilo Weitzel, Thomas Krause, Lutz-Peter Nolte, Matthias Peterhans, Stefan Weber","doi":"10.3109/10929088.2011.597566","DOIUrl":"https://doi.org/10.3109/10929088.2011.597566","url":null,"abstract":"<p><p>Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. 
Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 5","pages":"203-19"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.597566","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29884708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
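To make the targeting error used in the study above concrete, the following minimal sketch (our illustration, not the authors' software) computes the per-puncture error as the Euclidean distance between the planned target and the recorded needle-tip position and averages it per tissue group; all coordinates and names are hypothetical and assumed to lie in a common CT reference frame, in millimeters.

```python
import numpy as np

def targeting_errors(planned_targets, needle_tips):
    """Per-puncture Euclidean error (mm) between planned target and needle tip."""
    planned_targets = np.asarray(planned_targets, dtype=float)
    needle_tips = np.asarray(needle_tips, dtype=float)
    return np.linalg.norm(needle_tips - planned_targets, axis=1)

# Hypothetical punctures: two in bone, two in soft tissue (coordinates in mm).
planned = np.array([[10.0, 5.0, 30.0], [12.0, 8.0, 28.0],
                    [40.0, 15.0, 60.0], [42.0, 18.0, 55.0]])
tips = np.array([[12.5, 6.0, 33.0], [14.0, 9.5, 30.5],
                 [41.5, 16.0, 62.0], [43.0, 19.5, 57.0]])
tissue = np.array(["bone", "bone", "soft", "soft"])

errors = targeting_errors(planned, tips)
for group in ("bone", "soft"):
    print(group, "mean error: %.2f mm" % errors[tissue == group].mean())
```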
Pub Date: 2011-01-01. DOI: 10.3109/10929088.2011.621092
Philip Catala-Lehnen, Jakob V Nüchtern, Daniel Briem, Thorsten Klink, Johannes M Rueger, Wolfgang Lehmann
Navigation in hand surgery is still under development. Initial studies have demonstrated the feasibility of 2D and 3D navigation for the palmar approach in scaphoid fractures, but a comparison of the possibilities of 2D and 3D navigation for the dorsal approach is still lacking. The aim of the present work was to test navigation for the dorsal approach to the scaphoid using cadaver bones. After developing a special radiolucent resting splint for the dorsal approach, we performed 2D- and 3D-navigated scaphoid osteosynthesis in 12 fresh-frozen cadaver forearms using a headless compression screw (Synthes). The operation time, radiation time, number of trials for screw insertion, and screw positions were analyzed. In the six 2D-navigated screw osteosyntheses, we found two false positions with an average radiation time of 5 ± 2 seconds. Using 3D navigation, we detected one false position. A false position indicates divergence from the ideal line of the axis of the scaphoid, but without penetration of the cortex. The initial scan clearly increased overall radiation time in the 3D-navigated group, and for both navigation procedures the operating time was longer than in our clinical experience without navigation. Nonetheless, 2D and 3D navigation for non-dislocated scaphoid fractures is feasible, and navigation might reduce the risk of choosing an incorrect screw length, thereby possibly avoiding injury to the subtending cortex. 3D navigation is more difficult to interpret than 2D fluoroscopic navigation, but shows greater precision. Overall, navigation is costly, and the moderate advantages it offers for osteosynthesis of scaphoid fractures must be weighed critically against conventional operating techniques.
{"title":"Comparison of 2D and 3D navigation techniques for percutaneous screw insertion into the scaphoid: results of an experimental cadaver study.","authors":"Philip Catala-Lehnen, Jakob V Nüchtern, Daniel Briem, Thorsten Klink, Johannes M Rueger, Wolfgang Lehmann","doi":"10.3109/10929088.2011.621092","DOIUrl":"https://doi.org/10.3109/10929088.2011.621092","url":null,"abstract":"<p><p>Navigation in hand surgery is still in the process of development. Initial studies have demonstrated the feasibility of 2D and 3D navigation for the palmar approach in scaphoid fractures, but a comparison of the possibilities of 2D and 3D navigation for the dorsal approach is still lacking. The aim of the present work was to test navigation for the dorsal approach in the scaphoid using cadaver bones. After development of a special radiolucent resting splint for the dorsal approach, we performed 2D- and 3D-navigated scaphoid osteosynthesis in 12 fresh-frozen cadaver forearms using a headless compression screw (Synthes). The operation time, radiation time, number of trials for screw insertion, and screw positions were analyzed. In six 2D-navigated screw osteosyntheses, we found two false positions with an average radiation time of 5 ± 2 seconds. Using 3D navigation, we detected one false position. A false position indicates divergence from the ideal line of the axis of the scaphoid but without penetration of the cortex. The initial scan clearly increased overall radiation time in the 3D-navigated group, and for both navigation procedures operating time was longer than in our clinical experience without navigation. Nonetheless, 2D and 3D navigation for non-dislocated scaphoid fractures is feasible, and navigation might reduce the risk of choosing an incorrect screw length, thereby possibly avoiding injury to the subtending cortex. The 3D navigation is more difficult to interpret than 2D fluoroscopic navigation but shows greater precision. Overall, navigation is costly, and the moderate advantages it offers for osteosynthesis of scaphoid fractures must be considered critically in comparisons with conventional operating techniques.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 6","pages":"280-7"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.621092","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30057927","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study was conducted to demonstrate the feasibility of three-dimensional (3D) reconstruction of extremity tumor regions for patient-specific preoperative assessment and planning by using CT and MRI image data fusion. The CT and MRI image data of five patients with solid tumors were fused to construct 3D models of the respective tumor regions. The reconstruction time and image fusion accuracy were measured, and the tumor features and spatial relationships were analyzed to enable subject-specific preoperative assessment and planning as guidance for tumor resection. The 3D models of the tumor regions, including skin, fat, bones, tumor, muscles, internal organs, nerves and vessels, were created with a mean reconstruction time of 103 minutes and a fusion accuracy of 2.02 mm. The 3D reconstruction clearly delineated the tumor features and provided a vivid view of spatial relationships within the tumor region. Based on this intuitive information, the subject-specific preoperative assessment and planning were easily accomplished, and all tumor resections were performed as planned preoperatively. Three-dimensional reconstruction using CT/MRI image fusion is feasible for accurate reproduction of the complex anatomy of the tumor region with high efficiency, and can help surgeons improve preoperative assessment and planning for effective removal of tumors.
{"title":"Three-dimensional reconstruction of extremity tumor regions by CT and MRI image data fusion for subject-specific preoperative assessment and planning.","authors":"Yuefu Dong, Yinghai Dong, Guanghong Hu, Qingrong Xu","doi":"10.3109/10929088.2011.602721","DOIUrl":"https://doi.org/10.3109/10929088.2011.602721","url":null,"abstract":"<p><p>This study was conducted to demonstrate the feasibility of three-dimensional (3D) reconstruction of extremity tumor regions for patient-specific preoperative assessment and planning by using CT and MRI image data fusion. The CT and MRI image data of five patients with solid tumors were fused to construct 3D models of the respective tumor regions. The reconstruction time and image fusion accuracy were measured, and the tumor features and spatial relationships were analyzed to enable subject-specific preoperative assessment and planning as guidance for tumor resection. The 3D models of the tumor regions, including skin, fat, bones, tumor, muscles, internal organs, nerves and vessels, were created with a mean reconstruction time of 103 minutes and fusion accuracy of 2.02 mm. The 3D reconstruction clearly delineated the tumor features, and provided a vivid view of spatial relationships within the tumor region. Based on this intuitional information, the subject-specific preoperative assessment and planning were easily accomplished, and all tumor resections were performed as planned preoperatively. Three-dimensional reconstruction using CT/MRI image fusion is feasible for accurate reproduction of the complex anatomy of the tumor region with high efficiency, and can help surgeons improve the preoperative assessment and planning for effective removal of tumors.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 5","pages":"220-33"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.602721","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29904335","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. Epub Date: 2011-06-13. DOI: 10.3109/10929088.2011.585805
A Wang, S M Mirsattari, A G Parrent, T M Peters
Objective: During epilepsy surgery it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires specialized displays, and its effectiveness in surgery still needs to be evaluated. The objective of this research was to develop an alternative approach that provides enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image guided epilepsy surgery.
Materials and methods: We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. The algorithm was validated by a phantom study and also in the clinical environment with a neuronavigation system.
Results: In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm with 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization for integrated image and functional guidance during neurosurgery is also presented.
Conclusions: This paper presents an alternative approach to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment. Moreover, it requires minimal changes to existing systems and workflow, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method can achieve a level of accuracy sufficient for the requirements of epilepsy surgery.
{"title":"Fusion and visualization of intraoperative cortical images with preoperative models for epilepsy surgical planning and guidance.","authors":"A Wang, S M Mirsattari, A G Parrent, T M Peters","doi":"10.3109/10929088.2011.585805","DOIUrl":"https://doi.org/10.3109/10929088.2011.585805","url":null,"abstract":"<p><strong>Objective: </strong>During epilepsy surgery it is important for the surgeon to correlate the preoperative cortical morphology (from preoperative images) with the intraoperative environment. Augmented Reality (AR) provides a solution for combining the real environment with virtual models. However, AR usually requires the use of specialized displays, and its effectiveness in the surgery still needs to be evaluated. The objective of this research was to develop an alternative approach to provide enhanced visualization by fusing a direct (photographic) view of the surgical field with the 3D patient model during image guided epilepsy surgery.</p><p><strong>Materials and methods: </strong>We correlated the preoperative plan with the intraoperative surgical scene, first by a manual landmark-based registration and then by an intensity-based perspective 3D-2D registration for camera pose estimation. The 2D photographic image was then texture-mapped onto the 3D preoperative model using the solved camera pose. In the proposed method, we employ direct volume rendering to obtain a perspective view of the brain image using GPU-accelerated ray-casting. The algorithm was validated by a phantom study and also in the clinical environment with a neuronavigation system.</p><p><strong>Results: </strong>In the phantom experiment, the 3D Mean Registration Error (MRE) was 2.43 ± 0.32 mm with a success rate of 100%. In the clinical experiment, the 3D MRE was 5.15 ± 0.49 mm with 2D in-plane error of 3.30 ± 1.41 mm. A clinical application of our fusion method for enhanced and augmented visualization for integrated image and functional guidance during neurosurgery is also presented.</p><p><strong>Conclusions: </strong>This paper presents an alternative approach to a sophisticated AR environment for assisting in epilepsy surgery, whereby a real intraoperative scene is mapped onto the surface model of the brain. In contrast to the AR approach, this method needs no specialized display equipment. Moreover, it requires minimal changes to existing systems and workflow, and is therefore well suited to the OR environment. In the phantom and in vivo clinical experiments, we demonstrate that the fusion method can achieve a level of accuracy sufficient for the requirements of epilepsy surgery.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 4","pages":"149-60"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.585805","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29932892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. DOI: 10.3109/10929088.2010.541620
Iqbal Singh
To assess the current state of robot-assisted urological surgery, the literature concerning surgical robotic systems, surgical telemanipulators and laparoscopic systems was reviewed. Aspects of these systems pertaining to maneuverability were evaluated, with a view to quantifying their stability and locomotive properties and thereby determining their suitability for use in assisted laparoscopic procedures, particularly robot-assisted laparoscopic urological surgery. The degree of maneuverability and versatility of a robotic system determine its utility in the operating room, and the newer-generation surgical robotic systems have been found to possess a higher degree of maneuverability than older class 1 and class 2 systems. It is now clearly established that robots have an important place in the urologist's armamentarium for minimally invasive surgery; however, the long-term outcomes of several urological procedures (other than robot-assisted radical prostatectomy) performed with the da Vinci surgical robotic system have yet to be evaluated.
{"title":"Robotics in urological surgery: review of current status and maneuverability, and comparison of robot-assisted and traditional laparoscopy.","authors":"Iqbal Singh","doi":"10.3109/10929088.2010.541620","DOIUrl":"https://doi.org/10.3109/10929088.2010.541620","url":null,"abstract":"<p><p>To assess the current state of robot-assisted urological surgery, the literature concerning surgical robotic systems, surgical telemanipulators and laparoscopic systems was reviewed. Aspects of these systems pertaining to maneuverability were evaluated, with a view to quantifying their stability and locomotive properties and thereby determining their suitability for use in assisted laparoscopic procedures, particularly robot-assisted laparoscopic urological surgery. The degree of maneuverability and versatility of a robotic system determine its utility in the operating room, and the newer-generation surgical robotic systems have been found to possess a higher degree of maneuverability than older class 1 and class 2 systems. It is now clearly established that robots have an important place in the urologist's armamentarium for minimally invasive surgery; however, the long-term outcomes of several urological procedures (other than robot-assisted radical prostatectomy) performed with the da Vinci surgical robotic system have yet to be evaluated.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 1","pages":"38-45"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2010.541620","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29569603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. DOI: 10.3109/10929088.2010.542694
M C Müller, P Belei, M De La Fuente, M Strake, O Weber, C Burger, K Radermacher, D C Wirtz
Accurate placement of cannulated screws is essential to ensure fixation of medial femoral neck fractures. The conventional technique may require multiple guide wire passes, and relies heavily on fluoroscopy. A computer-assisted planning and navigation system based on 2D fluoroscopy for guide wire placement in the femoral neck has been developed to improve screw placement. The planning process was supported by a tool that enables a virtual radiation-free preview of X-ray images. This is called "zero-dose C-arm navigation". For the evaluation of the system, six formalin-fixed cadaveric full-body specimens (12 femurs) were used. The evaluation demonstrated the feasibility of fluoroscopically navigated guide wire and implant placement. Use of the novel system resulted in a significant reduction in the number of fluoroscopic images and drilling attempts while achieving optimized accuracy by attaining better screw parallelism and enlarged neck-width coverage. Operation time was significantly longer in the navigation assisted group. The system has yielded promising initial results; however, additional studies using fractured bone models and with extension of the navigation process to track two bone fragments must be performed before integration of this navigation system into the clinical workflow is possible, and these studies should focus on reducing the operation time.
{"title":"Evaluation of a fluoroscopy-based navigation system enabling a virtual radiation-free preview of X-ray images for placement of cannulated hip screws. A cadaver study.","authors":"M C Müller, P Belei, M De La Fuente, M Strake, O Weber, C Burger, K Radermacher, D C Wirtz","doi":"10.3109/10929088.2010.542694","DOIUrl":"https://doi.org/10.3109/10929088.2010.542694","url":null,"abstract":"<p><p>Accurate placement of cannulated screws is essential to ensure fixation of medial femoral neck fractures. The conventional technique may require multiple guide wire passes, and relies heavily on fluoroscopy. A computer-assisted planning and navigation system based on 2D fluoroscopy for guide wire placement in the femoral neck has been developed to improve screw placement. The planning process was supported by a tool that enables a virtual radiation-free preview of X-ray images. This is called \"zero-dose C-arm navigation\". For the evaluation of the system, six formalin-fixed cadaveric full-body specimens (12 femurs) were used. The evaluation demonstrated the feasibility of fluoroscopically navigated guide wire and implant placement. Use of the novel system resulted in a significant reduction in the number of fluoroscopic images and drilling attempts while achieving optimized accuracy by attaining better screw parallelism and enlarged neck-width coverage. Operation time was significantly longer in the navigation assisted group. The system has yielded promising initial results; however, additional studies using fractured bone models and with extension of the navigation process to track two bone fragments must be performed before integration of this navigation system into the clinical workflow is possible, and these studies should focus on reducing the operation time.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 1","pages":"22-31"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2010.542694","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29569602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. Epub Date: 2010-12-08. DOI: 10.3109/10929088.2010.535317
Ho-Yeon Lee, Sang-Ho Lee, Hyeong Kweon Son, Jong Han Na, June Ho Lee, Oon Ki Baek, Chan Shik Shim
Objective: Multilevel Oblique Corpectomy (MOC) is an emerging technique for surgical treatment of multi-segmental cervical spondylotic myelopathy (CSM) featuring extensive ossification of the posterior longitudinal ligament (OPLL). However, the use of an oblique drilling plane is unfamiliar to most surgeons and there is no anatomical landmark present on the posterior portion of the vertebral body. To overcome these difficulties, the authors used intraoperative C-arm-based image guided navigation (IGN), and this study was conducted to evaluate the efficacy of IGN in MOC.
Methods: Following the introduction of IGN for MOC, 24 patients underwent MOC procedures at our institution. Two patients who had undergone previous cervical operations were excluded from the present study. Of the remaining 22 patients, 11 underwent MOC with IGN, and 11 underwent MOC without IGN support. The completeness of MOC (CMOC) was measured as the sum of the bilateral remaining posterior vertebral body minus the remaining approach-side anterior body, in millimeters, at the most compressive level. For each patient, the preoperative Japanese Orthopaedic Association Score (JOAS) and the JOAS on postoperative day 5 were collected, as well as several other perioperative parameters.
Results: The mean CMOC was 0.89 mm for the IGN group and 5.9 mm for the control group. The mean change in JOAS was 5.58 for the IGN group and 3.34 for the control group at 1-year follow-up. In the control group, two patients underwent re-exploration due to remaining OPLL. Despite the intraoperative IGN set-up time, the mean operation time for the IGN group was shorter than that for the control group (248 min versus 259 min). The mean number of treated levels was 3.55 for the IGN group and 3.36 for the control group.
Conclusion: Through the use of image guided navigation, it was possible to accomplish faster and more complete MOC.
{"title":"Comparison of multilevel oblique corpectomy with and without image guided navigation for multi-segmental cervical spondylotic myelopathy.","authors":"Ho-Yeon Lee, Sang-Ho Lee, Hyeong Kweon Son, Jong Han Na, June Ho Lee, Oon Ki Baek, Chan Shik Shim","doi":"10.3109/10929088.2010.535317","DOIUrl":"https://doi.org/10.3109/10929088.2010.535317","url":null,"abstract":"<p><strong>Objective: </strong>Multilevel Oblique Corpectomy (MOC) is an emerging technique for surgical treatment of multi-segmental cervical spondylotic myelopathy (CSM) featuring extensive ossification of the posterior longitudinal ligament (OPLL). However, the use of an oblique drilling plane is unfamiliar to most surgeons and there is no anatomical landmark present on the posterior portion of the vertebral body. To overcome these difficulties, the authors used intraoperative C-arm-based image guided navigation (IGN), and this study was conducted to evaluate the efficacy of IGN in MOC.</p><p><strong>Methods: </strong>Following the introduction of IGN for MOC, 24 patients underwent MOC procedures at our institution. Two patients who had undergone previous cervical operations were excluded from the present study. Of the remaining 22 patients, 11 underwent MOC with IGN, and 11 underwent MOC without IGN support. The completeness of MOC (CMOC) is measured as the sum of the bilateral remaining posterior body minus the remaining approach-side anterior body in millimeters at the most compressive level. For each patient, the preoperative Japanese Orthopaedic Association Score (JOAS) and postoperative 5th day JOAS were collected as well as several other perioperative parameters.</p><p><strong>Results: </strong>The mean CMOC was 0.89 mm for the IGN group and 5.9 mm for the control group. The mean change in JOAS was 5.58 for the IGN group and 3.34 for the control group at 1-year follow-up. In the control group, two patients underwent re-exploration due to remaining OPLL. Despite the intraoperative IGN set-up time, the mean operation time for the IGN group was shorter than that for the control group (248 min versus 259 min). Mean treated levels were 3.55 for the IGN group and 3.36 for the control group.</p><p><strong>Conclusion: </strong>Through the use of image guided navigation, it was possible to accomplish faster and more complete MOC.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 1","pages":"32-7"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2010.535317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29522246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. Epub Date: 2011-06-01. DOI: 10.3109/10929088.2011.579791
Man Ning Wang, Zhi Jian Song
Objective: Surface matching is a relatively new method of spatial registration in neuronavigation. Compared to the traditional point matching method, surface matching does not use fiducial markers that must be fixed to the surface of the head before image scanning, and therefore does not require an image acquisition specifically dedicated for navigation purposes. However, surface matching is not widely used clinically, mainly because there is still insufficient knowledge about its application accuracy. This study aimed to explore the properties of the Target Registration Error (TRE) of surface matching in neuronavigation.
Materials and methods: The surface matching process was simulated in the image space of a neuronavigation system so that the TRE could be calculated at any point in that space. For each registration, two point clouds were generated to represent the surface extracted from preoperative images (PC(image)) and the surface obtained intraoperatively by laser scanning (PC(laser)). The properties of the TRE were studied by performing multiple registrations with PC(laser) point clouds at different positions and generated by adding different types of error.
Results: For each registration, the TRE had a minimal value at a point in the image space, and the iso-valued surface of the TRE was approximately ellipsoidal, with smaller TRE on the inner surfaces. The position of the point with minimal TRE and the shape of the iso-valued surface were highly random across different registrations, and the surface registration error between the two point clouds was not indicative of the TRE at a specific point. The overall TRE tended to increase as the errors in PC(laser) increased, and a larger PC(laser) made the TRE less sensitive to these errors. With the introduction of errors in PC(laser), the points with minimal TRE tended to be concentrated in the anterior and inferior part of the head.
Conclusion: The results indicate that the alignment between the two surfaces cannot provide reliable information about the registration accuracy at an arbitrary target point. However, given the spatial distribution of the TRE for a single registration, sufficient application accuracy can be ensured by proper visual verification after registration. In addition, surface matching tends to achieve high accuracy in the inferior and anterior part of the head, and a relatively large scanning area is preferable.
{"title":"Properties of the target registration error for surface matching in neuronavigation.","authors":"Man Ning Wang, Zhi Jian Song","doi":"10.3109/10929088.2011.579791","DOIUrl":"https://doi.org/10.3109/10929088.2011.579791","url":null,"abstract":"<p><strong>Objective: </strong>Surface matching is a relatively new method of spatial registration in neuronavigation. Compared to the traditional point matching method, surface matching does not use fiducial markers that must be fixed to the surface of the head before image scanning, and therefore does not require an image acquisition specifically dedicated for navigation purposes. However, surface matching is not widely used clinically, mainly because there is still insufficient knowledge about its application accuracy. This study aimed to explore the properties of the Target Registration Error (TRE) of surface matching in neuronavigation.</p><p><strong>Materials and methods: </strong>The surface matching process was simulated in the image space of a neuronavigation system so that the TRE could be calculated at any point in that space. For each registration, two point clouds were generated to represent the surface extracted from preoperative images (PC(image)) and the surface obtained intraoperatively by laser scanning (PC(laser)). The properties of the TRE were studied by performing multiple registrations with PC(laser) point clouds at different positions and generated by adding different types of error.</p><p><strong>Results: </strong>For each registration, the TRE had a minimal value at a point in the image space, and the iso-valued surface of the TRE was approximately ellipsoid with smaller TRE on the inner surfaces. The position of the point with minimal TRE and the shape of the iso-valued surface were highly random across different registrations, and the surface registration error between the two point clouds was irrelevant to the TRE at a specific point. The overall TRE tended to increase with the increase in errors in PC(laser), and a larger PC(laser) made it less sensitive to these errors. With the introduction of errors in PC(laser), the points with minimal TRE tended to be concentrated in the anterior and inferior part of the head.</p><p><strong>Conclusion: </strong>The results indicate that the alignment between the two surfaces could not provide reliable information about the registration accuracy at an arbitrary target point. However, according to the spatial distribution of the target registration error of a single registration, enough application accuracy could be guaranteed by proper visual verification after registration. In addition, surface matching tends to achieve high accuracy in the inferior and anterior part of the head, and a relatively large scanning area is preferable.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 4","pages":"161-9"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.579791","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30209324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2011-01-01. Epub Date: 2011-01-10. DOI: 10.3109/10929088.2010.546076
Daniel Briem, Andreas H Ruecker, Joerg Neumann, Matthias Gebauer, Daniel Kendoff, Thorsten Gehrke, Wolfgang Lehmann, Udo Schumacher, Johannes M Rueger, Lars G Grossterlinden
Survival rates for total shoulder arthroplasty are critically dependent on the correct placement of the glenoid component. Especially in osteoarthritis, pathological version of the glenoid occurs frequently and has to be corrected surgically by eccentric reaming of the glenoid brim. The aim of our study was to evaluate whether eccentric reaming of the glenoid can be achieved more accurately by a novel computer assisted technique. Procedures were conducted on 10 paired human cadaveric specimens presenting glenoids with neutral version. To identify the correction potential of the navigated technique compared to the standard procedure, asymmetric reaming of the glenoid to create a version of -10° was defined as the target. In the navigated group, asymmetric reaming was guided by a 3D fluoroscopic technique. Postoperative 3D scans revealed greater accuracy for the eccentric reaming procedure in the navigated group compared to the freehand group, resulting in glenoid version of -9.8 ± 3.8° and -5.1 ± 4.1°, respectively (p < 0.05). Furthermore, deviation from preoperative planning was significantly reduced in the navigated group. These data indicate that our navigated procedure offers an excellent tool for supporting glenoid replacement in TSA.
{"title":"3D fluoroscopic navigated reaming of the glenoid for total shoulder arthroplasty (TSA).","authors":"Daniel Briem, Andreas H Ruecker, Joerg Neumann, Matthias Gebauer, Daniel Kendoff, Thorsten Gehrke, Wolfgang Lehmann, Udo Schumacher, Johannes M Rueger, Lars G Grossterlinden","doi":"10.3109/10929088.2010.546076","DOIUrl":"https://doi.org/10.3109/10929088.2010.546076","url":null,"abstract":"<p><p>Survival rates for total shoulder arthroplasty are critically dependent on the correct placement of the glenoid component. Especially in osteoarthritis, pathological version of the glenoid occurs frequently and has to be corrected surgically by eccentric reaming of the glenoid brim. The aim of our study was to evaluate whether eccentric reaming of the glenoid can be achieved more accurately by a novel computer assisted technique. Procedures were conducted on 10 paired human cadaveric specimens presenting glenoids with neutral version. To identify the correction potential of the navigated technique compared to the standard procedure, asymmetric reaming of the glenoid to create a version of -10° was defined as the target. In the navigated group, asymmetric reaming was guided by a 3D fluoroscopic technique. Postoperative 3D scans revealed greater accuracy for the eccentric reaming procedure in the navigated group compared to the freehand group, resulting in glenoid version of -9.8 ± 3.8° and -5.1 ± 4.1°, respectively (p < 0.05). Furthermore, deviation from preoperative planning was significantly reduced in the navigated group. These data indicate that our navigated procedure offers an excellent tool for supporting glenoid replacement in TSA.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 2","pages":"93-9"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2010.546076","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"29586174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Objective: The aim of this study was to evaluate the accuracy of a novel 3-dimensional (3D) fluoroscopic navigation system using a flat-panel detector-equipped C-arm, focusing on the influence of the distance from the center of fluoroscopic imaging on navigation accuracy.
Materials and methods: A geometric phantom was made using a Styrofoam cube with 25 markers, each consisting of a metal ball 1.5 mm in diameter, fixed in a cross arrangement at 1-cm intervals. Hip joint surgery was simulated using a set of dry pelvic and femoral bones. A total of eight markers were fixed to the acetabulum and proximal femur.
Results: In the geometric phantom study, mean target registration error (TRE) was 0.7 mm (range: 0.1-1.5). The TRE of markers located at 5 cm from the imaging center was significantly higher than the TRE of markers located at 1 and 2 cm. However, the TRE was <1 mm in 90% of the overall trials and <1.5 mm in 100%. In the dry bone study, the mean TRE was 0.9 mm (range: 0.7-1.5) over the acetabulum and 1.0 mm (range: 0.5-1.4) over the femur. No significant difference in TRE was seen between the acetabulum and proximal femur.
Conclusion: The accuracy of this novel 3D fluoroscopic navigation system was considered acceptable for clinical application. A 3D C-arm equipped with a flat-panel detector could increase the feasibility of 3D fluoroscopic navigation by reducing the effects of image distortion on navigation accuracy.
{"title":"Accuracy of a 3D fluoroscopic navigation system using a flat-panel detector-equipped C-arm.","authors":"Masaki Takao, Kentaro Yabuta, Takashi Nishii, Takashi Sakai, Nobuhiko Sugano","doi":"10.3109/10929088.2011.602117","DOIUrl":"https://doi.org/10.3109/10929088.2011.602117","url":null,"abstract":"<p><strong>Objective: </strong>The aim of this study was to evaluate the accuracy of a novel 3-dimensional (3D) fluoroscopic navigation system using a flat-panel detector-equipped C-arm, focusing on the influence of the distance from the center of fluoroscopic imaging on navigation accuracy.</p><p><strong>Materials and methods: </strong>A geometric phantom was made using a Styrofoam cube with 25 markers, each consisting of a metal ball 1.5 mm in diameter, fixed in a cross arrangement at 1-cm intervals. Hip joint surgery was simulated using a set of dry pelvic and femoral bones. A total of eight markers were fixed to the acetabulum and proximal femur.</p><p><strong>Results: </strong>In the geometric phantom study, mean target registration error (TRE) was 0.7 mm (range: 0.1-1.5). The TRE of markers located at 5 cm from the imaging center was significantly higher than the TRE of markers located at 1 and 2 cm. However, the TRE was <1 mm in 90% of the overall trials and <1.5 mm in 100%. In the dry bone study, the mean TRE was 0.9 mm (range: 0.7-1.5) over the acetabulum and 1.0 mm (range: 0.5-1.4) over the femur. No significant difference in TRE was seen between the acetabulum and proximal femur.</p><p><strong>Conclusion: </strong>The accuracy of this novel 3D fluoroscopic navigation system was considered acceptable for clinical application. A 3D C-arm equipped with a flat-panel detector could increase the feasibility of 3D fluoroscopic navigation by reducing the effects of image distortion on navigation accuracy.</p>","PeriodicalId":50644,"journal":{"name":"Computer Aided Surgery","volume":"16 5","pages":"234-9"},"PeriodicalIF":0.0,"publicationDate":"2011-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.3109/10929088.2011.602117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"30047639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}