Organ Localization Using Joint AP/LAT View Landmark Consensus Detection and Hierarchical Active Appearance Models

Qi Song, Albert Montillo, Roshni Bhagalia, V. Srikrishnan

DOI: 10.1007/978-3-319-05530-5_14
Published in: Medical Computer Vision: Large Data in Medical Imaging, Third International MICCAI Workshop, MCV 2013, Nagoya, Japan, September 26, 2013, Revised Selected Papers, vol. 8331, pp. 138-147
Publication date: 2014-01-01 (ePub: 2014-04-01)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6947663/pdf/nihms-1063776.pdf
Citations: 5
Abstract
Parsing 2D radiographs into anatomical regions is a challenging task with many applications. In the clinic, scans routinely include anterior-posterior (AP) and lateral (LAT) view radiographs. Since these orthogonal views provide complementary anatomic information, an integrated analysis can afford the greatest localization accuracy. To achieve this integration, we propose automatic landmark candidate detection, pruned by a learned geometric consensus detector model and refined by fitting a hierarchical active appearance organ model (H-AAM). Our main contribution is twofold. First, we propose a probabilistic joint consensus detection model which learns how landmarks in either or both views predict landmark locations in a given view. Second, we refine landmarks by fitting a joint H-AAM that learns how landmark arrangement and image appearance can help predict across views. This increases accuracy and robustness to anatomic variation. All steps require only seconds to compute, and compared to processing the scout views separately, joint processing reduces mean landmark distance error from 27.3 mm to 15.7 mm in the LAT view and from 12.7 mm to 11.2 mm in the AP view. The errors are comparable to human expert inter-observer variability and suitable for clinical applications such as personalized scan planning for dose reduction. We assess our method using a database of scout CT scans from 93 subjects with widely varying pathology.
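To make the consensus-pruning idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation, which is probabilistic and operates jointly across AP and LAT views): each detected candidate landmark "votes" for the expected positions of the other landmarks via learned mean pairwise offsets, and a candidate that falls far from the consensus of votes for its location is discarded. The function names, the offset table, and the tolerance are illustrative assumptions.

```python
# Hypothetical sketch of geometric consensus pruning of landmark candidates.
# Each candidate votes for other landmarks' positions via learned mean
# offsets; candidates inconsistent with the consensus vote are pruned.
import numpy as np

def prune_by_consensus(candidates, mean_offsets, tol=20.0):
    """candidates: dict landmark name -> (x, y) candidate position (mm).
    mean_offsets: dict (src, dst) -> learned mean 2D offset src -> dst.
    tol: maximum allowed distance (mm) from the consensus prediction.
    Returns the subset of candidates consistent with the geometric model."""
    kept = {}
    for dst, pos in candidates.items():
        # Collect position predictions ("votes") for dst from all other candidates.
        votes = []
        for src, src_pos in candidates.items():
            if src == dst:
                continue
            off = mean_offsets.get((src, dst))
            if off is not None:
                votes.append(np.asarray(src_pos, float) + np.asarray(off, float))
        if not votes:
            kept[dst] = pos  # no geometric evidence either way; keep it
            continue
        consensus = np.mean(votes, axis=0)
        if np.linalg.norm(np.asarray(pos, float) - consensus) <= tol:
            kept[dst] = pos
    return kept

# Toy usage: candidate C is a gross outlier and gets pruned.
offsets = {("A", "B"): (10, 0), ("B", "A"): (-10, 0),
           ("A", "C"): (0, 10), ("B", "C"): (-10, 10)}
cands = {"A": (0, 0), "B": (10, 0), "C": (100, 100)}
print(sorted(prune_by_consensus(cands, offsets)))  # ['A', 'B']
```

In the paper's setting the offsets would be learned across both views, so that a confidently detected AP landmark can also constrain LAT landmark locations; a simple average of votes is used here purely for illustration.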