Parsing 2D radiographs into anatomical regions is a challenging task with many applications. In the clinic, CT examinations routinely include anterior-posterior (AP) and lateral (LAT) scout radiographs. Since these orthogonal views provide complementary anatomic information, an integrated analysis can afford the greatest localization accuracy. To achieve this integration, we propose automatic landmark candidate detection, pruned by a learned geometric consensus detector model and refined by fitting a hierarchical active appearance organ model (H-AAM). Our main contribution is twofold. First, we propose a probabilistic joint consensus detection model that learns how landmarks in either or both views predict landmark locations in a given view. Second, we refine landmarks by fitting a joint H-AAM that learns how landmark arrangement and image appearance help predict landmark locations across views. This increases accuracy and robustness to anatomic variation. All steps require only seconds to compute. Compared to processing the scout views separately, joint processing reduces the mean landmark distance error from 27.3 mm to 15.7 mm in the LAT view and from 12.7 mm to 11.2 mm in the AP view. The errors are comparable to human expert inter-observer variability and suitable for clinical applications such as personalized scan planning for dose reduction. We assess our method using a database of scout CT scans from 93 subjects with widely varying pathology.
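To make the consensus-pruning idea concrete, the following is a minimal sketch, not the authors' implementation: assuming each already-detected predictor landmark (from the same view, or transferred from the other view along the shared longitudinal axis) contributes a learned Gaussian offset distribution toward the target landmark, candidate positions in the target view can be ranked by their summed log-likelihood under those predictions. The names `consensus_score`, `offset_means`, and `offset_vars`, and the toy numbers, are hypothetical.

```python
import numpy as np

def consensus_score(candidates, predictors, offset_means, offset_vars):
    """Score candidates for one target landmark in one view (illustrative sketch).

    candidates   : (C, 2) array of candidate (x, y) positions in the target view.
    predictors   : dict of predictor-landmark id -> detected (x, y) position,
                   either in the same view or transferred from the other view.
    offset_means : dict of predictor id -> learned mean offset to the target.
    offset_vars  : dict of predictor id -> learned isotropic offset variance.
    Returns a (C,) array of summed Gaussian log-likelihood consensus scores.
    """
    scores = np.zeros(len(candidates))
    for pid, pos in predictors.items():
        mu = np.asarray(pos) + np.asarray(offset_means[pid])   # predicted target position
        var = offset_vars[pid]
        d2 = np.sum((candidates - mu) ** 2, axis=1)
        scores += -0.5 * d2 / var - np.log(2.0 * np.pi * var)  # 2D isotropic Gaussian log-pdf
    return scores

# Toy usage: three candidates for one landmark, two predictor landmarks.
cands = np.array([[100.0, 210.0], [180.0, 205.0], [102.0, 214.0]])
preds = {"L_ap": (90.0, 200.0), "L_lat": (95.0, 260.0)}
means = {"L_ap": (10.0, 12.0), "L_lat": (8.0, -48.0)}
vars_ = {"L_ap": 25.0, "L_lat": 36.0}
best = cands[np.argmax(consensus_score(cands, preds, means, vars_))]
```

In this sketch, a candidate far from the cross-view consensus (the second one) receives a low score and would be pruned before the H-AAM refinement stage.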