Accurate histopathological evaluation of pancreatic ductal adenocarcinoma (PDAC), including primary tumor lesions and lymph node metastases, is critical for prognostic assessment and personalized therapeutic strategies. Distinct from other solid tumors, PDAC poses unique diagnostic challenges owing to its extensive desmoplasia, indistinct tumor boundaries, and difficulty in differentiation from chronic pancreatitis. These characteristics not only complicate pathological diagnosis but also hinder the acquisition of the pixel-level annotations required to train computational pathology models. Here, we present PANseg, a multi-scale weakly supervised deep learning framework for PDAC segmentation, trained and tested on 368 whole-slide images (WSIs) from 192 patients across two independent centers. Using only image-level labels on 2,048×2,048-pixel patches, PANseg achieved performance comparable to a fully supervised baseline (FSB) on internal test set 1 (17 patients/58 WSIs; PANseg AUROC 0.969 vs FSB AUROC 0.968), internal test set 2 (40 patients/44 WSIs; PANseg AUROC 0.991 vs FSB AUROC 0.980), and the external test set (20 patients/20 WSIs; PANseg AUROC 0.950 vs FSB AUROC 0.958). Moreover, the model demonstrated considerable generalizability to previously unseen sample types, attaining AUROCs of 0.878 on fresh-frozen specimens (20 patients/20 WSIs) and 0.821 on biopsy sections (20 patients/20 WSIs). In lymph node metastasis detection, PANseg improved the diagnostic accuracy of six pathologists from 0.888 to 0.961 while reducing average diagnostic time by 32.6% (72.0 vs 48.5 minutes). This study demonstrates that our weakly supervised model can achieve expert-level segmentation performance while substantially reducing the annotation burden. Clinical implementation of PANseg holds great potential for enhancing diagnostic precision and workflow efficiency in routine histopathological assessment of PDAC.