Syntactic features such as dependency and constituency structure are well known to improve Semantic Role Labeling (SRL), yet existing syntax-aware SRL models remain largely opaque. Prior explainable NLP methods focus primarily on token-level representations and do not quantify how syntactic relations contribute to individual SRL predictions. This limitation is particularly concerning in high-stakes domains such as biomedicine, where interpretability is essential for building trust in NLP-assisted analysis. In this work, we propose a post-hoc explanation framework that targets pairwise syntactic relations and quantifies their influence on SRL predictions, measuring both the magnitude and polarity of their effects on argument span predictions. The framework employs a controlled representation perturbation that enables relation-specific importance analysis without modifying tokens or altering the model architecture. We further address the challenge of evaluating explanation quality in the absence of human-annotated importance labels by introducing a set of model-grounded diagnostic tests. These tests assess whether the explanation scores systematically align with the model's behavioral responses to syntactic perturbations, rather than relying on correlation with external judgments. To facilitate controlled experimentation, we also construct a lightweight syntax-aware SRL model via confidence-weighted task-vector merging, which requires neither runtime parsing nor annotated syntax at inference time. While auxiliary to the explanation framework, this model achieves competitive performance on biomedical text at reduced computational cost. Experiments on biomedical- and general-domain data demonstrate consistent behavioral patterns across domains, supporting the utility of the proposed explanations for analyzing how SRL models leverage syntactic structure.
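To make the relation-level perturbation idea concrete, the sketch below shows one way a signed importance score could be computed: the change in the model's probability for an argument span when the representation component tied to a single syntactic relation is suppressed. This is only an illustration, not the paper's actual implementation; the model interface, the `relation_mask` argument, and the `span_probs` output field are all assumptions.

```python
import torch

def relation_importance(model, inputs, span, relation_id):
    """Signed importance of one pairwise syntactic relation for one argument span.

    A positive score means the relation supports the span prediction;
    a negative score means it suppresses it. `model` is assumed to expose
    a `relation_mask` argument that zeroes out the representation
    component associated with `relation_id` (hypothetical interface).
    """
    model.eval()
    with torch.no_grad():
        # Span probability with the full, unperturbed representation.
        p_full = model(**inputs).span_probs[span]
        # Same input, but with the target relation's contribution removed
        # from the pairwise syntactic representation (no token edits,
        # no architecture changes).
        p_masked = model(**inputs, relation_mask=[relation_id]).span_probs[span]
    # Magnitude = |score|; polarity = sign(score).
    return (p_full - p_masked).item()
```

Iterating this score over all relations in a sentence would yield the per-prediction importance profile that the diagnostic tests are meant to validate.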
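The confidence-weighted task-vector merge can likewise be sketched under standard task-arithmetic assumptions: each expert checkpoint contributes a task vector (its parameters minus the base model's), and the merged model adds a confidence-weighted sum of these vectors to the base. The function names and the form of the confidence weights below are placeholders, not the paper's specification.

```python
import torch

def merge_task_vectors(base_state, expert_states, confidences):
    """Confidence-weighted task-vector merging (minimal sketch).

    base_state / expert_states: parameter dicts mapping names to tensors
    with identical shapes. confidences: one non-negative weight per expert.
    The merged model needs no parser or syntactic annotation at inference.
    """
    total = sum(confidences)
    weights = [c / total for c in confidences]  # normalize confidences
    merged = {}
    for name, base_param in base_state.items():
        # Weighted sum of task vectors (expert - base) for this parameter.
        delta = sum(
            w * (expert[name] - base_param)
            for w, expert in zip(weights, expert_states)
        )
        merged[name] = base_param + delta
    return merged
```

Under this scheme, merging, say, an SRL expert with a syntax-tuned expert folds syntactic knowledge into a single set of weights, which is what removes the need for runtime parsing.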