This study, conducted at a fully online Spanish higher education institution, documents the validation of a bespoke quality assessment tool designed to measure the susceptibility of formative assignments to AI-enabled academic misconduct. The research explored the impact of Generative AI (GenAI) technologies on the Humanities. The study consisted of four stages: the design of a rubric (Stages 1–3) and its large-scale validation (Stage 4) through a field test in the Translation and Interpreting Studies Bachelor's Degree. This paper presents the Stage 4 results, in which lecturers (n = 29), following a bottom-up approach, voluntarily applied the tool in their teaching contexts and analysed assignments (n = 151) with the rubric, revealing significant vulnerabilities in assessments that GenAI can easily complete or that lack originality and collaboration. The findings informed AI-integrated assessment designs that encourage complexity, creativity, and ethical engagement. The study outlines effective GenAI practices in assessment design and highlights innovative methods for safeguarding academic integrity in higher education.