Automated keyword extraction is widely used for tasks such as classification and summarization, but generic methods often fail to address domain-specific requirements. In education, texts are written to help students grasp and retain the key concepts they need to solve exercises and answer questions. Despite the variety of existing keyword extraction algorithms, none are specifically adapted to the unique structure and purpose of educational materials such as textbooks or lecture notes. Supervised methods have demonstrated their effectiveness in various domains through advanced techniques such as contextual embeddings and domain-specific fine-tuning. Our study proposes a novel solution that leverages pretrained transformer models, specifically BERT, adapted to the structure of educational materials for effective keyword extraction. We demonstrate that by fine-tuning BERT models on the specific characteristics of educational texts, we can achieve more accurate and relevant keyword extraction. YodkW, our adapted model, outperforms traditional algorithms in identifying the key concepts that are essential for educational purposes. Performance is quantified using the F1 score against the key-term lists provided in textbooks. Preliminary results demonstrate that our approach improves the identification of key concepts pertinent to student understanding and facilitates the automatic generation of test questions.
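As a concrete illustration of the two components named above, the sketch below frames keyword extraction as BIO token classification with a pretrained BERT model and scores extracted terms against a textbook key-term list using the F1 measure. This is a minimal sketch under assumptions, not the paper's implementation: the bert-base-uncased checkpoint, the BIO label set, the example sentence, and both term sets are hypothetical choices introduced here for illustration.

```python
# Hedged sketch: keyword extraction as BIO token classification with BERT,
# plus exact-match F1 against a textbook key-term list. All names and data
# below are illustrative assumptions, not the authors' actual setup.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-KEY", "I-KEY"]  # outside / begin-keyword / inside-keyword
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

sentence = "Photosynthesis converts light energy into chemical energy."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, num_labels)

# After fine-tuning on annotated educational text, the argmax over the label
# dimension gives one BIO tag per word piece; contiguous B/I spans form the
# extracted keywords. (The freshly initialized head here yields arbitrary tags.)
tags = [labels[i] for i in logits.argmax(dim=-1)[0].tolist()]

# Evaluation: exact-match F1 of extracted terms against the textbook's
# key-term list. Both sets below are made-up examples.
predicted = {"photosynthesis", "light energy", "chlorophyll"}
gold = {"photosynthesis", "light energy", "chemical energy"}
tp = len(predicted & gold)
precision = tp / len(predicted)
recall = tp / len(gold)
f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
# -> precision=0.67 recall=0.67 F1=0.67
```

Exact string matching is only one plausible scoring choice; matching stemmed or lemmatized terms would relax it, but the principle of comparing predicted spans to the textbook's own key-term list is the same.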