In infrastructure construction, engineering drawings combine graphical and textual information, and text plays a critical role in retrieving drawings and measuring their similarity in practical applications. However, existing research focuses primarily on graphics, neglecting the extraction and semantic representation of text. Optical Character Recognition (OCR)-based methods struggle to cluster text into coherent semantic modules, frequently dispersing related text across different regions. Therefore, this paper proposes a deep learning framework for the semantic extraction of text from engineering drawings. By integrating textual, positional, and image features, the framework extracts semantics and represents engineering drawings as knowledge graphs. An interactive attention-based approach then performs associative retrieval of engineering drawings via subgraph matching. Evaluation on datasets from a transportation design institute and from public sources demonstrates the framework's effectiveness in both semantic extraction and relational reasoning.