Assessment of psychological constructs, such as the Big Five personality traits, has predominantly relied on standardized rating scales. While these scales have advantages, we propose that descriptive word-based responses analyzed with natural language processing (NLP) offer a promising alternative for assessing personality traits. We asked participants (N = 663) to describe either their own personality or a person high in one of the Big Five traits using five words. These responses were then analyzed with large language models, namely BERT and GPT-4, which are known for their high-performance NLP capabilities. The primary aim was to assess the validity of word-based responses analyzed by NLP against the IPIP-NEO-30, a commonly used rating scale for measuring the Big Five traits. Results showed that descriptive word responses achieved prediction accuracy up to 10% higher, on average, than the rating scale in classifying the Big Five traits. Additionally, semantic measures showed higher inter-rater reliability, and observer convergence was greater for assessments of others than for self-reports. These findings suggest that descriptive word-based responses may capture broader and more observable aspects of personality than traditional rating scales.
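To make the semantic approach concrete, the sketch below shows one way five-word descriptions could be scored against the Big Five using a BERT-style sentence encoder and cosine similarity. It is a minimal illustration, not the authors' actual pipeline: the encoder name, the trait-marker phrases, and the example response are all assumptions introduced for illustration.

```python
# Illustrative sketch (not the study's exact method): embed a participant's
# five descriptive words and short Big Five marker phrases with a BERT-based
# sentence encoder, then score each trait by cosine similarity.
from sentence_transformers import SentenceTransformer, util

# Hypothetical marker phrases for each Big Five trait (assumed, for illustration).
TRAIT_MARKERS = {
    "Openness": "curious, imaginative, open to new experiences",
    "Conscientiousness": "organized, responsible, hardworking",
    "Extraversion": "outgoing, sociable, energetic",
    "Agreeableness": "kind, cooperative, compassionate",
    "Neuroticism": "anxious, moody, easily stressed",
}

# Any BERT-style sentence encoder could be used; this model name is an assumption.
model = SentenceTransformer("all-MiniLM-L6-v2")

def score_traits(five_words: list[str]) -> dict[str, float]:
    """Return a cosine-similarity score per trait for one participant's five words."""
    response_embedding = model.encode(", ".join(five_words), convert_to_tensor=True)
    scores = {}
    for trait, markers in TRAIT_MARKERS.items():
        marker_embedding = model.encode(markers, convert_to_tensor=True)
        scores[trait] = float(util.cos_sim(response_embedding, marker_embedding))
    return scores

# Hypothetical self-description by one participant.
print(score_traits(["talkative", "cheerful", "spontaneous", "warm", "curious"]))
```

In a validation setting such as the one described above, scores like these would then be compared against rating-scale criteria (e.g., the IPIP-NEO-30) to estimate prediction accuracy and reliability.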