Traditional bilingual conceptual representation models assume that semantic overlap between translation equivalents varies with the number of shared semantic features. This study proposes an alternative hypothesis: cross-linguistic context consistency drives the variability in semantic overlap in bilingual conceptual representation. To test this hypothesis, we introduce an unsupervised Semantic Alignment Model (SAM) that quantifies the contextual congruency of translation and non-translation pairs across two languages. In Experiment 1, a translation recognition task demonstrates the non-holistic nature of bilingual conceptual representation and supports the three assumptions of the Revised Hierarchical Model. A subsequent computational simulation experiment validates SAM's cognitive plausibility: it mirrors the concreteness effect on translation priming observed in forward translation recognition, revealing that language statistics alone suffice to account for the bilingual concreteness advantage. Finally, two ad hoc analyses in Experiment 3 show that higher semantic alignment scores predict both greater processing efficiency and higher similarity ratings for translation pairs, with alignment surpassing concreteness in predicting bilinguals' chronometric performance. Crucially, SAM's predictions capture the developmental trajectory in translation recognition that concreteness cannot. These findings challenge the adequacy of purely feature-based models of bilingual conceptual representation, which lack explicit mechanisms for how semantic features are learned and represented in the first place. Instead, our findings support a distributional view in which bilinguals tacitly track cross-linguistic consistency in contextual usage, and this consistency forms the basis for semantic overlap between translation equivalents in the bilingual mind.
This study extends the tenet of distributional semantics to bilingualism, and underscores the dynamic, distributional nature of bilingual semantic memory.
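The abstract does not specify SAM's implementation, but the core idea, quantifying how consistently a translation pair is used across contexts in two languages, can be sketched in distributional terms as the similarity between the two words' context vectors in a shared bilingual space. The vectors, word pairs, and similarity measure below are illustrative assumptions, not the authors' model:

```python
# Illustrative sketch (NOT the authors' SAM implementation): score the
# cross-linguistic contextual consistency of a translation pair as the cosine
# similarity between the two words' distributional context vectors, assuming
# both vectors already live in a shared, aligned semantic space.
import math


def alignment(u, v):
    """Cosine similarity between two equal-length context vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0


# Hypothetical toy vectors: a concrete translation pair ("dog"/"perro")
# tends to appear in highly similar contexts across languages, while an
# abstract pair ("justice"/"justicia") overlaps less.
pairs = {
    ("dog", "perro"): ([0.9, 0.8, 0.1], [0.85, 0.82, 0.15]),
    ("justice", "justicia"): ([0.7, 0.2, 0.6], [0.3, 0.7, 0.5]),
}

for (w1, w2), (v1, v2) in pairs.items():
    print(f"{w1}/{w2}: alignment = {alignment(v1, v2):.3f}")
```

On this toy input, the concrete pair receives the higher alignment score, which is the pattern the abstract links to faster and more accurate translation recognition.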