Text Style Transfer (TST) is a natural language processing (NLP) task that modifies the style of a text while preserving its semantic content. Most current approaches focus on sentiment transfer, rewriting negative customer reviews as positive ones and vice versa. While this task is useful for evaluating the style-modification and semantic-consistency capabilities of TST systems, its practical applicability is questionable. TST also has the potential to mitigate bias in real-time comments, yet research on bias mitigation using TST remains scarce. Established frameworks struggle to maintain consistency and coherence when faced with limited and noisy data, a problem that is especially acute for bias mitigation, and their performance is further impeded by the scarcity of the large datasets that data-driven AI frameworks require. This research introduces a framework that leverages contrastive learning to map dispersed input points into a shared embedding space. A triplet loss, augmented by an enhanced dual contrastive loss, is employed to distinguish between styles. We propose jointly training a masked language model with two dual contrastive style detectors and a sequence editor, preserving content and modifying style simultaneously. Our model demonstrates a significant improvement over existing baseline systems, which we attribute to the combination of dual contrastive learning with masked language modeling. Experimental results on two benchmark datasets show that the proposed system outperforms state-of-the-art models across a series of experiments.
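To make the style-separation objective concrete, the triplet loss mentioned above can be sketched as follows. This is a minimal illustration of the standard triplet formulation, max(0, d(a, p) − d(a, n) + margin); the function names, toy embeddings, and margin value are illustrative assumptions, not the paper's actual implementation, which further augments this loss with a dual contrastive term.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: max(0, d(a, p) - d(a, n) + margin).

    Pulls same-style pairs (anchor, positive) together and pushes
    different-style pairs (anchor, negative) at least `margin` apart.
    The margin value here is an illustrative choice, not the paper's.
    """
    return max(0.0,
               euclidean(anchor, positive)
               - euclidean(anchor, negative)
               + margin)

# Toy 2-D embeddings: anchor and positive share a style, negative does not.
anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # close to anchor -> small d(a, p)
negative = [3.0, 4.0]   # far from anchor -> large d(a, n)

loss = triplet_loss(anchor, positive, negative)
# The triplet is already well separated, so the hinge is inactive (loss = 0).
```

A violating triplet (negative closer than positive plus the margin) yields a positive loss, which is the gradient signal that reshapes the embedding space so that same-style sentences cluster together.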
