The proliferation of online social platforms can greatly benefit people by fostering remote relationships, but it also inevitably amplifies the impact of multimodal fake news on societal trust and ethics. Existing AI systems for fake-news detection remain vulnerable to inconspicuous and indiscernible multimodal misinformation, and they often lack interpretability and accuracy in cross-platform settings. Hence, we propose a logical query-digging fake-news judgment system (LQ-FJS) that tackles this problem with a multimodal approach. LQ-FJS verifies the truthfulness of claims made within multimedia news by means of a structured video-summarization engine (SVSE), which serves as an interpretable detection intermediary agent: it generates condensed captions for raw video content, converting it into structured textual narratives, and explains the reasons behind identified fake news. LQ-FJS then exploits these condensed captions to retrieve reliable information related to the video content from a large language model (LLM). Finally, LQ-FJS cross-checks external knowledge sources against the internal LLM responses through a multimodal inconsistency verification procedure to determine whether they contradict factual information. Our experiments demonstrate that the condensed summaries produced by the SVSE facilitate the generation of explanatory reports that mitigate the large-scale trust deficits caused by opaque “black-box” models. LQ-FJS improves F1 scores by 4.5% and 7.2% over state-of-the-art models (FactLLaMA 2023 and HiSS 2023) and increases user trust by 14% through its interpretable conclusions.
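To make the described pipeline concrete, the following is a minimal sketch of how the three stages (SVSE summarization, LLM retrieval, and inconsistency verification) could be wired together. The function names (summarize_video, query_llm, verify_consistency), the data structures, and the toy contradiction check are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Verdict:
    is_fake: bool
    explanation: str


def summarize_video(video_path: str) -> str:
    """SVSE stage (stub): condense raw video content into a structured
    textual narrative. A real system would use a video-captioning model."""
    # Placeholder caption; the assumed output format is plain text.
    return f"Structured summary of {video_path}: a speaker claims event X occurred."


def query_llm(caption: str) -> List[str]:
    """Retrieval stage (stub): ask an LLM for reliable background facts
    related to the caption. Fixed strings stand in for model responses."""
    return ["Independent reports place event X on a different date."]


def verify_consistency(caption: str, llm_facts: List[str],
                       external_facts: List[str]) -> Verdict:
    """Inconsistency verification (stub): cross-check the caption against
    internal LLM responses and external knowledge sources. A trivial
    keyword test stands in for the paper's verification procedure."""
    evidence = llm_facts + external_facts
    contradictions = [f for f in evidence
                      if "different" in f or "no record" in f]
    if contradictions:
        return Verdict(True, "Contradicted by: " + "; ".join(contradictions))
    return Verdict(False, "No contradiction found against retrieved evidence.")


if __name__ == "__main__":
    caption = summarize_video("news_clip.mp4")
    llm_facts = query_llm(caption)
    external_facts = ["Fact-checking site: no record of event X at the claimed location."]
    verdict = verify_consistency(caption, llm_facts, external_facts)
    print(verdict.is_fake, verdict.explanation)
```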
