{"title":"用于低资源视觉语言生成的跨模态提示驱动网络","authors":"Yuena Jiang, Yanxun Chang","doi":"10.1016/j.engappai.2024.109591","DOIUrl":null,"url":null,"abstract":"<div><div>Image captioning is a classic vision-to-language generation task, which aims to generate a descriptive sentence to describe the input image, involving the understanding of the image and the generation of natural language. Conventional methods require a large-scale labeled dataset for training, which includes a large volume of image-caption pairs. However, for several application scenarios, <em>e.g.,</em> medicine and non-English, such plenty of image-caption pairs are usually not available. In this work, we propose the Cross-modal Prompt-Driven Network (XProDNet) to perform low-resource image captioning, which can generate accurate and comprehensive image captioning, with extremely limited data for training. We conduct experiments on (1) six benchmark datasets; (2) three application scenarios, <em>i.e.</em>, conventional image captioning, medical image captioning, and non-English image captioning; (3) four target languages, <em>i.e.</em>, English, Chinese, German, and French; (4) two experimental settings, <em>i.e.</em>, fully-supervised learning and few-shot learning. The extensive experiments prove the effectiveness of our approach, which can not only generate high-quality and comprehensive image captions but also significantly surpass previous state-of-the-art methods under both the few-shot learning and fully-supervised learning settings. The improved results suggest that our method has great potential for improving image captioning in real-world applications.</div></div>","PeriodicalId":50523,"journal":{"name":"Engineering Applications of Artificial Intelligence","volume":"139 ","pages":"Article 109591"},"PeriodicalIF":7.5000,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cross-modal Prompt-Driven Network for low-resource vision-to-language generation\",\"authors\":\"Yuena Jiang, Yanxun Chang\",\"doi\":\"10.1016/j.engappai.2024.109591\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Image captioning is a classic vision-to-language generation task, which aims to generate a descriptive sentence to describe the input image, involving the understanding of the image and the generation of natural language. Conventional methods require a large-scale labeled dataset for training, which includes a large volume of image-caption pairs. However, for several application scenarios, <em>e.g.,</em> medicine and non-English, such plenty of image-caption pairs are usually not available. In this work, we propose the Cross-modal Prompt-Driven Network (XProDNet) to perform low-resource image captioning, which can generate accurate and comprehensive image captioning, with extremely limited data for training. We conduct experiments on (1) six benchmark datasets; (2) three application scenarios, <em>i.e.</em>, conventional image captioning, medical image captioning, and non-English image captioning; (3) four target languages, <em>i.e.</em>, English, Chinese, German, and French; (4) two experimental settings, <em>i.e.</em>, fully-supervised learning and few-shot learning. 
The extensive experiments prove the effectiveness of our approach, which can not only generate high-quality and comprehensive image captions but also significantly surpass previous state-of-the-art methods under both the few-shot learning and fully-supervised learning settings. The improved results suggest that our method has great potential for improving image captioning in real-world applications.</div></div>\",\"PeriodicalId\":50523,\"journal\":{\"name\":\"Engineering Applications of Artificial Intelligence\",\"volume\":\"139 \",\"pages\":\"Article 109591\"},\"PeriodicalIF\":7.5000,\"publicationDate\":\"2024-11-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Engineering Applications of Artificial Intelligence\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0952197624017494\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Engineering Applications of Artificial Intelligence","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0952197624017494","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Cross-modal Prompt-Driven Network for low-resource vision-to-language generation
Image captioning is a classic vision-to-language generation task that aims to generate a descriptive sentence for an input image, involving both image understanding and natural-language generation. Conventional methods require a large-scale labeled training dataset containing a large volume of image-caption pairs. However, in several application scenarios, e.g., medical and non-English settings, such abundant image-caption pairs are usually not available. In this work, we propose the Cross-modal Prompt-Driven Network (XProDNet) for low-resource image captioning, which can generate accurate and comprehensive captions with extremely limited training data. We conduct experiments on (1) six benchmark datasets; (2) three application scenarios, i.e., conventional image captioning, medical image captioning, and non-English image captioning; (3) four target languages, i.e., English, Chinese, German, and French; and (4) two experimental settings, i.e., fully-supervised learning and few-shot learning. Extensive experiments demonstrate the effectiveness of our approach, which not only generates high-quality and comprehensive image captions but also significantly surpasses previous state-of-the-art methods under both the few-shot and fully-supervised learning settings. These results suggest that our method has great potential for improving image captioning in real-world applications.
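To give a concrete picture of the general idea, the sketch below shows a minimal prompt-driven captioning model in PyTorch: a small set of learnable prompt vectors is prepended to the caption tokens and decoded against projected image features, so that only a few parameters carry the task-specific signal, which is what makes training with limited data plausible. This is an illustrative assumption about how a cross-modal prompt mechanism can be wired up, not the paper's actual XProDNet architecture; all class names, dimensions, and the toy feature backbone are hypothetical.

```python
# Minimal sketch of a prompt-driven captioner (illustrative assumption only;
# this generic architecture is NOT the paper's actual XProDNet design).
import torch
import torch.nn as nn

class PromptDrivenCaptioner(nn.Module):
    def __init__(self, vocab_size=10000, feat_dim=2048, d_model=256, n_prompts=8):
        super().__init__()
        # Stand-in for a pretrained, frozen visual backbone: we assume
        # precomputed image features and just project them to d_model.
        self.visual_proj = nn.Linear(feat_dim, d_model)
        # Learnable cross-modal prompt vectors: the small parameter set that
        # carries the task, which keeps the labeled-data requirement low.
        self.prompts = nn.Parameter(torch.randn(n_prompts, d_model) * 0.02)
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, token_ids):
        # image_feats: (B, N, feat_dim) patch/region features; token_ids: (B, T)
        B = image_feats.size(0)
        memory = self.visual_proj(image_feats)                  # (B, N, d)
        prompts = self.prompts.unsqueeze(0).expand(B, -1, -1)   # (B, P, d)
        x = torch.cat([prompts, self.embed(token_ids)], dim=1)  # prepend prompts
        # Causal mask so each position only attends to earlier positions.
        L = x.size(1)
        causal = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        h = self.decoder(x, memory, tgt_mask=causal)
        return self.lm_head(h[:, prompts.size(1):])             # caption-token logits

# Shape check with random tensors (a real few-shot setup would pretrain and
# freeze the decoder, then tune mainly the prompts on the few labeled pairs).
model = PromptDrivenCaptioner()
logits = model(torch.randn(2, 49, 2048), torch.randint(0, 10000, (2, 12)))
print(logits.shape)  # torch.Size([2, 12, 10000])
```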
About the journal:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, with remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.