UnstrPrompt: Large Language Model Prompt for Driving in Unstructured Scenarios

Authors: Yuchen Li; Luxi Li; Zizhang Wu; Zhenshan Bing; Zhe Xuanyuan; Alois Christian Knoll; Long Chen
{"title":"UnstrPrompt:用于非结构化场景驾驶的大型语言模型提示","authors":"Yuchen Li;Luxi Li;Zizhang Wu;Zhenshan Bing;Zhe Xuanyuan;Alois Christian Knoll;Long Chen","doi":"10.1109/JRFID.2024.3367975","DOIUrl":null,"url":null,"abstract":"The integration of language descriptions or prompts with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving. This study has showcased notable advancements across various standard datasets. Nevertheless, the progress in integrating language prompts faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a groundbreaking language prompt set called “UnstrPrompt.” This prompt set is derived from three prominent unstructured autonomous driving datasets: IDD, ORFD, and AutoMine, collectively comprising a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we have developed a structured approach for prompt generation, encompassing three key components: scene, road, and instance. Additionally, we provide a detailed overview of the language generation process and the validation procedures. We conduct tests on segmentation tasks, and our experiments have demonstrated that text-image fusion can improve accuracy by more than 3% on unstructured data. Additionally, our description architecture outperforms the generic urban architecture by more than 0.1%. This work holds the potential to advance various aspects such as interaction and foundational models in this scenario.","PeriodicalId":73291,"journal":{"name":"IEEE journal of radio frequency identification","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2024-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"UnstrPrompt: Large Language Model Prompt for Driving in Unstructured Scenarios\",\"authors\":\"Yuchen Li;Luxi Li;Zizhang Wu;Zhenshan Bing;Zhe Xuanyuan;Alois Christian Knoll;Long Chen\",\"doi\":\"10.1109/JRFID.2024.3367975\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The integration of language descriptions or prompts with Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving. This study has showcased notable advancements across various standard datasets. Nevertheless, the progress in integrating language prompts faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a groundbreaking language prompt set called “UnstrPrompt.” This prompt set is derived from three prominent unstructured autonomous driving datasets: IDD, ORFD, and AutoMine, collectively comprising a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we have developed a structured approach for prompt generation, encompassing three key components: scene, road, and instance. Additionally, we provide a detailed overview of the language generation process and the validation procedures. We conduct tests on segmentation tasks, and our experiments have demonstrated that text-image fusion can improve accuracy by more than 3% on unstructured data. Additionally, our description architecture outperforms the generic urban architecture by more than 0.1%. 
This work holds the potential to advance various aspects such as interaction and foundational models in this scenario.\",\"PeriodicalId\":73291,\"journal\":{\"name\":\"IEEE journal of radio frequency identification\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-02-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE journal of radio frequency identification\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10440443/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE journal of radio frequency identification","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10440443/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Abstract: Integrating language descriptions, or prompts, from Large Language Models (LLMs) into visual tasks is currently a focal point in the advancement of autonomous driving, and has shown notable gains across various standard datasets. Progress in integrating language prompts nevertheless faces challenges in unstructured scenarios, primarily due to the limited availability of paired data. To address this challenge, we introduce a new language prompt set called "UnstrPrompt." The prompt set is derived from three prominent unstructured autonomous-driving datasets, IDD, ORFD, and AutoMine, and comprises a total of 6K language descriptions. In response to the distinctive features of unstructured scenarios, we developed a structured approach to prompt generation encompassing three key components: scene, road, and instance. We also provide a detailed overview of the language generation process and the validation procedures. In tests on segmentation tasks, our experiments demonstrate that text-image fusion can improve accuracy by more than 3% on unstructured data, and that our description architecture outperforms the generic urban architecture by more than 0.1%. This work has the potential to advance aspects such as interaction and foundation models in this scenario.
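As a minimal illustration of the scene/road/instance decomposition the abstract describes, the sketch below composes a single language description from those three components. All field names, template wording, and example values here are assumptions made for illustration; the paper's actual prompt schema and generation pipeline are not specified in this record.

```python
from dataclasses import dataclass

@dataclass
class UnstructuredScenePrompt:
    """Hypothetical three-part prompt, mirroring the scene/road/instance
    structure named in the abstract. Names and wording are assumptions."""
    scene: str        # global context, e.g., "an open-pit mining area"
    road: str         # road condition, e.g., "an unpaved gravel road"
    instances: list   # salient objects, e.g., ["a haul truck ahead"]

    def to_text(self) -> str:
        """Compose the components into one language description."""
        instance_part = "; ".join(self.instances) if self.instances else "no salient objects"
        return f"Scene: {self.scene}. Road: {self.road}. Instances: {instance_part}."

# Example usage: build one description for an unstructured mining scene.
prompt = UnstructuredScenePrompt(
    scene="an open-pit mining area under an overcast sky",
    road="an unpaved dirt road with uneven edges",
    instances=["a mining truck ahead", "rock debris on the left"],
)
print(prompt.to_text())
```

Under this reading, each of the 6K descriptions would pair one such composed text with an image from IDD, ORFD, or AutoMine, giving the paired text-image data that the fusion experiments require.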