Malsha Ashani Mahawatta Dona, Beatriz Cabrero-Daniel, Yinan Yu, Christian Berger
{"title":"Evaluating and Enhancing Trustworthiness of LLMs in Perception Tasks","authors":"Malsha Ashani Mahawatta Dona, Beatriz Cabrero-Daniel, Yinan Yu, Christian Berger","doi":"arxiv-2408.01433","DOIUrl":null,"url":null,"abstract":"Today's advanced driver assistance systems (ADAS), like adaptive cruise\ncontrol or rear collision warning, are finding broader adoption across vehicle\nclasses. Integrating such advanced, multimodal Large Language Models (LLMs) on\nboard a vehicle, which are capable of processing text, images, audio, and other\ndata types, may have the potential to greatly enhance passenger comfort. Yet,\nan LLM's hallucinations are still a major challenge to be addressed. In this\npaper, we systematically assessed potential hallucination detection strategies\nfor such LLMs in the context of object detection in vision-based data on the\nexample of pedestrian detection and localization. We evaluate three\nhallucination detection strategies applied to two state-of-the-art LLMs, the\nproprietary GPT-4V and the open LLaVA, on two datasets (Waymo/US and PREPER\nCITY/Sweden). Our results show that these LLMs can describe a traffic situation\nto an impressive level of detail but are still challenged for further analysis\nactivities such as object localization. We evaluate and extend hallucination\ndetection approaches when applying these LLMs to video sequences in the example\nof pedestrian detection. Our experiments show that, at the moment, the\nstate-of-the-art proprietary LLM performs much better than the open LLM.\nFurthermore, consistency enhancement techniques based on voting, such as the\nBest-of-Three (BO3) method, do not effectively reduce hallucinations in LLMs\nthat tend to exhibit high false negatives in detecting pedestrians. However,\nextending the hallucination detection by including information from the past\nhelps to improve results.","PeriodicalId":501168,"journal":{"name":"arXiv - CS - Emerging Technologies","volume":"47 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Emerging Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01433","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Today's advanced driver assistance systems (ADAS), such as adaptive cruise control or rear collision warning, are finding broader adoption across vehicle classes. Integrating advanced, multimodal Large Language Models (LLMs), which can process text, images, audio, and other data types, on board a vehicle may greatly enhance passenger comfort. Yet, an LLM's hallucinations remain a major challenge to be addressed. In this paper, we systematically assess potential hallucination detection strategies for such LLMs in the context of object detection on vision-based data, using pedestrian detection and localization as an example. We evaluate three hallucination detection strategies applied to two state-of-the-art LLMs, the proprietary GPT-4V and the open LLaVA, on two datasets (Waymo/US and PREPER CITY/Sweden). Our results show that these LLMs can describe a traffic situation at an impressive level of detail but still struggle with further analysis tasks such as object localization. We evaluate and extend hallucination detection approaches when applying these LLMs to video sequences, again using pedestrian detection as the example. Our experiments show that, at the moment, the state-of-the-art proprietary LLM performs much better than the open LLM. Furthermore, voting-based consistency enhancement techniques, such as the Best-of-Three (BO3) method, do not effectively reduce hallucinations in LLMs that tend to exhibit a high rate of false negatives when detecting pedestrians. However, extending the hallucination detection to include information from past frames helps to improve the results.
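To make the voting idea mentioned in the abstract concrete, the sketch below illustrates a Best-of-Three (BO3) majority vote over three independent LLM answers for a single frame, plus a hypothetical temporal extension that pools votes from past frames. This is a minimal sketch based only on the abstract's description; the function names, the binary "pedestrian present" framing, and the window-based pooling are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

def best_of_three(answers):
    """Majority vote over three independent LLM answers for one frame.

    Each answer is True (pedestrian detected) or False; the most
    frequent value wins. With three boolean answers, a majority
    always exists (2-1 or 3-0).
    """
    assert len(answers) == 3
    return Counter(answers).most_common(1)[0][0]

def vote_with_history(answers, past_votes, window=2):
    """Hypothetical temporal extension: pool the current frame's answers
    with the decisions from the last `window` frames before taking the
    majority, illustrating the idea of including information from the past.
    """
    pooled = list(answers) + list(past_votes[-window:])
    return Counter(pooled).most_common(1)[0][0]

# Example: two of three prompts report a pedestrian in the current frame,
# and the two previous frames also concluded a pedestrian was present.
print(best_of_three([True, True, False]))                      # True
print(vote_with_history([False, False, True], [True, True]))   # majority of pooled votes
```

Note that plain BO3 voting cannot recover from systematic false negatives: if the model misses a pedestrian in two or three of the three answers, the vote simply confirms the miss, which is consistent with the abstract's observation that past-frame information is needed to improve results.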