{"title":"HULLMI: Human vs LLM identification with explainability","authors":"Prathamesh Dinesh Joshi, Sahil Pocker, Raj Abhijit Dandekar, Rajat Dandekar, Sreedath Panat","doi":"arxiv-2409.04808","DOIUrl":null,"url":null,"abstract":"As LLMs become increasingly proficient at producing human-like responses,\nthere has been a rise of academic and industrial pursuits dedicated to flagging\na given piece of text as \"human\" or \"AI\". Most of these pursuits involve modern\nNLP detectors like T5-Sentinel and RoBERTa-Sentinel, without paying too much\nattention to issues of interpretability and explainability of these models. In\nour study, we provide a comprehensive analysis that shows that traditional ML\nmodels (Naive-Bayes,MLP, Random Forests, XGBoost) perform as well as modern NLP\ndetectors, in human vs AI text detection. We achieve this by implementing a\nrobust testing procedure on diverse datasets, including curated corpora and\nreal-world samples. Subsequently, by employing the explainable AI technique\nLIME, we uncover parts of the input that contribute most to the prediction of\neach model, providing insights into the detection process. Our study\ncontributes to the growing need for developing production-level LLM detection\ntools, which can leverage a wide range of traditional as well as modern NLP\ndetectors we propose. Finally, the LIME techniques we demonstrate also have the\npotential to equip these detection tools with interpretability analysis\nfeatures, making them more reliable and trustworthy in various domains like\neducation, healthcare, and media.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.04808","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As LLMs become increasingly proficient at producing human-like responses, there has been a surge of academic and industrial efforts dedicated to flagging a given piece of text as "human" or "AI". Most of these efforts rely on modern NLP detectors such as T5-Sentinel and RoBERTa-Sentinel, with little attention paid to the interpretability and explainability of these models. In our study, we provide a comprehensive analysis showing that traditional ML models (Naive Bayes, MLP, Random Forests, XGBoost) perform as well as modern NLP detectors in human vs AI text detection. We achieve this by implementing a robust testing procedure on diverse datasets, including curated corpora and real-world samples. Subsequently, by employing the explainable AI technique LIME, we uncover the parts of the input that contribute most to each model's prediction, providing insight into the detection process. Our study addresses the growing need for production-level LLM detection tools, which can leverage the wide range of traditional as well as modern NLP detectors we propose. Finally, the LIME techniques we demonstrate also have the potential to equip these detection tools with interpretability analysis features, making them more reliable and trustworthy in domains like education, healthcare, and media.
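
To make the described pipeline concrete, the following is a minimal sketch (not the authors' exact code) of a traditional ML detector combined with a LIME explanation: TF-IDF features feeding a Random Forest classifier, with LimeTextExplainer highlighting which words drive the "human" vs "AI" prediction. The toy corpus, labels, and hyperparameters are placeholders chosen for illustration only.

```python
# Sketch: traditional ML text detector + LIME explanation.
# Assumes scikit-learn and lime are installed; data and settings are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Placeholder corpus: label 0 = human-written, 1 = AI-generated.
texts = [
    "I scribbled this essay the night before it was due, sorry for the typos.",
    "As an AI language model, I can provide a structured summary of the topic.",
]
labels = [0, 1]

# Traditional ML detector: TF-IDF features into a Random Forest.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), max_features=5000),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
detector.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model, exposing
# which tokens push the prediction toward "human" or "AI".
explainer = LimeTextExplainer(class_names=["human", "ai"])
explanation = explainer.explain_instance(
    "As an AI language model, I can outline the key arguments for you.",
    detector.predict_proba,
    num_features=10,
)
print(explanation.as_list())  # [(token, weight), ...] for the top features
```

The same pattern applies to the other traditional models mentioned in the abstract (Naive Bayes, MLP, XGBoost): any classifier exposing a predict_proba-style function over raw text can be plugged into the LIME explainer.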