Large Language Models are Interpretable Learners

Ruochen Wang, Si Si, Felix Yu, Dorothea Wiesmann, Cho-Jui Hsieh, Inderjit Dhillon

arXiv:2406.17224 (arXiv - CS - Symbolic Computation), 2024-06-25
The trade-off between expressiveness and interpretability remains a core challenge when building human-centric predictive models for classification and decision-making. While symbolic rules offer interpretability, they often lack expressiveness, whereas neural networks excel in performance but are known for being black boxes. In this paper, we show that a combination of Large Language Models (LLMs) and symbolic programs can bridge this gap. In the proposed LLM-based Symbolic Programs (LSPs), a pretrained LLM equipped with natural language prompts provides a massive set of interpretable modules that transform raw input into natural language concepts. Symbolic programs then integrate these modules into an interpretable decision rule.
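As an illustrative sketch of this architecture (not the paper's implementation: `query_llm` is a hypothetical stand-in for whatever LLM API is available, and the toy task and prompts are invented for exposition), a minimal LSP might look like this:

```python
# Minimal sketch of an LSP: LLM-prompted concept modules feeding a
# symbolic decision rule. All names and prompts here are illustrative.
from typing import Callable

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a pretrained LLM completion API."""
    raise NotImplementedError

def make_concept_module(question: str) -> Callable[[str], bool]:
    """Wrap a natural-language prompt as an interpretable yes/no module."""
    def module(x: str) -> bool:
        answer = query_llm(f"{question}\n\nInput: {x}\nAnswer yes or no:")
        return answer.strip().lower().startswith("yes")
    return module

# Each module maps raw input to a human-readable concept ...
is_nocturnal = make_concept_module("Does the description suggest a nocturnal animal?")
has_feathers = make_concept_module("Does the animal described have feathers?")

# ... and a symbolic program (here, nested if/else rules) combines the
# concepts into a decision rule a human can read and audit.
def classify(x: str) -> str:
    if has_feathers(x):
        return "owl" if is_nocturnal(x) else "sparrow"
    return "bat" if is_nocturnal(x) else "squirrel"
```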
To train LSPs, we develop a divide-and-conquer approach that incrementally builds the program from scratch, with the learning process at each step guided by LLMs.
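A rough sketch of how such an LLM-guided divide-and-conquer loop might look follows (reusing the hypothetical `query_llm` and `make_concept_module` helpers from the sketch above; the proposal prompt and stopping rule are our assumptions, not the authors' exact procedure):

```python
# Divide-and-conquer program growth guided by an LLM (illustrative).
# `examples` is a list of (input_text, label) pairs.

def grow_program(examples, depth=0, max_depth=3):
    """Recursively build a rule tree, asking the LLM for a concept
    that best separates the remaining examples at each step."""
    labels = {y for _, y in examples}
    if len(labels) == 1 or depth == max_depth:
        # Leaf: predict the majority label of the remaining examples.
        return max(labels, key=lambda y: sum(1 for _, v in examples if v == y))

    # Divide: ask the LLM to propose a discriminative yes/no concept.
    sample = "\n".join(f"{x} -> {y}" for x, y in examples[:20])
    question = query_llm(
        "Propose one yes/no question that splits these labeled "
        f"examples into two purer groups:\n{sample}"
    )
    module = make_concept_module(question)

    # Conquer: split the data on the new concept and recurse.
    yes = [(x, y) for x, y in examples if module(x)]
    no = [(x, y) for x, y in examples if not module(x)]
    if not yes or not no:  # unhelpful split; fall back to a leaf
        return max(labels, key=lambda y: sum(1 for _, v in examples if v == y))
    return {"question": question,
            "yes": grow_program(yes, depth + 1, max_depth),
            "no": grow_program(no, depth + 1, max_depth)}

def predict(program, x):
    """Evaluate the learned rule tree on a new input."""
    while isinstance(program, dict):
        branch = "yes" if make_concept_module(program["question"])(x) else "no"
        program = program[branch]
    return program
```

The resulting tree is itself the interpretable artifact: every internal node is a natural-language question and every leaf is a label, so the learned rule can be read, audited, or handed to another LLM directly.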
To evaluate the effectiveness of LSPs in extracting interpretable and accurate knowledge from data, we introduce IL-Bench, a collection of diverse tasks spanning both synthetic and real-world scenarios across different modalities. Empirical results demonstrate LSP's superior performance compared to traditional neurosymbolic programs and vanilla automatic prompt tuning methods. Moreover, because the knowledge learned by an LSP is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs, and it generalizes well to out-of-distribution samples.