{"title":"SR-CIS:记忆与推理解耦的自反递增系统","authors":"Biqing Qi, Junqi Gao, Xinquan Chen, Dong Li, Weinan Zhang, Bowen Zhou","doi":"arxiv-2408.01970","DOIUrl":null,"url":null,"abstract":"The ability of humans to rapidly learn new knowledge while retaining old\nmemories poses a significant challenge for current deep learning models. To\nhandle this challenge, we draw inspiration from human memory and learning\nmechanisms and propose the Self-Reflective Complementary Incremental System\n(SR-CIS). Comprising the deconstructed Complementary Inference Module (CIM) and\nComplementary Memory Module (CMM), SR-CIS features a small model for fast\ninference and a large model for slow deliberation in CIM, enabled by the\nConfidence-Aware Online Anomaly Detection (CA-OAD) mechanism for efficient\ncollaboration. CMM consists of task-specific Short-Term Memory (STM) region and\na universal Long-Term Memory (LTM) region. By setting task-specific Low-Rank\nAdaptive (LoRA) and corresponding prototype weights and biases, it instantiates\nexternal storage for parameter and representation memory, thus deconstructing\nthe memory module from the inference module. By storing textual descriptions of\nimages during training and combining them with the Scenario Replay Module (SRM)\npost-training for memory combination, along with periodic short-to-long-term\nmemory restructuring, SR-CIS achieves stable incremental memory with limited\nstorage requirements. Balancing model plasticity and memory stability under\nconstraints of limited storage and low data resources, SR-CIS surpasses\nexisting competitive baselines on multiple standard and few-shot incremental\nlearning benchmarks.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"2 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SR-CIS: Self-Reflective Incremental System with Decoupled Memory and Reasoning\",\"authors\":\"Biqing Qi, Junqi Gao, Xinquan Chen, Dong Li, Weinan Zhang, Bowen Zhou\",\"doi\":\"arxiv-2408.01970\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ability of humans to rapidly learn new knowledge while retaining old\\nmemories poses a significant challenge for current deep learning models. To\\nhandle this challenge, we draw inspiration from human memory and learning\\nmechanisms and propose the Self-Reflective Complementary Incremental System\\n(SR-CIS). Comprising the deconstructed Complementary Inference Module (CIM) and\\nComplementary Memory Module (CMM), SR-CIS features a small model for fast\\ninference and a large model for slow deliberation in CIM, enabled by the\\nConfidence-Aware Online Anomaly Detection (CA-OAD) mechanism for efficient\\ncollaboration. CMM consists of task-specific Short-Term Memory (STM) region and\\na universal Long-Term Memory (LTM) region. By setting task-specific Low-Rank\\nAdaptive (LoRA) and corresponding prototype weights and biases, it instantiates\\nexternal storage for parameter and representation memory, thus deconstructing\\nthe memory module from the inference module. By storing textual descriptions of\\nimages during training and combining them with the Scenario Replay Module (SRM)\\npost-training for memory combination, along with periodic short-to-long-term\\nmemory restructuring, SR-CIS achieves stable incremental memory with limited\\nstorage requirements. 
Balancing model plasticity and memory stability under\\nconstraints of limited storage and low data resources, SR-CIS surpasses\\nexisting competitive baselines on multiple standard and few-shot incremental\\nlearning benchmarks.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"2 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.01970\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01970","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
SR-CIS: Self-Reflective Incremental System with Decoupled Memory and Reasoning
Humans can rapidly learn new knowledge while retaining old memories, an ability that remains a significant challenge for current deep learning models. To address this challenge, we draw inspiration from human memory and learning mechanisms and propose the Self-Reflective Complementary Incremental System (SR-CIS). SR-CIS comprises a decoupled Complementary Inference Module (CIM) and Complementary Memory Module (CMM). Within the CIM, a small model performs fast inference and a large model performs slow deliberation, with a Confidence-Aware Online Anomaly Detection (CA-OAD) mechanism enabling efficient collaboration between the two.
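The abstract does not spell out the exact CA-OAD decision rule, so the following is only an illustrative sketch of confidence-aware routing between a fast small model and a slow large model; the class name, the running-statistics update, and the threshold rule are assumptions rather than the paper's specification.

```python
import numpy as np

class ConfidenceAwareRouter:
    """Illustrative sketch of CA-OAD-style routing (assumed details): the small
    model's prediction confidence is tracked online, and samples whose
    confidence is anomalously low are deferred to the large model."""

    def __init__(self, small_model, large_model, k=2.0):
        # small_model / large_model: callables returning a numpy probability vector
        self.small, self.large, self.k = small_model, large_model, k
        self.mean, self.var, self.n = 0.0, 1.0, 0  # running confidence statistics

    def _update(self, c):
        # Welford-style online update of the mean/variance of observed confidences.
        self.n += 1
        delta = c - self.mean
        self.mean += delta / self.n
        self.var += (delta * (c - self.mean) - self.var) / self.n

    def predict(self, x):
        probs = self.small(x)                 # fast path: small model
        conf = float(np.max(probs))
        threshold = self.mean - self.k * np.sqrt(max(self.var, 1e-8))
        self._update(conf)
        if self.n > 10 and conf < threshold:  # anomalously low confidence
            return self.large(x)              # slow path: large model
        return probs
```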
The CMM consists of a task-specific Short-Term Memory (STM) region and a universal Long-Term Memory (LTM) region. By maintaining task-specific Low-Rank Adaptation (LoRA) weights together with corresponding prototype weights and biases, it instantiates external storage for parameter and representation memory, thereby decoupling the memory module from the inference module.
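The precise layout of the CMM is not given in the abstract; the hypothetical PyTorch-style fragment below (the `TaskMemory` class, the rank, and the prototype classifier are illustrative assumptions) shows the general idea of keeping a task-specific LoRA adapter plus prototype weights and biases as an external memory entry, separate from a frozen base model.

```python
import torch
import torch.nn as nn

class TaskMemory(nn.Module):
    """Hypothetical per-task memory entry: a LoRA adapter for a frozen linear
    layer (parameter memory) plus prototype weights and biases
    (representation memory), stored outside the inference model itself."""

    def __init__(self, d_in, d_out, n_classes, rank=4):
        super().__init__()
        # Low-rank update: W + B @ A, with the base W kept frozen elsewhere.
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))
        # Prototype classifier for this task's classes.
        self.proto_w = nn.Parameter(torch.zeros(n_classes, d_out))
        self.proto_b = nn.Parameter(torch.zeros(n_classes))

    def adapt(self, frozen_linear, x):
        # Frozen base projection plus the task-specific low-rank correction.
        h = frozen_linear(x) + x @ self.lora_A.T @ self.lora_B.T
        return h @ self.proto_w.T + self.proto_b   # task logits

# External storage: the memory lives apart from the inference module.
stm = {}                       # short-term memory, one entry per recent task
stm["task_3"] = TaskMemory(d_in=768, d_out=768, n_classes=10)
```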
By storing textual descriptions of images during training, replaying them with the Scenario Replay Module (SRM) after training to recombine memories, and periodically restructuring short-term memory into long-term memory, SR-CIS achieves stable incremental memory with limited storage requirements.
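The abstract only states that short-term memory is periodically restructured into long-term memory. One plausible reading, which is our assumption rather than the paper's stated procedure, is that the task-specific LoRA deltas accumulated in STM are consolidated into a single universal LTM adapter; a toy sketch of that consolidation step follows.

```python
import torch

def consolidate_stm_to_ltm(stm_adapters, ltm_adapter):
    """Toy sketch of short-to-long-term restructuring (assumed procedure):
    average the task-specific LoRA deltas held in short-term memory into one
    universal long-term adapter. Prototype weights would instead be kept
    per class rather than averaged."""
    names = list(stm_adapters)
    with torch.no_grad():
        for p_name, p_ltm in ltm_adapter.named_parameters():
            if "lora" not in p_name:
                continue  # only the low-rank parameter memory is merged here
            stacked = torch.stack(
                [dict(stm_adapters[n].named_parameters())[p_name] for n in names])
            p_ltm.copy_(stacked.mean(dim=0))
    return ltm_adapter
```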
Balancing model plasticity and memory stability under the constraints of limited storage and low data resources, SR-CIS surpasses existing competitive baselines on multiple standard and few-shot incremental learning benchmarks.