Prompt Your Brain: Scaffold Prompt Tuning for Efficient Adaptation of fMRI Pre-trained Model

Zijian Dong, Yilei Wu, Zijiao Chen, Yichi Zhang, Yueming Jin, Juan Helen Zhou

arXiv:2408.10567 · arXiv - QuanBio - Neurons and Cognition · 2024-08-20
Abstract
We introduce Scaffold Prompt Tuning (ScaPT), a novel prompt-based framework for adapting large-scale functional magnetic resonance imaging (fMRI) pre-trained models to downstream tasks, with high parameter efficiency and improved performance compared to fine-tuning and prompt-tuning baselines. Full fine-tuning updates all pre-trained parameters, which may distort the learned feature space and lead to overfitting when training data are limited, as is common in the fMRI field. In contrast, we design a hierarchical prompt structure that transfers knowledge learned from high-resource tasks to low-resource ones. This structure, equipped with a Deeply-conditioned Input-Prompt (DIP) mapping module, allows for efficient adaptation by updating only 2% of the trainable parameters. The framework enhances semantic interpretability through attention mechanisms between inputs and prompts, and it clusters prompts in the latent space in alignment with prior knowledge. Experiments on public resting-state fMRI datasets show that ScaPT outperforms fine-tuning and multitask-based prompt tuning in neurodegenerative disease diagnosis/prognosis and personality trait prediction, even with fewer than 20 participants. These results highlight ScaPT's efficiency in adapting pre-trained fMRI models to low-resource tasks.
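
To make the general mechanism concrete, the sketch below illustrates input-conditioned prompt tuning with a frozen backbone: a small trainable module maps each input to a set of prompt tokens that are prepended before the frozen encoder. This is a minimal illustration of the prompt-tuning idea the abstract describes, not the authors' implementation; the class names (DIPMapper, PromptedEncoder), dimensions, and the use of a generic transformer encoder are all assumptions, since the abstract does not specify the architecture.

```python
# Minimal sketch: input-conditioned prompt tuning with a frozen encoder.
# Names and dimensions are hypothetical; this is not the ScaPT implementation.
import torch
import torch.nn as nn


class DIPMapper(nn.Module):
    """Maps the input representation to per-sample prompts (hypothetical)."""

    def __init__(self, embed_dim: int, n_prompts: int):
        super().__init__()
        self.n_prompts = n_prompts
        self.proj = nn.Linear(embed_dim, n_prompts * embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> pooled summary -> (batch, n_prompts, dim)
        pooled = x.mean(dim=1)
        return self.proj(pooled).view(-1, self.n_prompts, x.size(-1))


class PromptedEncoder(nn.Module):
    """Frozen pre-trained encoder with trainable input-conditioned prompts."""

    def __init__(self, encoder: nn.Module, embed_dim: int, n_prompts: int = 8):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # backbone stays frozen
        self.dip = DIPMapper(embed_dim, n_prompts)  # the only trainable part

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        prompts = self.dip(x)                # prompts conditioned on the input
        return self.encoder(torch.cat([prompts, x], dim=1))


# Usage with a stand-in transformer over embedded fMRI time-series tokens.
dim = 64
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
    num_layers=2,
)
model = PromptedEncoder(backbone, embed_dim=dim)
out = model(torch.randn(2, 100, dim))        # batch of 2, 100 time points
print(out.shape)                             # torch.Size([2, 108, 64])
```

In this sketch only the DIP-style mapper receives gradients, which is what keeps the trainable-parameter footprint small relative to the frozen backbone; the 2% figure reported in the abstract refers to the authors' full design, not to this toy configuration.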