Controllable Synthetic Clinical Note Generation with Privacy Guarantees
Tal Baumel, Andre Manoel, Daniel Jones, Shize Su, Huseyin Inan, Aaron (Ari) Bornstein, Robert Sim
arXiv - CS - Computation and Language, 2024-09-12. arXiv:2409.07809
Citations: 0
Abstract
In the field of machine learning, domain-specific annotated data is an
invaluable resource for training effective models. However, in the medical
domain, this data often includes Personal Health Information (PHI), raising
significant privacy concerns. The stringent regulations surrounding PHI limit
the availability and sharing of medical datasets, which poses a substantial
challenge for researchers and practitioners aiming to develop advanced machine
learning models. In this paper, we introduce a novel method to "clone" datasets
containing PHI. Our approach ensures that the cloned datasets retain the
essential characteristics and utility of the original data without compromising
patient privacy. By leveraging differential-privacy techniques and a novel
fine-tuning task, our method produces datasets that are free from identifiable
information while preserving the statistical properties necessary for model
training. We conduct utility testing to evaluate the performance of machine
learning models trained on the cloned datasets. The results demonstrate that
our cloned datasets not only uphold privacy standards but also enhance model
performance compared to those trained on traditional anonymized datasets. This
work offers a viable solution for the ethical and effective utilization of
sensitive medical data in machine learning, facilitating progress in medical
research and the development of robust predictive models.
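The abstract does not spell out the fine-tuning procedure, but differentially private fine-tuning of generative models typically relies on DP-SGD: each example's gradient is clipped to a fixed norm, the clipped gradients are summed, Gaussian noise proportional to the clipping norm is added, and the noisy sum is averaged. The sketch below illustrates a single such step in plain Python; the function name, parameter values, and toy gradients are illustrative assumptions, not details from the paper.

```python
import math
import random

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One illustrative DP-SGD step (not the paper's exact recipe):
    clip each per-example gradient to clip_norm, sum, add Gaussian
    noise with std noise_multiplier * clip_norm, then average and
    scale by the learning rate to form a parameter update."""
    clipped = []
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        # Scale down any gradient whose L2 norm exceeds clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append([x * scale for x in g])

    n = len(per_example_grads)
    dim = len(per_example_grads[0])
    update = []
    for d in range(dim):
        total = sum(g[d] for g in clipped)
        # Noise calibrated to the clipping norm bounds any single
        # example's influence on the released update.
        total += random.gauss(0.0, noise_multiplier * clip_norm)
        update.append(-lr * total / n)
    return update

random.seed(0)
# Two toy per-example gradients; the first has norm 5 and gets clipped.
grads = [[3.0, 4.0], [0.3, 0.4]]
update = dp_sgd_step(grads)
```

In practice this step is applied by a DP optimizer inside the fine-tuning loop, and the cumulative privacy loss across steps is tracked by a privacy accountant to report the final (epsilon, delta) guarantee.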