Pramod Shinde, Lisa Willemsen, Michael Anderson, Minori Aoki, Saonli Basu, Julie G Burel, Peng Cheng, Souradipto Ghosh Dastidar, Aidan Dunleavy, Tal Einav, Jamie Forschmiedt, Slim Fourati, Javier Garcia, William Gibson, Jason A Greenbaum, Leying Guan, Weikang Guan, Jeremy P Gygi, Brendan Ha, Joe Hou, Jason Hsiao, Yunda Huang, Rick Jansen, Bhargob Kakoty, Zhiyu Kang, James J Kobie, Mari Kojima, Anna Konstorum, Jiyeun Lee, Sloan A Lewis, Aixin Li, Eric F Lock, Jarjapu Mahita, Marcus Mendes, Hailong Meng, Aidan Neher, Somayeh Nili, Shelby Orfield, James Overton, Nidhi Pai, Cokie Parker, Brian Qian, Mikkel Rasmussen, Joaquin Reyna, Eve Richardson, Sandra Safo, Josey Sorenson, Aparna Srinivasan, Nicky Thrupp, Rashmi Tippalagama, Raphael Trevizani, Steffen Ventz, Jiuzhou Wang, Cheng-Chang Wu, Ferhat Ay, Barry Grant, Steven H Kleinstein, Bjoern Peters
{"title":"检验免疫力计算模型--预测百日咳疫苗接种结果的特邀挑战赛","authors":"Pramod Shinde, Lisa Willemsen, Michael Anderson, Minori Aoki, Saonli Basu, Julie G Burel, Peng Cheng, Souradipto Ghosh Dastidar, Aidan Dunleavy, Tal Einav, Jamie Forschmiedt, Slim Fourati, Javier Garcia, William Gibson, Jason A Greenbaum, Leying Guan, Weikang Guan, Jeremy P Gygi, Brendan Ha, Joe Hou, Jason Hsiao, Yunda Huang, Rick Jansen, Bhargob Kakoty, Zhiyu Kang, James J Kobie, Mari Kojima, Anna Konstorum, Jiyeun Lee, Sloan A Lewis, Aixin Li, Eric F Lock, Jarjapu Mahita, Marcus Mendes, Hailong Meng, Aidan Neher, Somayeh Nili, Shelby Orfield, James Overton, Nidhi Pai, Cokie Parker, Brian Qian, Mikkel Rasmussen, Joaquin Reyna, Eve Richardson, Sandra Safo, Josey Sorenson, Aparna Srinivasan, Nicky Thrupp, Rashmi Tippalagama, Raphael Trevizani, Steffen Ventz, Jiuzhou Wang, Cheng-Chang Wu, Ferhat Ay, Barry Grant, Steven H Kleinstein, Bjoern Peters","doi":"10.1101/2024.09.04.611290","DOIUrl":null,"url":null,"abstract":"Systems vaccinology studies have been used to build computational models that predict individual vaccine responses and identify the factors contributing to differences in outcome. Comparing such models is challenging due to variability in study designs. To address this, we established a community resource to compare models predicting B. pertussis booster responses and generate experimental data for the explicit purpose of model evaluation. We here describe our second computational prediction challenge using this resource, where we benchmarked 49 algorithms from 53 scientists. We found that the most successful models stood out in their handling of nonlinearities, reducing large feature sets to representative subsets, and advanced data preprocessing. In contrast, we found that models adopted from literature that were developed to predict vaccine antibody responses in other settings performed poorly, reinforcing the need for purpose-built models. 
Overall, this demonstrates the value of purpose-generated datasets for rigorous and open model evaluations to identify features that improve the reliability and applicability of computational models in vaccine response prediction.","PeriodicalId":501182,"journal":{"name":"bioRxiv - Immunology","volume":"26 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Putting computational models of immunity to the test - an invited challenge to predict B. pertussis vaccination outcomes\",\"authors\":\"Pramod Shinde, Lisa Willemsen, Michael Anderson, Minori Aoki, Saonli Basu, Julie G Burel, Peng Cheng, Souradipto Ghosh Dastidar, Aidan Dunleavy, Tal Einav, Jamie Forschmiedt, Slim Fourati, Javier Garcia, William Gibson, Jason A Greenbaum, Leying Guan, Weikang Guan, Jeremy P Gygi, Brendan Ha, Joe Hou, Jason Hsiao, Yunda Huang, Rick Jansen, Bhargob Kakoty, Zhiyu Kang, James J Kobie, Mari Kojima, Anna Konstorum, Jiyeun Lee, Sloan A Lewis, Aixin Li, Eric F Lock, Jarjapu Mahita, Marcus Mendes, Hailong Meng, Aidan Neher, Somayeh Nili, Shelby Orfield, James Overton, Nidhi Pai, Cokie Parker, Brian Qian, Mikkel Rasmussen, Joaquin Reyna, Eve Richardson, Sandra Safo, Josey Sorenson, Aparna Srinivasan, Nicky Thrupp, Rashmi Tippalagama, Raphael Trevizani, Steffen Ventz, Jiuzhou Wang, Cheng-Chang Wu, Ferhat Ay, Barry Grant, Steven H Kleinstein, Bjoern Peters\",\"doi\":\"10.1101/2024.09.04.611290\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Systems vaccinology studies have been used to build computational models that predict individual vaccine responses and identify the factors contributing to differences in outcome. Comparing such models is challenging due to variability in study designs. To address this, we established a community resource to compare models predicting B. 
pertussis booster responses and generate experimental data for the explicit purpose of model evaluation. We here describe our second computational prediction challenge using this resource, where we benchmarked 49 algorithms from 53 scientists. We found that the most successful models stood out in their handling of nonlinearities, reducing large feature sets to representative subsets, and advanced data preprocessing. In contrast, we found that models adopted from literature that were developed to predict vaccine antibody responses in other settings performed poorly, reinforcing the need for purpose-built models. Overall, this demonstrates the value of purpose-generated datasets for rigorous and open model evaluations to identify features that improve the reliability and applicability of computational models in vaccine response prediction.\",\"PeriodicalId\":501182,\"journal\":{\"name\":\"bioRxiv - Immunology\",\"volume\":\"26 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"bioRxiv - Immunology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.09.04.611290\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"bioRxiv - Immunology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.09.04.611290","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Putting computational models of immunity to the test - an invited challenge to predict B. pertussis vaccination outcomes
Systems vaccinology studies have been used to build computational models that predict individual vaccine responses and identify the factors contributing to differences in outcome. Comparing such models is challenging due to variability in study designs. To address this, we established a community resource to compare models predicting B. pertussis booster responses and to generate experimental data for the explicit purpose of model evaluation. Here we describe our second computational prediction challenge using this resource, in which we benchmarked 49 algorithms from 53 scientists. We found that the most successful models stood out in their handling of nonlinearities, their reduction of large feature sets to representative subsets, and their advanced data preprocessing. In contrast, models adopted from the literature, originally developed to predict vaccine antibody responses in other settings, performed poorly, reinforcing the need for purpose-built models. Overall, this demonstrates the value of purpose-generated datasets for rigorous and open model evaluations to identify features that improve the reliability and applicability of computational models in vaccine response prediction.
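One ingredient the abstract highlights in the successful models is reducing large feature sets to representative subsets before fitting. As a purely illustrative sketch (not the challenge's actual pipeline, and using hypothetical synthetic data), a minimal univariate filter ranks features by absolute correlation with the outcome and keeps the top k:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def top_k_features(X, y, k):
    """Rank columns of X by |correlation| with y; return the indices of the top k."""
    n_features = len(X[0])
    scores = [abs(pearson([row[j] for row in X], y)) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]

# Synthetic stand-in for an omics matrix: 40 subjects x 10 features,
# where feature 3 (hypothetically) drives the vaccine response readout.
random.seed(0)
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(40)]
y = [row[3] * 2 + random.gauss(0, 0.1) for row in X]

selected = top_k_features(X, y, 3)  # the informative feature should rank in the top 3
```

In practice, challenge entries would pair a reduction step like this (or a multivariate alternative such as PCA) with a downstream model capable of capturing nonlinearities; this filter is only the simplest instance of the idea.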