
Latest ArXiv Articles

The Brain Tumor Segmentation - Metastases (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI.
Pub Date : 2024-12-09
Ahmed W Moawad, Anastasia Janas, Ujjwal Baid, Divya Ramakrishnan, Rachit Saluja, Nader Ashraf, Nazanin Maleki, Leon Jekel, Nikolay Yordanov, Pascal Fehringer, Athanasios Gkampenis, Raisa Amiruddin, Amirreza Manteghinejad, Maruf Adewole, Jake Albrecht, Udunna Anazodo, Sanjay Aneja, Syed Muhammad Anwar, Timothy Bergquist, Veronica Chiang, Verena Chung, Gian Marco Conte, Farouk Dako, James Eddy, Ivan Ezhov, Nastaran Khalili, Keyvan Farahani, Juan Eugenio Iglesias, Zhifan Jiang, Elaine Johanson, Anahita Fathi Kazerooni, Florian Kofler, Kiril Krantchev, Dominic LaBella, Koen Van Leemput, Hongwei Bran Li, Marius George Linguraru, Xinyang Liu, Zeke Meier, Bjoern H Menze, Harrison Moy, Klara Osenberg, Marie Piraud, Zachary Reitman, Russell Takeshi Shinohara, Chunhao Wang, Benedikt Wiestler, Walter Wiggins, Umber Shafique, Klara Willms, Arman Avesta, Khaled Bousabarah, Satrajit Chakrabarty, Nicolo Gennaro, Wolfgang Holler, Manpreet Kaur, Pamela LaMontagne, MingDe Lin, Jan Lost, Daniel S Marcus, Ryan Maresca, Sarah Merkaj, Gabriel Cassinelli Pedersen, Marc von Reppert, Aristeidis Sotiras, Oleg Teytelboym, Niklas Tillmans, Malte Westerhoff, Ayda Youssef, Devon Godfrey, Scott Floyd, Andreas Rauschecker, Javier Villanueva-Meyer, Irada Pflüger, Jaeyoung Cho, Martin Bendszus, Gianluca Brugnara, Justin Cramer, Gloria J Guzman Perez-Carillo, Derek R Johnson, Anthony Kam, Benjamin Yin Ming Kwan, Lillian Lai, Neil U Lall, Fatima Memon, Mark Krycia, Satya Narayana Patro, Bojan Petrovic, Tiffany Y So, Gerard Thompson, Lei Wu, E Brooke Schrickel, Anu Bansal, Frederik Barkhof, Cristina Besada, Sammy Chu, Jason Druzgal, Alexandru Dusoi, Luciano Farage, Fabricio Feltrin, Amy Fong, Steve H Fung, R Ian Gray, Ichiro Ikuta, Michael Iv, Alida A Postma, Amit Mahajan, David Joyner, Chase Krumpelman, Laurent Letourneau-Guillon, Christie M Lincoln, Mate E Maros, Elka Miller, Fanny Esther A Morón, Esther A Nimchinsky, Ozkan Ozsarlak, Uresh Patel, Saurabh Rohatgi, Atin Saha, Anousheh Sayah, Eric D 
Schwartz, Robert Shih, Mark S Shiroishi, Juan E Small, Manoj Tanwar, Jewels Valerie, Brent D Weinberg, Matthew L White, Robert Young, Vahe M Zohrabian, Aynur Azizova, Melanie Maria Theresa Brüßeler, Mohanad Ghonim, Mohamed Ghonim, Abdullah Okar, Luca Pasquini, Yasaman Sharifi, Gagandeep Singh, Nico Sollmann, Theodora Soumala, Mahsa Taherzadeh, Philipp Vollmuth, Martha Foltyn-Dumitru, Ajay Malhotra, Aly H Abayazeed, Francesco Dellepiane, Philipp Lohmann, Víctor M Pérez-García, Hesham Elhalawani, Maria Correia de Verdier, Sanaria Al-Rubaiey, Rui Duarte Armindo, Kholod Ashraf, Moamen M Asla, Mohamed Badawy, Jeroen Bisschop, Nima Broomand Lomer, Jan Bukatz, Jim Chen, Petra Cimflova, Felix Corr, Alexis Crawley, Lisa Deptula, Tasneem Elakhdar, Islam H Shawali, Shahriar Faghani, Alexandra Frick, Vaibhav Gulati, Muhammad Ammar Haider, Fátima Hierro, Rasmus Holmboe Dahl, Sarah Maria Jacobs, Kuang-Chun Jim Hsieh, Sedat G Kandemirli, Katharina Kersting, Laura Kida, Sofia Kollia, Ioannis Koukoulithras, Xiao Li, Ahmed Abouelatta, Aya Mansour, Ruxandra-Catrinel Maria-Zamfirescu, Marcela Marsiglia, Yohana Sarahi Mateo-Camacho, Mark McArthur, Olivia McDonnell, Maire McHugh, Mana Moassefi, Samah Mostafa Morsi, Alexander Munteanu, Khanak K Nandolia, Syed Raza Naqvi, Yalda Nikanpour, Mostafa Alnoury, Abdullah Mohamed Aly Nouh, Francesca Pappafava, Markand D Patel, Samantha Petrucci, Eric Rawie, Scott Raymond, Borna Roohani, Sadeq Sabouhi, Laura M Sanchez-Garcia, Zoe Shaked, Pokhraj P Suthar, Talissa Altes, Edvin Isufi, Yaseen Dhemesh, Jaime Gass, Jonathan Thacker, Abdul Rahman Tarabishy, Benjamin Turner, Sebastiano Vacca, George K Vilanilam, Daniel Warren, David Weiss, Fikadu Worede, Sara Yousry, Wondwossen Lerebo, Alejandro Aristizabal, Alexandros Karargyris, Hasan Kassem, Sarthak Pati, Micah Sheller, Katherine E Evan Link, Evan Calabrese, Nourel Hoda Tahon, Ayman Nada, Yuri S Velichko, Spyridon Bakas, Jeffrey D Rudie, Mariam Aboian

The translation of AI-generated brain metastases (BM) segmentation into clinical practice relies heavily on diverse, high-quality annotated medical imaging datasets. The BraTS-METS 2023 challenge has gained momentum for testing and benchmarking algorithms using rigorously annotated, internationally compiled real-world datasets. This study presents the results of the segmentation challenge and characterizes the challenging cases that impacted the performance of the winning algorithms. Untreated brain metastases on standard anatomic MRI sequences (T1, T2, FLAIR, T1PG) from eight contributed international datasets were annotated in a stepwise method: published UNET algorithms, then a student, then a neuroradiologist, then a final approving neuroradiologist. Segmentations were ranked based on lesion-wise Dice and Hausdorff distance (HD95) scores. False positives (FP) and false negatives (FN) were rigorously penalized, receiving a score of 0 for Dice and a fixed penalty of 374 for HD95. The mean scores for the teams were calculated. Eight datasets comprising 1303 studies were annotated, with 402 studies (3076 lesions) released on Synapse as publicly available datasets for challenge competitors. Additionally, 31 studies (139 lesions) were held out for validation, and 59 studies (218 lesions) were used for testing. Segmentation accuracy was measured as rank across subjects, with the winning team achieving a LesionWise mean score of 7.9. The Dice score for the winning team was 0.65 ± 0.25. Common errors among the leading teams included false negatives for small lesions and spatial misregistration of masks. The Dice scores and lesion detection rates of all algorithms diminished with decreasing tumor size, particularly for tumors smaller than 100 mm³. In conclusion, algorithms for BM segmentation require further refinement to balance high sensitivity in lesion detection with the minimization of false positives and negatives.
The BraTS-METS 2023 challenge successfully curated well-annotated, diverse datasets and identified common errors, facilitating the translation of BM segmentation across varied clinical environments and providing personalized volumetric reports to patients undergoing BM treatment.
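The lesion-wise scoring with fixed penalties described above can be sketched as follows. This is a hypothetical illustration of the penalty scheme (Dice = 0 and HD95 = 374 for unmatched lesions), not the challenge's actual evaluation code; all function names are invented, and a real entry would compute HD95 with a surface-distance library rather than leaving it as `None`.

```python
import numpy as np

HD95_PENALTY = 374.0  # fixed HD95 penalty for FP/FN lesions (per the abstract)

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Voxel-wise Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom > 0 else 0.0

def lesionwise_scores(pred_lesions, true_lesions):
    """Score each true lesion against its best-overlapping predicted lesion.

    pred_lesions / true_lesions: lists of binary masks, one per lesion.
    Returns per-lesion (dice, hd95) pairs, with fixed penalties for
    false negatives (missed true lesions) and false positives
    (predicted lesions matching nothing).
    """
    scores = []
    matched = set()
    for t in true_lesions:
        overlaps = [dice(p, t) for p in pred_lesions]
        best = int(np.argmax(overlaps)) if overlaps else -1
        if best >= 0 and overlaps[best] > 0:
            matched.add(best)
            # A real implementation would compute HD95 here; we record
            # only Dice and leave HD95 to a surface-distance library.
            scores.append((overlaps[best], None))
        else:                       # false negative: fixed penalty
            scores.append((0.0, HD95_PENALTY))
    for i in range(len(pred_lesions)):
        if i not in matched:        # false positive: same fixed penalty
            scores.append((0.0, HD95_PENALTY))
    return scores
```

Under this scheme a single small missed lesion drags the per-subject mean sharply downward, which is consistent with the reported drop in scores for tumors under 100 mm³.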

Clinical surveillance of disease that has metastasized to the brain can be a laborious and time-consuming process, especially when multiple metastases are assessed manually. The Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) guideline, which relies on the unidimensional longest diameter, is commonly used in clinical and research settings to evaluate treatment response in patients with brain metastases. However, accurate volumetric assessment of lesions and perilesional edema is important for clinical decision-making and could substantially improve outcome prediction. A unique challenge in segmenting brain metastases is that they commonly present as small lesions; prior publications have not shown high accuracy in detecting and segmenting lesions smaller than 10 mm. Because of the marked variability in lesion size, the brain metastasis challenge differs from the previous MICCAI challenges on glioma segmentation. Unlike gliomas, which tend to be larger on presentation scans, brain metastases span a wide range of sizes and often include small lesions. We hope that the BraTS-METS dataset and challenge will advance the field of automatic brain metastasis detection and segmentation.
Citations: 0
Image Statistics Predict the Sensitivity of Perceptual Quality Metrics.
Pub Date : 2024-12-02
Alexander Hepburn, Valero Laparra, Raúl Santos-Rodriguez, Jesús Malo

Previously, Barlow and Attneave hypothesised a link between biological vision and information maximisation. Following Shannon, information was defined using the probability of natural images. Several physiological and psychophysical phenomena have been derived from principles like info-max, efficient coding, or optimal denoising. However, it remains unclear how this link is expressed in mathematical terms from image probability. Classical derivations relied on strong assumptions about the probability models and the behaviour of the sensors. Moreover, direct evaluation of the hypothesis was limited by the inability of classical image models to deliver accurate estimates of the probability. Here, we directly evaluate image probabilities using a generative model for natural images, and analyse how probability-related factors can be combined to predict the sensitivity of state-of-the-art subjective image quality metrics, a proxy for human perception. We use information theory and regression analysis to find a simple model that combines just two probability-related factors and achieves a 0.77 correlation with subjective metrics. This probability-based model is validated in two ways: through direct comparison with the opinion of real observers in a subjective quality experiment, and by reproducing basic trends of classical psychophysical facts such as the Contrast Sensitivity Function, the Weber law, and contrast masking.
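The regression analysis described above can be illustrated with a minimal sketch: combine two probability-related factors linearly and measure how well the combination correlates with a metric's sensitivity. The feature names (log-likelihood and a gradient-norm stand-in) and the synthetic data are assumptions for illustration, not the paper's actual features or pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
log_prob = rng.normal(size=n)            # stand-in for log p(x) from a generative model
grad_norm = rng.normal(size=n)           # stand-in for a second probability-related factor
# Synthetic "sensitivity" driven mostly by the two factors plus noise.
sensitivity = 0.6 * log_prob + 0.3 * grad_norm + 0.1 * rng.normal(size=n)

# Least-squares fit of sensitivity from the two factors (plus an intercept).
X = np.column_stack([log_prob, grad_norm, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sensitivity, rcond=None)
pred = X @ coef

# Pearson correlation between the 2-factor prediction and the target.
corr = np.corrcoef(pred, sensitivity)[0, 1]
print(f"correlation of 2-factor model: {corr:.2f}")
```

The paper's reported 0.77 correlation is against real subjective quality metrics; here the high correlation simply reflects how the toy data were generated.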

Citations: 0
The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn).
Pub Date : 2024-11-24
Hongwei Bran Li, Gian Marco Conte, Qingqiao Hu, Syed Muhammad Anwar, Florian Kofler, Ivan Ezhov, Koen van Leemput, Marie Piraud, Maria Diaz, Byrone Cole, Evan Calabrese, Jeff Rudie, Felix Meissen, Maruf Adewole, Anastasia Janas, Anahita Fathi Kazerooni, Dominic LaBella, Ahmed W Moawad, Keyvan Farahani, James Eddy, Timothy Bergquist, Verena Chung, Russell Takeshi Shinohara, Farouk Dako, Walter Wiggins, Zachary Reitman, Chunhao Wang, Xinyang Liu, Zhifan Jiang, Ariana Familiar, Elaine Johanson, Zeke Meier, Christos Davatzikos, John Freymann, Justin Kirby, Michel Bilello, Hassan M Fathallah-Shaykh, Roland Wiest, Jan Kirschke, Rivka R Colen, Aikaterini Kotrotsou, Pamela Lamontagne, Daniel Marcus, Mikhail Milchenko, Arash Nazeri, Marc-André Weber, Abhishek Mahajan, Suyash Mohan, John Mongan, Christopher Hess, Soonmee Cha, Javier Villanueva-Meyer, Errol Colak, Priscila Crivellaro, Andras Jakab, Jake Albrecht, Udunna Anazodo, Mariam Aboian, Thomas Yu, Verena Chung, Timothy Bergquist, James Eddy, Jake Albrecht, Ujjwal Baid, Spyridon Bakas, Marius George Linguraru, Bjoern Menze, Juan Eugenio Iglesias, Benedikt Wiestler

Automated brain tumor segmentation methods have become well-established and reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as patient motion. Consequently, the ability to substitute missing modalities and gain segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions.
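The task setup described above can be sketched as a small pipeline step: given any three of the four standard sequences, a synthesis model must fill in the missing one before segmentation runs. The `synthesize` callable below is a trivial placeholder (voxel-wise averaging); a real BraSyn entry would plug in a trained image-to-image model. All names are illustrative assumptions.

```python
from typing import Callable, Dict
import numpy as np

MODALITIES = ("t1", "t1ce", "t2", "flair")  # the four standard input sequences

def complete_study(study: Dict[str, np.ndarray],
                   synthesize: Callable[[Dict[str, np.ndarray], str], np.ndarray]
                   ) -> Dict[str, np.ndarray]:
    """Fill in whichever modality is missing and return the full set."""
    missing = [m for m in MODALITIES if m not in study]
    if len(missing) != 1:
        raise ValueError(f"expected exactly one missing modality, got {missing}")
    out = dict(study)
    out[missing[0]] = synthesize(study, missing[0])
    return out

def mean_synthesizer(study, target):
    # Placeholder model: average the available images voxel-wise.
    return np.mean(list(study.values()), axis=0)
```

A benchmark entry would then hand the completed four-modality study to an unmodified segmentation pipeline, which is the "ultimate aim" stated in the abstract.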

Citations: 0
Matching Patients to Clinical Trials with Large Language Models.
Pub Date : 2024-11-18
Qiao Jin, Zifeng Wang, Charalampos S Floudas, Fangyuan Chen, Changlin Gong, Dara Bracken-Clarke, Elisabetta Xue, Yifan Yang, Jimeng Sun, Zhiyong Lu

Patient recruitment is challenging for clinical trials. We introduce TrialGPT, an end-to-end framework for zero-shot patient-to-trial matching with large language models. TrialGPT comprises three modules: it first performs large-scale filtering to retrieve candidate trials (TrialGPT-Retrieval); then predicts criterion-level patient eligibility (TrialGPT-Matching); and finally generates trial-level scores (TrialGPT-Ranking). We evaluate TrialGPT on three cohorts of 183 synthetic patients with over 75,000 trial annotations. TrialGPT-Retrieval can recall over 90% of relevant trials using less than 6% of the initial collection. Manual evaluations on 1,015 patient-criterion pairs show that TrialGPT-Matching achieves an accuracy of 87.3% with faithful explanations, close to the expert performance. The TrialGPT-Ranking scores are highly correlated with human judgments and outperform the best-competing models by 43.8% in ranking and excluding trials. Furthermore, our user study reveals that TrialGPT can reduce the screening time by 42.6% in patient recruitment. Overall, these results have demonstrated promising opportunities for patient-to-trial matching with TrialGPT.
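The three-module structure described above (Retrieval, then criterion-level Matching, then trial-level Ranking) can be sketched as a simple orchestration. The callables stand in for LLM calls; the averaging aggregation and all names are placeholder assumptions, not TrialGPT's actual prompts or scoring rules.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Trial:
    trial_id: str
    criteria: List[str]          # free-text eligibility criteria

def match_patient(patient_note: str,
                  trials: List[Trial],
                  retrieve: Callable[[str, List[Trial]], List[Trial]],
                  judge_criterion: Callable[[str, str], float]
                  ) -> List[Tuple[str, float]]:
    """Return trials ranked by aggregated criterion-level eligibility."""
    ranked = []
    for trial in retrieve(patient_note, trials):      # stage 1: Retrieval
        # Stage 2: Matching, one eligibility score in [0, 1] per criterion.
        scores = [judge_criterion(patient_note, c) for c in trial.criteria]
        # Stage 3: Ranking, aggregate criterion scores to a trial-level score.
        trial_score = sum(scores) / len(scores) if scores else 0.0
        ranked.append((trial.trial_id, trial_score))
    return sorted(ranked, key=lambda x: x[1], reverse=True)
```

In the real system, each `judge_criterion` call also produces a natural-language explanation, which is what the manual evaluation of 1,015 patient-criterion pairs assessed.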

Clinical trials are critical for advancing drug development and evidence-based medicine, but their success is often hindered by challenges in patient recruitment. In this work, we investigate the potential of large language models (LLMs) to assist individual patients and referring physicians in identifying suitable clinical trials from an extensive selection. Specifically, we introduce TrialGPT, a novel architecture that uses LLMs to predict criterion-level eligibility with detailed explanations, which are then aggregated over free-text patient notes to rank and exclude candidate clinical trials. We evaluated TrialGPT on three publicly available cohorts of 184 patients and 18,238 annotated clinical trials. The experimental results demonstrate several key findings. First, TrialGPT achieves high criterion-level prediction accuracy with faithful explanations. Second, the aggregated trial-level TrialGPT scores are highly correlated with expert eligibility annotations. Third, these scores prove effective for ranking clinical trials and excluding ineligible candidates. Our error analysis suggests that current LLMs still make mistakes owing to limited medical knowledge and limited domain-specific contextual understanding. Nonetheless, we believe the explanatory capability of LLMs is highly valuable. Future research is needed on integrating such AI assistants into routine trial-matching workflows in real-world settings to improve their efficiency.
Citations: 0
Epithelial layer fluidization by curvature-induced unjamming.
Pub Date : 2024-11-04
Margherita De Marzio, Amit Das, Jeffrey J Fredberg, Dapeng Bi

The transition of an epithelial layer from a stationary, quiescent state to a highly migratory, dynamic state is required for wound healing, development, and regeneration. This transition, known as the unjamming transition (UJT), is responsible for epithelial fluidization and collective migration. Previous theoretical models have primarily focused on the UJT in flat epithelial layers, neglecting the effects of strong surface curvature characteristic of the epithelium in vivo. In this study, we investigate the role of surface curvature on tissue plasticity and cellular migration using a vertex model embedded on a spherical surface. Our findings reveal that increasing curvature promotes the UJT by reducing the energy barriers to cellular rearrangements. Higher curvature favors cell intercalation, mobility, and self-diffusivity, resulting in epithelial structures that are malleable and migratory when small, but become more rigid and stationary as they grow. As such, the greater is the curvature the stronger becomes the tendency for curvature-induced unjamming to emerge as a novel mechanism promoting epithelial layer fluidization, malleability, and remodeling. Conversely, the lesser the curvature, as in tissue development and growth, the stronger becomes the tendency for jamming to emerge as a mechanism of progressive epithelial layer solidification and stabilization. Together, these results provide a conceptual framework to better understand how cell shape, cell propulsion, and tissue geometry contribute to tissue malleability, remodeling, and stabilization.
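The vertex-model energetics underlying the study above can be sketched per cell: each cell pays an area-elasticity and a perimeter-elasticity cost, and the target shape index p0 = P0 / sqrt(A0) controls the jamming transition (flat-space critical value near 3.81). This back-of-envelope sketch omits the spherical embedding and curvature corrections of the actual model; parameters are illustrative assumptions.

```python
import numpy as np

def cell_energy(area, perim, a0=1.0, p0=3.81, k_a=1.0, k_p=1.0):
    """Flat-space vertex-model energy of one cell.

    area, perim : current cell area and perimeter
    a0, p0      : target area and target shape index P0 / sqrt(A0)
    k_a, k_p    : area and perimeter elastic moduli
    """
    target_perim = p0 * np.sqrt(a0)
    return k_a * (area - a0) ** 2 + k_p * (perim - target_perim) ** 2

# A regular hexagon of unit area has shape index ~3.72, below the target
# p0 = 3.81, so it carries a small residual perimeter penalty; cells can
# lower such barriers by elongating, which is what favors rearrangement.
print(cell_energy(1.0, 3.72))
```

The paper's claim is that curvature effectively lowers the energy barriers to the T1 rearrangements governed by this kind of energy, shifting the layer toward the unjammed, migratory side of the transition.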

Citations: 0
Learning temporal relationships between symbols with Laplace Neural Manifolds.
Pub Date : 2024-09-22
Marc W Howard, Zahra Gh Esfahani, Bao Le, Per B Sederberg

Firing across populations of neurons in many regions of the mammalian brain maintains a temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can both remember the past and anticipate the future over an analogous internal timeline. This paper presents a mathematical framework for building this timeline of the future. We assume that the input to the system is a time series of symbols (sparse tokenized representations of the present) in continuous time. The goal is to record pairwise temporal relationships between symbols over a wide range of time scales. We assume that the brain has access to a temporal memory in the form of the real Laplace transform. Hebbian associations with a diversity of synaptic time scales are formed between the past timeline and the present symbol. The associative memory stores the convolution between the past and the present. Knowing the temporal relationship between the past and the present allows one to infer relationships between the present and the future. With appropriate normalization, this Hebbian associative matrix can store a Laplace successor representation and a Laplace predecessor representation from which measures of temporal contingency can be evaluated. The diversity of synaptic time constants allows for learning of non-stationary statistics as well as joint statistics between triplets of symbols. This framework synthesizes a number of recent neuroscientific findings including results from dopamine neurons in the mesolimbic forebrain.
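The two core ingredients (a Laplace-domain temporal memory built from leaky integrators with a spectrum of rate constants s, and a Hebbian association between the present symbol and that memory) can be sketched as follows. The rate constants, time step, and event timing are invented for illustration, not taken from the paper:

```python
import numpy as np

def step_laplace(F, x, s, dt):
    # Each unit is a leaky integrator dF/dt = -s*F + x(t); across a
    # spectrum of rate constants s, F approximates the real Laplace
    # transform of the input history.
    return F + dt * (-s * F + x)

s = np.array([0.5, 1.0, 2.0, 4.0])   # diversity of synaptic time scales
dt = 0.01
F = np.zeros_like(s)
M = np.zeros((2, len(s)))            # Hebbian associations for 2 symbols

# Symbol A (index 0) occurs at t = 0.1 s, symbol B (index 1) at t = 0.6 s.
events = {10: 0, 60: 1}
for t in range(100):
    x = 1.0 / dt if t in events else 0.0
    F = step_laplace(F, x, s, dt)
    if t in events and events[t] == 1:
        # Hebbian update: associate the present symbol (B) with the
        # Laplace memory of the past, which still carries a trace of A.
        M[1] += F

# Slowly decaying units (small s) retain more of A's trace at B's onset.
print(M[1])
```

The stored row of M encodes the A-to-B temporal relationship across time scales; with the normalization the abstract mentions, such a matrix can be read out as successor or predecessor representations.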

Recent advances in neuroscience and psychology suggest that the brain has access to timelines of both the past and the future. Spiking across populations of neurons in many regions of the mammalian brain maintains a robust temporal memory, a neural timeline of the recent past. Behavioral results show that people can estimate an extended temporal model of the future, suggesting that the neural timeline of the past extends through the present into the future. This paper presents a mathematical framework for learning and expressing relationships between events in continuous time. We assume that the brain has access to a temporal memory of the recent past in the form of the real Laplace transform. Hebbian associations with a diversity of synaptic time scales are formed between the past and the present, recording the temporal relationships between events. Knowing the temporal relationships between past and present allows prediction of the relationships between present and future, thereby constructing an extended temporal forecast of the future. Both the memory of the past and the predicted future are expressed via the real Laplace transform, as firing rates over populations of neurons indexed by distinct rate constants s. The diversity of synaptic time scales allows temporal records over the longer time scales spanning trial histories. Within this framework, temporal credit assignment can be assessed via a Laplace temporal difference, which compares the future that actually follows a stimulus with the future predicted just before the stimulus was observed. This computational framework makes a number of specific neurophysiological predictions and, taken together, could provide the foundation for a future iteration of RL that takes temporal memory as a fundamental building block.
{"title":"Learning temporal relationships between symbols with Laplace Neural Manifolds.","authors":"Marc W Howard, Zahra Gh Esfahani, Bao Le, Per B Sederberg","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Firing across populations of neurons in many regions of the mammalian brain maintains a temporal memory, a neural timeline of the recent past. Behavioral results demonstrate that people can both remember the past and anticipate the future over an analogous internal timeline. This paper presents a mathematical framework for building this timeline of the future. We assume that the input to the system is a time series of symbols-sparse tokenized representations of the present-in continuous time. The goal is to record pairwise temporal relationships between symbols over a wide range of time scales. We assume that the brain has access to a temporal memory in the form of the real Laplace transform. Hebbian associations with a diversity of synaptic time scales are formed between the past timeline and the present symbol. The associative memory stores the convolution between the past and the present. Knowing the temporal relationship between the past and the present allows one to infer relationships between the present and the future. With appropriate normalization, this Hebbian associative matrix can store a Laplace successor representation and a Laplace predecessor representation from which measures of temporal contingency can be evaluated. The diversity of synaptic time constants allows for learning of non-stationary statistics as well as joint statistics between triplets of symbols. 
This framework synthesizes a number of recent neuroscientific findings including results from dopamine neurons in the mesolimbic forebrain.</p>","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9980275/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9113356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Probabilistic Genotype-Phenotype Maps Reveal Mutational Robustness of RNA Folding, Spin Glasses, and Quantum Circuits.
Pub Date : 2024-08-22
Anna Sappington, Vaibhav Mohanty

Recent studies of genotype-phenotype (GP) maps have reported universally enhanced phenotypic robustness to genotype mutations, a feature essential to evolution. Virtually all of these studies make a simplifying assumption that each genotype (represented as a sequence) maps deterministically to a single phenotype, such as a discrete structure. Here, we introduce probabilistic genotype-phenotype (PrGP) maps, where each genotype maps to a vector of phenotype probabilities, as a more realistic and universal language for investigating robustness in a variety of physical, biological, and computational systems. We study three model systems to show that PrGP maps offer a generalized framework which can handle uncertainty emerging from various physical sources: (1) thermal fluctuation in RNA folding, (2) external field disorder in spin glass ground state finding, and (3) superposition and entanglement in quantum circuits, which are realized experimentally on IBM quantum computers. In all three cases, we observe a novel biphasic robustness scaling which is enhanced relative to random expectation for more frequent phenotypes and approaches random expectation for less frequent phenotypes. We derive an analytical theory for the behavior of PrGP robustness, and we demonstrate that the theory is highly predictive of empirical robustness.
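As a toy illustration of the central idea (not one of the paper's three systems), phenotype robustness under a probabilistic GP map can be computed by weighting, for each genotype, the expected probability that a random point mutation preserves a phenotype by that phenotype's probability. The two-site binary map below is invented:

```python
import numpy as np

def prgp_robustness(gp, alphabet):
    # gp: genotype string -> {phenotype: probability}. For each phenotype p,
    # average over genotypes (weighted by P(p|g)) the expected probability
    # that a uniformly random point mutant of g also expresses p.
    L = len(next(iter(gp)))
    num, den = {}, {}
    for g, dist in gp.items():
        mutants = [g[:i] + a + g[i + 1:]
                   for i in range(L) for a in alphabet if a != g[i]]
        for p, pr in dist.items():
            keep = np.mean([gp[m].get(p, 0.0) for m in mutants])
            num[p] = num.get(p, 0.0) + pr * keep
            den[p] = den.get(p, 0.0) + pr
    return {p: num[p] / den[p] for p in num}

# Invented map: phenotype "A" fades as 1s replace 0s in the genotype.
gp = {"00": {"A": 1.0},
      "01": {"A": 0.5, "B": 0.5},
      "10": {"A": 0.5, "B": 0.5},
      "11": {"B": 1.0}}
print(prgp_robustness(gp, "01"))
```

Setting every genotype's distribution to a single phenotype with probability 1 recovers the deterministic GP-map robustness that earlier studies assumed.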

{"title":"Probabilistic Genotype-Phenotype Maps Reveal Mutational Robustness of RNA Folding, Spin Glasses, and Quantum Circuits.","authors":"Anna Sappington, Vaibhav Mohanty","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Recent studies of genotype-phenotype (GP) maps have reported universally enhanced phenotypic robustness to genotype mutations, a feature essential to evolution. Virtually all of these studies make a simplifying assumption that each genotype-represented as a sequence-maps deterministically to a single phenotype, such as a discrete structure. Here, we introduce probabilistic genotype-phenotype (PrGP) maps, where each genotype maps to a vector of phenotype probabilities, as a more realistic and universal language for investigating robustness in a variety of physical, biological, and computational systems. We study three model systems to show that PrGP maps offer a generalized framework which can handle uncertainty emerging from various physical sources: (1) thermal fluctuation in RNA folding, (2) external field disorder in spin glass ground state finding, and (3) superposition and entanglement in quantum circuits, which are realized experimentally on IBM quantum computers. In all three cases, we observe a novel biphasic robustness scaling which is enhanced relative to random expectation for more frequent phenotypes and approaches random expectation for less frequent phenotypes. 
We derive an analytical theory for the behavior of PrGP robustness, and we demonstrate that the theory is highly predictive of empirical robustness.</p>","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9882568/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10592904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reliability of energy landscape analysis of resting-state functional MRI data.
Pub Date : 2024-08-20
Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, Naoki Masuda

Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.
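Concretely, the landscape comes from a pairwise maximum-entropy (Ising) model fitted to binarized activity; once the fields h and couplings J are estimated, the local minima of the energy are the attractor states of the landscape. A minimal sketch with an invented coupling matrix (the estimation of h and J from data is omitted):

```python
import itertools
import numpy as np

def ising_energy(sigma, h, J):
    # E(sigma) = -sum_i h_i sigma_i - sum_{i<j} J_ij sigma_i sigma_j,
    # with sigma_i in {-1, +1} and J symmetric with zero diagonal.
    s = np.asarray(sigma, dtype=float)
    return float(-h @ s - 0.5 * s @ J @ s)

def local_minima(h, J):
    # Enumerate all 2^N activity patterns; keep those whose N
    # single-flip neighbours all have strictly higher energy.
    N = len(h)
    minima = []
    for sigma in itertools.product([-1, 1], repeat=N):
        e = ising_energy(sigma, h, J)
        flips = (np.array(sigma) * np.where(np.arange(N) == i, -1, 1)
                 for i in range(N))
        if all(ising_energy(f, h, J) > e for f in flips):
            minima.append(sigma)
    return minima

# Invented ferromagnetic couplings: two attractors, all-down and all-up.
J = np.ones((3, 3)) - np.eye(3)
print(local_minima(np.zeros(3), J))
```

Indices such as the number of local minima and the barrier heights between them are the kinds of landscape characterizations whose test-retest reliability the study examines.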

{"title":"Reliability of energy landscape analysis of resting-state functional MRI data.","authors":"Pitambar Khanra, Johan Nakuci, Sarah Muldoon, Takamitsu Watanabe, Naoki Masuda","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Energy landscape analysis is a data-driven method to analyze multidimensional time series, including functional magnetic resonance imaging (fMRI) data. It has been shown to be a useful characterization of fMRI data in health and disease. It fits an Ising model to the data and captures the dynamics of the data as movement of a noisy ball constrained on the energy landscape derived from the estimated Ising model. In the present study, we examine test-retest reliability of the energy landscape analysis. To this end, we construct a permutation test that assesses whether or not indices characterizing the energy landscape are more consistent across different sets of scanning sessions from the same participant (i.e., within-participant reliability) than across different sets of sessions from different participants (i.e., between-participant reliability). We show that the energy landscape analysis has significantly higher within-participant than between-participant test-retest reliability with respect to four commonly used indices. We also show that a variational Bayesian method, which enables us to estimate energy landscapes tailored to each participant, displays comparable test-retest reliability to that using the conventional likelihood maximization method. 
The proposed methodology paves the way to perform individual-level energy landscape analysis for given data sets with a statistically controlled reliability.</p>","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10312792/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10143764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos.
Pub Date : 2024-07-12
Polina Turishcheva, Paul G Fahey, Michaela Vystrčilová, Laura Hansel, Rachel Froebe, Kayla Ponder, Yongrong Qiu, Konstantin F Willeke, Mohammad Bashiri, Eric Wang, Zhiwei Ding, Andreas S Tolias, Fabian H Sinz, Alexander S Ecker

Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of ten mice, containing responses from over 78,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. 
We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.
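A common headline metric for neural-prediction benchmarks of this kind is the per-neuron correlation between predicted and observed responses. A minimal sketch follows; the competition's exact metric and data layout may differ, and the data here are synthetic:

```python
import numpy as np

def per_neuron_correlation(pred, obs):
    # Pearson correlation computed independently for each neuron,
    # where rows index time bins / trials and columns index neurons.
    p = pred - pred.mean(axis=0)
    o = obs - obs.mean(axis=0)
    denom = np.sqrt((p ** 2).sum(axis=0) * (o ** 2).sum(axis=0))
    return (p * o).sum(axis=0) / denom

rng = np.random.default_rng(0)
obs = rng.normal(size=(200, 5))                  # observed responses
noisy = obs + 0.5 * rng.normal(size=obs.shape)   # an imperfect "model"
print(per_neuron_correlation(noisy, obs))
```

Averaging such per-neuron scores over the withheld test responses gives a single leaderboard number, with the out-of-domain track evaluated on stimuli whose statistics differ from the training set.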

{"title":"The Dynamic Sensorium competition for predicting large-scale mouse visual cortex activity from videos.","authors":"Polina Turishcheva, Paul G Fahey, Michaela Vystrčilová, Laura Hansel, Rachel Froebe, Kayla Ponder, Yongrong Qiu, Konstantin F Willeke, Mohammad Bashiri, Eric Wang, Zhiwei Ding, Andreas S Tolias, Fabian H Sinz, Alexander S Ecker","doi":"","DOIUrl":"","url":null,"abstract":"<p><p>Understanding how biological visual systems process information is challenging due to the complex nonlinear relationship between neuronal responses and high-dimensional visual input. Artificial neural networks have already improved our understanding of this system by allowing computational neuroscientists to create predictive models and bridge biological and machine vision. During the Sensorium 2022 competition, we introduced benchmarks for vision models with static input (i.e. images). However, animals operate and excel in dynamic environments, making it crucial to study and understand how the brain functions under these conditions. Moreover, many biological theories, such as predictive coding, suggest that previous input is crucial for current input processing. Currently, there is no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we propose the Sensorium 2023 Benchmark Competition with dynamic input (https://www.sensorium-competition.net/). This competition includes the collection of a new large-scale dataset from the primary visual cortex of ten mice, containing responses from over 78,000 neurons to over 2 hours of dynamic stimuli per neuron. Participants in the main benchmark track will compete to identify the best predictive models of neuronal responses for dynamic input (i.e. video). We will also host a bonus track in which submission performance will be evaluated on out-of-domain input, using withheld neuronal responses to dynamic input stimuli whose statistics differ from the training set. 
Both tracks will offer behavioral data along with video stimuli. As before, we will provide code, tutorials, and strong pre-trained baseline models to encourage participation. We hope this competition will continue to strengthen the accompanying Sensorium benchmarks collection as a standard tool to measure progress in large-scale neural system identification models of the entire mouse visual hierarchy and beyond.</p>","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/22/e8/nihpp-2305.19654v1.PMC10312815.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9814779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A dual benchmarking study of facial forgery and facial forensics
Pub Date : 2024-07-05 DOI: 10.1049/cit2.12362
Minh Tam Pham, T. T. Huynh, Vinh Tong, T. Nguyen, T. Nguyen, Hongzhi Yin, Q. Nguyen
In recent years, visual facial forgery has reached a level of sophistication at which humans cannot identify fraud, which poses a significant threat to information security. A wide range of malicious applications have emerged, such as deepfakes, fake news, defamation or blackmailing of celebrities, impersonation of politicians in political warfare, and the spreading of rumours to attract views. As a result, a rich body of visual forensic techniques has been proposed in an attempt to stop this dangerous trend. However, there is no comprehensive, fair, and unified performance evaluation to enlighten the community on the best-performing methods. The authors present a systematic benchmark beyond traditional surveys that provides in-depth insights into facial forgery and facial forensics, grounded in robustness tests such as contrast, brightness, noise, resolution, missing information, and compression. The authors also provide a practical guideline to the benchmarking results, to determine the characteristics of the methods that serve as a comparative reference in this never-ending war between measures and countermeasures. The authors' source code is open to the public.
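The robustness axes listed (contrast, brightness, noise, resolution) correspond to simple image perturbations. A hedged numpy sketch of such a perturbation suite, with invented parameterizations rather than the paper's exact protocols:

```python
import numpy as np

def perturb(img, kind, level, rng=None):
    # img: float array in [0, 1]. Each branch is one illustrative
    # perturbation family from the benchmark's robustness axes.
    if kind == "contrast":          # level < 1 washes out, > 1 exaggerates
        return np.clip((img - 0.5) * level + 0.5, 0.0, 1.0)
    if kind == "brightness":        # additive intensity shift
        return np.clip(img + level, 0.0, 1.0)
    if kind == "noise":             # additive Gaussian noise
        rng = rng if rng is not None else np.random.default_rng(0)
        return np.clip(img + rng.normal(0.0, level, img.shape), 0.0, 1.0)
    if kind == "resolution":        # crude k-fold downsample then upsample
        k = int(level)
        small = img[::k, ::k]
        up = np.repeat(np.repeat(small, k, axis=0), k, axis=1)
        return up[:img.shape[0], :img.shape[1]]
    raise ValueError(f"unknown perturbation: {kind}")

img = np.full((8, 8), 0.25)
print(perturb(img, "brightness", 0.5).mean())   # shifts the mean to 0.75
```

Running a detector over perturbed copies of each test image and tracking the accuracy drop per perturbation family yields the kind of robustness comparison the benchmark reports.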
{"title":"A dual benchmarking study of facial forgery and facial forensics","authors":"Minh Tam Pham, T. T. Huynh, Vinh Tong, T. Nguyen, T. Nguyen, Hongzhi Yin, Q. Nguyen","doi":"10.1049/cit2.12362","DOIUrl":"https://doi.org/10.1049/cit2.12362","url":null,"abstract":"In recent years, visual facial forgery has reached a level of sophistication that humans cannot identify fraud, which poses a significant threat to information security. A wide range of malicious applications have emerged, such as deepfake, fake news, defamation or blackmailing of celebrities, impersonation of politicians in political warfare, and the spreading of rumours to attract views. As a result, a rich body of visual forensic techniques has been proposed in an attempt to stop this dangerous trend. However, there is no comprehensive, fair, and unified performance evaluation to enlighten the community on best performing methods. The authors present a systematic benchmark beyond traditional surveys that provides in‐depth insights into facial forgery and facial forensics, grounding on robustness tests such as contrast, brightness, noise, resolution, missing information, and compression. The authors also provide a practical guideline of the benchmarking results, to determine the characteristics of the methods that serve as a comparative reference in this never‐ending war between measures and countermeasures. 
The authors’ source code is open to the public.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":" 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141674328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1