Elisa Marchetto, Hannah Eichhorn, Daniel Gallichan, Julia A Schnabel, Melanie Ganz
{"title":"运动伪影存在下图像质量度量与放射学评价的一致性。","authors":"Elisa Marchetto, Hannah Eichhorn, Daniel Gallichan, Julia A Schnabel, Melanie Ganz","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>Reliable image quality assessment is crucial for evaluating new motion correction methods for magnetic resonance imaging. In this work, we compare the performance of commonly used reference-based and reference-free image quality metrics on a unique dataset with real motion artifacts. We further analyze the image quality metrics' robustness to typical pre-processing techniques.</p><p><strong>Methods: </strong>We compared five reference-based and five reference-free image quality metrics on data acquired with and without intentional motion (2D and 3D sequences). The metrics were recalculated seven times with varying pre-processing steps. The anonymized images were rated by radiologists and radiographers on a 1-5 Likert scale. Spearman correlation coefficients were computed to assess the relationship between image quality metrics and observer scores.</p><p><strong>Results: </strong>All reference-based image quality metrics showed strong correlation with observer assessments, with minor performance variations across sequences. Among reference-free metrics, Average Edge Strength offers the most promising results, as it consistently displayed stronger correlations across all sequences compared to the other reference-free metrics. Overall, the strongest correlation was achieved with percentile normalization and restricting the metric values to the skull-stripped brain region. In contrast, correlations were weaker when not applying any brain mask and using min-max or no normalization.</p><p><strong>Conclusion: </strong>Reference-based metrics reliably correlate with radiological evaluation across different sequences and datasets. Pre-processing steps, particularly normalization and brain masking, significantly influence the correlation values. Future research should focus on refining pre-processing techniques and exploring machine learning approaches for automated image quality evaluation.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703327/pdf/","citationCount":"0","resultStr":"{\"title\":\"Agreement of Image Quality Metrics with Radiological Evaluation in the Presence of Motion Artifacts.\",\"authors\":\"Elisa Marchetto, Hannah Eichhorn, Daniel Gallichan, Julia A Schnabel, Melanie Ganz\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Purpose: </strong>Reliable image quality assessment is crucial for evaluating new motion correction methods for magnetic resonance imaging. In this work, we compare the performance of commonly used reference-based and reference-free image quality metrics on a unique dataset with real motion artifacts. We further analyze the image quality metrics' robustness to typical pre-processing techniques.</p><p><strong>Methods: </strong>We compared five reference-based and five reference-free image quality metrics on data acquired with and without intentional motion (2D and 3D sequences). The metrics were recalculated seven times with varying pre-processing steps. The anonymized images were rated by radiologists and radiographers on a 1-5 Likert scale. 
Spearman correlation coefficients were computed to assess the relationship between image quality metrics and observer scores.</p><p><strong>Results: </strong>All reference-based image quality metrics showed strong correlation with observer assessments, with minor performance variations across sequences. Among reference-free metrics, Average Edge Strength offers the most promising results, as it consistently displayed stronger correlations across all sequences compared to the other reference-free metrics. Overall, the strongest correlation was achieved with percentile normalization and restricting the metric values to the skull-stripped brain region. In contrast, correlations were weaker when not applying any brain mask and using min-max or no normalization.</p><p><strong>Conclusion: </strong>Reference-based metrics reliably correlate with radiological evaluation across different sequences and datasets. Pre-processing steps, particularly normalization and brain masking, significantly influence the correlation values. Future research should focus on refining pre-processing techniques and exploring machine learning approaches for automated image quality evaluation.</p>\",\"PeriodicalId\":93888,\"journal\":{\"name\":\"ArXiv\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-12-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703327/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"ArXiv\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Agreement of Image Quality Metrics with Radiological Evaluation in the Presence of Motion Artifacts.
Purpose: Reliable image quality assessment is crucial for evaluating new motion correction methods for magnetic resonance imaging. In this work, we compare the performance of commonly used reference-based and reference-free image quality metrics on a unique dataset with real motion artifacts. We further analyze the image quality metrics' robustness to typical pre-processing techniques.
Methods: We compared five reference-based and five reference-free image quality metrics on data acquired with and without intentional motion (2D and 3D sequences). The metrics were recalculated seven times with varying pre-processing steps. The anonymized images were rated by radiologists and radiographers on a 1-5 Likert scale. Spearman correlation coefficients were computed to assess the relationship between image quality metrics and observer scores.
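The correlation analysis described above can be illustrated with a minimal sketch: computing the Spearman rank correlation between a metric's values and the corresponding observer ratings. The variable names and example values below are hypothetical and not taken from the study's data.

```python
# Minimal sketch of the Spearman correlation analysis between an image
# quality metric and observer Likert scores (all values are hypothetical).
import numpy as np
from scipy.stats import spearmanr

# One metric value and one mean observer rating per image.
metric_values = np.array([0.91, 0.85, 0.62, 0.40, 0.78, 0.55])  # e.g. a reference-based metric
observer_scores = np.array([5, 4, 3, 1, 4, 2])                  # 1-5 Likert ratings

rho, p_value = spearmanr(metric_values, observer_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```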
Results: All reference-based image quality metrics showed strong correlation with observer assessments, with minor performance variations across sequences. Among reference-free metrics, Average Edge Strength offered the most promising results, as it consistently displayed stronger correlations across all sequences than the other reference-free metrics. Overall, the strongest correlation was achieved when applying percentile normalization and restricting the metric computation to the skull-stripped brain region. In contrast, correlations were weaker when no brain mask was applied and when min-max or no normalization was used.
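The two pre-processing choices highlighted in the results, intensity normalization and brain masking, can be sketched as follows. The percentile bounds and function names are assumptions for illustration, not the study's exact implementation.

```python
# Illustrative sketch of two pre-processing variants compared in the study:
# percentile normalization restricted to a skull-stripped brain mask, versus
# plain min-max normalization on the full image (details are assumptions).
import numpy as np

def percentile_normalize(image: np.ndarray, mask: np.ndarray,
                         lower: float = 1.0, upper: float = 99.0) -> np.ndarray:
    """Rescale intensities using percentiles computed inside the brain mask."""
    lo, hi = np.percentile(image[mask > 0], [lower, upper])
    normalized = np.clip((image - lo) / (hi - lo), 0.0, 1.0)
    return normalized * (mask > 0)  # restrict subsequent metric computation to the brain

def minmax_normalize(image: np.ndarray) -> np.ndarray:
    """Rescale the whole image to [0, 1] using its global minimum and maximum."""
    return (image - image.min()) / (image.max() - image.min())
```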
Conclusion: Reference-based metrics reliably correlate with radiological evaluation across different sequences and datasets. Pre-processing steps, particularly normalization and brain masking, significantly influence the correlation values. Future research should focus on refining pre-processing techniques and exploring machine learning approaches for automated image quality evaluation.