Lainey Bukowiec, Martinus Megalla, Alexander Bartzokis, Hunter Hasley, Steven Carlson, John Koerner
{"title":"A response to comparison of different predicting models to assist the diagnosis of spinal lesions, Chu et al. 2021.","authors":"Lainey Bukowiec, Martinus Megalla, Alexander Bartzokis, Hunter Hasley, Steven Carlson, John Koerner","doi":"10.1080/17538157.2021.1994578","DOIUrl":null,"url":null,"abstract":"We read with great interest the article by Chu et al. While the prospect of employing machine learning for diagnostic purposes is exciting, we found several issues with the way the technique described in this paper was designed and executed. Although machine learning is a powerful classification tool, care must be taken to ensure that data is processed properly, as inappropriate models often lead to flawed results. We took particular issue with the K-fold cross-validation methodology; while this is a commonly used technique to reduce bias and improve model generalizability, a separate, untouched testing set must be used to generate final results. When used appropriately, K-fold cross validation can help researchers choose the best performing model and tune hyperparameters by using rotating partitions of the training set as an intermediate validation testing set. Performance on this testing fold can inform researchers of which model is likely to be the most accurate and generalizable. Chu et al. appear to have taken the average accuracy of their models’ performance on the testing fold and reported this as a final result. All data points in a testing data set should be new and unseen from the point of view of the model in order to draw a conclusion about a larger population. The methodology in this paper ran through testing iterations on data points that were also used as training data points in other folds, potentially overfitting the model to the training data and producing biased results. Furthermore, we felt that an unsatisfactory degree of detail regarding the models was included in this paper. 
The preprocessing and regularization step was not detailed and information on the underlying data is limited. For example, the clustering graph reduces the 46-dimensional data to two dimensions using unspecified functions. The choice of using clustering as a classification tool in a supervised learning problem is highly unconventional and no basis for this decision is given; the poor accuracy of the clustering model supports this assertion. The advantage of a neural network over more simple models, such as Support Vector Machine or Linear Regression, lies in its ability to generate non-linear classifications and its strong performance when paired with large, supervised data sets. The clustering graph seems to suggest that this data is linearly separable (supported by the high performance of LDA, a linear classifier) and the data set is small, raising questions regarding the choice of models. Beyond the technical limitations of this paper, there are inherent problems with the conceptual design of this technique. The conditions examined – herniated intervertebral disc, spondylolisthesis, spinal stenosis – can present with overlapping symptoms such as diffuse back pain, pain radiating down the legs, positional pain, to name a few. There is no pathognomonic combination of symptoms or demographic patient data that can lead to definitive diagnosis of any of these conditions without confirmatory imaging. Machine-learning algorithms are able to parse through data orders of magnitude faster than physicians. Higher throughput with more patients per unit of time will increase opportunities for overdiagnosis and may drive unnecessary imaging workups that increase cost. Even more concerning is the possibility of incorrectly mishandling an emergency spinal condition. Back pain can present as a result of a variety of conditions including the above, but also include severe diagnoses that require immediate action such as spinal cord compression or epidural spinal abscess. 
These emergency conditions are readily picked up by the discerning physician eye; however, using an algorithm that is inherently flawed to work up back pain and thus risk mishandling a medical emergency is highly concerning. We greatly appreciate the innovation of the authors and hope that they use our feedback and suggestions to ensure greater validity and improved application of this technique. INFORMATICS FOR HEALTH AND SOCIAL CARE 2022, VOL. 47, NO. 1, 120–121 https://doi.org/10.1080/17538157.2021.1994578","PeriodicalId":54984,"journal":{"name":"Informatics for Health & Social Care","volume":"47 1","pages":"120-121"},"PeriodicalIF":2.5000,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Informatics for Health & Social Care","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1080/17538157.2021.1994578","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Abstract
We read with great interest the article by Chu et al. While the prospect of employing machine learning for diagnostic purposes is exciting, we found several issues with the way the technique described in this paper was designed and executed. Although machine learning is a powerful classification tool, care must be taken to ensure that data are processed properly, as inappropriate models often lead to flawed results. We took particular issue with the K-fold cross-validation methodology; while this is a commonly used technique to reduce bias and improve model generalizability, a separate, untouched testing set must be used to generate final results. When used appropriately, K-fold cross-validation can help researchers choose the best-performing model and tune hyperparameters by using rotating partitions of the training set as an intermediate validation set. Performance on this validation fold can inform researchers which model is likely to be the most accurate and generalizable. Chu et al. appear to have taken the average accuracy of their models' performance on the validation fold and reported this as a final result. All data points in a testing set should be new and unseen from the point of view of the model in order to draw a conclusion about a larger population. The methodology in this paper ran testing iterations on data points that were also used as training data points in other folds, potentially overfitting the model to the training data and producing biased results. Furthermore, we felt that an unsatisfactory degree of detail regarding the models was included in this paper. The preprocessing and regularization steps were not detailed, and information on the underlying data is limited. For example, the clustering graph reduces the 46-dimensional data to two dimensions using unspecified functions.
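The distinction drawn above can be made concrete. Below is a minimal sketch of the recommended validation scheme, assuming scikit-learn and synthetic data standing in for the 46-feature clinical data set: cross-validation scores guide model selection on the training split only, while the held-out test set is touched exactly once to produce the reportable figure.

```python
# Hypothetical illustration: proper K-fold cross-validation (scikit-learn
# assumed; the data here are synthetic, not the data from Chu et al.).
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic stand-in for a 46-feature clinical data set.
X, y = make_classification(n_samples=300, n_features=46, random_state=0)

# Hold out a test set BEFORE any model selection or tuning.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LinearDiscriminantAnalysis()

# K-fold CV on the training set only: this guides model choice and
# hyperparameter tuning, and should not be reported as a final result.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print(f"mean CV accuracy (selection only): {cv_scores.mean():.3f}")

# The reportable accuracy comes from the untouched test set.
model.fit(X_train, y_train)
print(f"held-out test accuracy: {model.score(X_test, y_test):.3f}")
```

Reporting the averaged cross-validation accuracy in place of the final line above is precisely the error described in the letter: every point has served as training data in some fold, so the estimate is not drawn from unseen data.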
The choice of clustering as a classification tool in a supervised learning problem is highly unconventional, and no basis for this decision is given; the poor accuracy of the clustering model supports this assertion. The advantage of a neural network over simpler models, such as a Support Vector Machine or Linear Regression, lies in its ability to generate non-linear classifications and its strong performance when paired with large, supervised data sets. The clustering graph seems to suggest that these data are linearly separable (supported by the high performance of LDA, a linear classifier) and the data set is small, raising questions regarding the choice of models. Beyond the technical limitations of this paper, there are inherent problems with the conceptual design of this technique. The conditions examined – herniated intervertebral disc, spondylolisthesis, spinal stenosis – can present with overlapping symptoms such as diffuse back pain, pain radiating down the legs, and positional pain. There is no pathognomonic combination of symptoms or demographic patient data that can lead to a definitive diagnosis of any of these conditions without confirmatory imaging. Machine-learning algorithms are able to parse through data orders of magnitude faster than physicians. Higher throughput, with more patients per unit of time, will increase opportunities for overdiagnosis and may drive unnecessary imaging workups that increase cost. Even more concerning is the possibility of mishandling an emergency spinal condition. Back pain can result from a variety of conditions, including those above, but also from severe diagnoses that require immediate action, such as spinal cord compression or epidural spinal abscess. These emergency conditions are readily picked up by the discerning physician's eye; using an inherently flawed algorithm to work up back pain, and thus risking the mishandling of a medical emergency, is highly concerning.
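The mismatch between task and model can be shown on toy data. The sketch below (scikit-learn assumed; data are synthetic and hypothetical, not from the paper) fits a supervised linear classifier and an unsupervised clustering model to the same labeled, roughly linearly separable data. The clustering model never sees the labels, which is exactly why it is an odd choice for a supervised classification problem.

```python
# Hypothetical comparison: supervised LDA vs. clustering used as a
# classifier on labeled, approximately linearly separable synthetic data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_blobs(n_samples=400, centers=2, cluster_std=3.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

# Supervised: LDA learns a linear decision boundary from the labels.
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
lda_acc = lda.score(X_te, y_te)

# Unsupervised: KMeans ignores y entirely; its arbitrary cluster ids must
# be mapped to class labels after the fact.
km = KMeans(n_clusters=2, n_init=10, random_state=1).fit(X_tr)
raw = accuracy_score(y_te, km.predict(X_te))
km_acc = max(raw, 1 - raw)  # best of the two possible id-to-label mappings

print(f"LDA accuracy:    {lda_acc:.3f}")
print(f"KMeans accuracy: {km_acc:.3f}")
```

On data like these a supervised linear model has every advantage: it uses the labels directly, while the clustering model can only hope its unsupervised structure happens to align with the diagnostic classes.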
We greatly appreciate the innovation of the authors and hope that they use our feedback and suggestions to ensure greater validity and improved application of this technique.
Journal description:
Informatics for Health & Social Care promotes evidence-based informatics as applied to the domain of health and social care. It showcases informatics research and practice within the many and diverse contexts of care; it takes personal information, both its direct and indirect use, as its central focus.
The scope of the Journal is broad, encompassing both the properties of care information and the life-cycle of associated information systems.
Consideration of the properties of care information will necessarily include the data itself, its representation, structure, and associated processes, as well as the context of its use, highlighting the related communication, computational, cognitive, social and ethical aspects.
Consideration of the life-cycle of care information systems covers the full range from requirements, specifications, theoretical models and conceptual design through to sustainable implementations and the evaluation of impacts. Empirical evidence and experiences related to implementation are particularly welcome.
Informatics for Health & Social Care seeks to consolidate and add to the core knowledge within the disciplines of Health and Social Care Informatics. The Journal therefore welcomes scientific papers, case studies and literature reviews. Examples of novel approaches are particularly welcome. Articles might, for example, show how care data are collected and transformed into useful and usable information, how informatics research is translated into practice, how specific results can be generalised, or perhaps provide case studies that facilitate learning from experience.