{"title":"将智能手机照片的计算机视觉技术纳入炎症性关节炎筛查:印度患者队列得出的结果","authors":"Sanat Phatak, Ruchil Saptarshi, Vanshaj Sharma, Rohan Shah, Abhishek Zanwar, Pratiksha Hegde, Somashree Chakraborty, Pranay Goel","doi":"10.1101/2024.08.19.24312283","DOIUrl":null,"url":null,"abstract":"Background: Convolutional neural networks (CNNs) have been used to classify medical images; few studies use smartphone photographs that are scalable at point of care. We previously showed proof of principle that CNNs could detect inflammatory arthritis in three hand joints. We now studied a screening CNN to differentiate from controls. Methods: We studied consecutive patients with early inflammatory arthritis and healthy controls, all examined by a rheumatologist (15% by two). Standardized photographs of the hands were taken using a studio box, anonymized, and cropped around joints. We fine-tuned pre-trained CNN models on our dataset (80% training; 20% test set). We used an Inception-ResNet-v2 backbone CNN modified for two class outputs (Patient vs Control) on uncropped photos. Inception-ResNet-v2 CNNs were trained on cropped photos of Middle finger Proximal Interphalangeal (MFPIP), Index finger PIP (IFPIP) and wrist. We report representative values of accuracy, sensitivity, specificity. Results: We studied 800 hands from 200 controls (mean age 37.8 years) and 200 patients (mean age 49.6 years; 134 with rheumatoid arthritis amongst other diagnoses). Two rheumatologists had a concordance of 0.89 in 404 joints. The wrist was commonly involved (173/400) followed by the MFPIP (134) and IFPIP (128). The screening CNN achieved excellent accuracy (98%), sensitivity (98%) and specificity (98%) in predicting a patient compared to controls. Joint-specific CNN accuracy, sensitivity and specificity were highest for the wrist (80% , 88% , 72%) followed by the IFPIP (79%, 89% ,73%) and MFPIP (76%, 91%, 70%). 
Conclusion\nComputer vision without feature engineering can distinguish between patients and controls based on smartphone photographs with good accuracy, showing promise as a screening tool prior to joint-specific CNNs. Future research includes validating findings in diverse populations, refining models to improve specificity in joints and integrating this technology into clinical workflows.","PeriodicalId":501212,"journal":{"name":"medRxiv - Rheumatology","volume":"1 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Incorporating computer vision on smart phone photographs into screening for inflammatory arthritis: results from an Indian patient cohort\",\"authors\":\"Sanat Phatak, Ruchil Saptarshi, Vanshaj Sharma, Rohan Shah, Abhishek Zanwar, Pratiksha Hegde, Somashree Chakraborty, Pranay Goel\",\"doi\":\"10.1101/2024.08.19.24312283\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background: Convolutional neural networks (CNNs) have been used to classify medical images; few studies use smartphone photographs that are scalable at point of care. We previously showed proof of principle that CNNs could detect inflammatory arthritis in three hand joints. We now studied a screening CNN to differentiate from controls. Methods: We studied consecutive patients with early inflammatory arthritis and healthy controls, all examined by a rheumatologist (15% by two). Standardized photographs of the hands were taken using a studio box, anonymized, and cropped around joints. We fine-tuned pre-trained CNN models on our dataset (80% training; 20% test set). We used an Inception-ResNet-v2 backbone CNN modified for two class outputs (Patient vs Control) on uncropped photos. Inception-ResNet-v2 CNNs were trained on cropped photos of Middle finger Proximal Interphalangeal (MFPIP), Index finger PIP (IFPIP) and wrist. 
We report representative values of accuracy, sensitivity, specificity. Results: We studied 800 hands from 200 controls (mean age 37.8 years) and 200 patients (mean age 49.6 years; 134 with rheumatoid arthritis amongst other diagnoses). Two rheumatologists had a concordance of 0.89 in 404 joints. The wrist was commonly involved (173/400) followed by the MFPIP (134) and IFPIP (128). The screening CNN achieved excellent accuracy (98%), sensitivity (98%) and specificity (98%) in predicting a patient compared to controls. Joint-specific CNN accuracy, sensitivity and specificity were highest for the wrist (80% , 88% , 72%) followed by the IFPIP (79%, 89% ,73%) and MFPIP (76%, 91%, 70%). Conclusion\\nComputer vision without feature engineering can distinguish between patients and controls based on smartphone photographs with good accuracy, showing promise as a screening tool prior to joint-specific CNNs. Future research includes validating findings in diverse populations, refining models to improve specificity in joints and integrating this technology into clinical workflows.\",\"PeriodicalId\":501212,\"journal\":{\"name\":\"medRxiv - Rheumatology\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"medRxiv - Rheumatology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1101/2024.08.19.24312283\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"medRxiv - 
Rheumatology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1101/2024.08.19.24312283","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Incorporating computer vision on smart phone photographs into screening for inflammatory arthritis: results from an Indian patient cohort
Background: Convolutional neural networks (CNNs) have been used to classify medical images, but few studies use smartphone photographs, which are scalable at the point of care. We previously showed proof of principle that CNNs could detect inflammatory arthritis in three hand joints. Here we evaluated a screening CNN to differentiate patients from controls.

Methods: We studied consecutive patients with early inflammatory arthritis and healthy controls, all examined by a rheumatologist (15% by two). Standardized photographs of the hands were taken using a studio box, anonymized, and cropped around joints. We fine-tuned pre-trained CNN models on our dataset (80% training; 20% test set). An Inception-ResNet-v2 backbone CNN, modified for two-class output (Patient vs Control), was applied to uncropped photographs. Inception-ResNet-v2 CNNs were also trained on cropped photographs of the middle finger proximal interphalangeal joint (MFPIP), the index finger PIP (IFPIP), and the wrist. We report representative values of accuracy, sensitivity, and specificity.

Results: We studied 800 hands from 200 controls (mean age 37.8 years) and 200 patients (mean age 49.6 years; 134 with rheumatoid arthritis among other diagnoses). Two rheumatologists had a concordance of 0.89 across 404 joints. The wrist was most commonly involved (173/400), followed by the MFPIP (134) and the IFPIP (128). The screening CNN achieved excellent accuracy (98%), sensitivity (98%), and specificity (98%) in distinguishing patients from controls. Joint-specific CNN accuracy, sensitivity, and specificity were highest for the wrist (80%, 88%, 72%), followed by the IFPIP (79%, 89%, 73%) and the MFPIP (76%, 91%, 70%).

Conclusion: Computer vision without feature engineering can distinguish between patients and controls from smartphone photographs with good accuracy, showing promise as a screening step upstream of joint-specific CNNs. Future research includes validating these findings in diverse populations, refining models to improve joint-level specificity, and integrating this technology into clinical workflows.
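The Methods describe fine-tuning a pre-trained Inception-ResNet-v2 backbone with its head replaced by a two-class output (Patient vs Control). A minimal Keras sketch of that setup follows; the image size, pooling choice, optimizer, and loss are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch of the two-class fine-tuning setup described in Methods.
# Hyperparameters (image size, pooling, optimizer, loss) are assumptions.
import tensorflow as tf


def build_screening_model(image_size=(299, 299), weights="imagenet"):
    """Inception-ResNet-v2 backbone with a fresh two-class head (Patient vs Control)."""
    backbone = tf.keras.applications.InceptionResNetV2(
        include_top=False,            # drop the original 1000-class ImageNet head
        weights=weights,              # start from pre-trained weights, then fine-tune
        input_shape=image_size + (3,),
        pooling="avg",                # global average pooling -> one feature vector
    )
    outputs = tf.keras.layers.Dense(2, activation="softmax")(backbone.output)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

With an 80/20 split as in the paper, such a model would be fit on the training photographs and evaluated once on the held-out 20% test set.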
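The Results report each model's accuracy, sensitivity, and specificity. These all follow from a binary confusion matrix, as the short sketch below shows; the counts used are illustrative only and are not the study's actual confusion matrix.

```python
# Accuracy, sensitivity, and specificity from binary confusion-matrix counts.
# The example counts below are illustrative, not the paper's data.

def screening_metrics(tp, fn, tn, fp):
    """Return (accuracy, sensitivity, specificity) for a two-class screen.

    tp: patients correctly flagged, fn: patients missed,
    tn: controls correctly cleared, fp: controls wrongly flagged.
    """
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # recall on patients
    specificity = tn / (tn + fp)   # recall on controls
    return accuracy, sensitivity, specificity


# Illustrative held-out set: 40 patients and 40 controls, one error in each group.
acc, sens, spec = screening_metrics(tp=39, fn=1, tn=39, fp=1)
print(acc, sens, spec)  # each is 0.975, i.e. roughly the 98% range reported
```

Note that a screening tool typically prioritizes sensitivity (few missed patients), which is consistent with the joint-specific models reporting higher sensitivity than specificity.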