Pub Date: 2024-02-01 | DOI: 10.1016/j.imed.2023.02.002
Zhibo Wang , Ruiqing Liu , Shunli Liu , Baoying Sun , Wentao Xie , Dongsheng Wang , Yun Lu
Background
We created and validated a computed tomography (CT)-based radiomic model using both clinical factors and the radiomic signature for assessing the strangulation risk of acute intestinal obstruction. This would assist surgeons in accurately predicting intestinal ischemia and strangulation in patients with intestinal obstruction.
Methods
We recruited 289 patients with acute intestinal obstruction admitted to the Affiliated Hospital of Qingdao University from January 2019 to February 2022. The patients were allocated to a training cohort (n = 226) and a validation cohort (n = 63). Radiomic features were collected from CT images, and the radiomic signature was extracted and used to calculate a radiomic score (Rad-score). A nomogram was constructed using the clinical features and the Rad-score, and the performance of the clinical, radiomic signature, and nomogram models was assessed in the two cohorts.
Results
Six robust features were used to construct the radiomic signature. The nomogram incorporating hemoglobin levels, C-reactive protein levels, American Society of Anesthesiologists score, time of obstruction, CT image of mesenteric fluid (P < 0.05), and the signature demonstrated good predictive ability for intestinal ischemia in patients with acute intestinal obstruction, with areas under the curve of 0.892 (95% confidence interval, 0.837–0.947) and 0.781 (95% confidence interval, 0.619–0.944) for the training and validation sets, respectively. The decision curve analysis showed that this model outperformed the clinical and radiomic signature models.
Conclusion
The radiomic nomogram may effectively predict intestinal ischemia in patients with acute intestinal disease and may assist clinical decision-making.
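A Rad-score of this kind is typically a weighted sum of the selected radiomic features, which the nomogram then combines with clinical predictors. A minimal sketch of the idea (the feature names, coefficients, and logistic link below are illustrative assumptions, not the paper's fitted model):

```python
import math

def rad_score(features, coefficients, intercept=0.0):
    # Weighted sum of selected radiomic features (the usual form of a Rad-score)
    return intercept + sum(coefficients[name] * value for name, value in features.items())

def nomogram_probability(score, clinical_terms):
    # Combine the Rad-score with clinical predictor terms via a logistic link
    total = score + sum(clinical_terms)
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical example: two radiomic features and two clinical predictor terms
coefficients = {"wavelet_glcm_entropy": 0.8, "shape_sphericity": -1.2}
features = {"wavelet_glcm_entropy": 1.5, "shape_sphericity": 0.6}
score = rad_score(features, coefficients)        # 0.8*1.5 - 1.2*0.6 = 0.48
risk = nomogram_probability(score, [0.3, -0.1])  # sigmoid(0.68) ≈ 0.66
```

In practice the coefficients come from a penalized regression fit on the training cohort, and the clinical terms are the scaled nomogram points for each predictor.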
Title: A computed tomography-based radiomic model for the prediction of strangulation risk in patients with acute intestinal obstruction. Intelligent Medicine 4(1): 33–42.
Pub Date: 2024-02-01 | DOI: 10.1016/j.imed.2023.01.003
Nurjahan Nipa , Mahmudul Hasan Riyad , Shahriare Satu , Walliullah , Koushik Chandra Howlader , Mohammad Ali Moni
Objective
Diabetes mellitus is a serious disease in which the body fails to produce enough insulin, causing abnormal blood sugar levels. The disease arises for a number of reasons, including modern lifestyle, physical inactivity, unhealthy food consumption, family history, age, and overweight. The aim of this study was to propose a machine learning-based prediction model that detects diabetes at an early stage.
Methods
We collected 520 patient records from the Sylhet Diabetes Hospital (Sylhet) dataset in the University of California, Irvine (UCI) machine learning repository. We then followed a questionnaire similar to that hospital's and assembled 558 additional patient records from all over Bangladesh. The patient records of these two datasets were also combined. These datasets were cleaned, and 35 state-of-the-art classifiers were applied: logistic regression (LR), K-nearest neighbors (KNN), support vector classifier (SVC), naïve Bayes (NB), decision tree (DT), random forest (RF), stochastic gradient descent (SGD), perceptron, AdaBoost, XGBoost, passive aggressive classifier (PAC), ridge classifier (RC), Nu-support vector classifier (Nu-SVC), linear support vector classifier (LSVC), calibrated classifier CV (CCCV), nearest centroid (NC), Gaussian process classifier (GPC), multinomial NB (MNB), complement NB, Bernoulli NB (BNB), categorical NB, bagging, extra trees (ET), gradient boosting classifier (GBC), histogram-based gradient boosting classifier (HGBC), one-vs-rest classifier (OVsRC), multi-layer perceptron (MLP), label propagation (LP), label spreading (LS), stacking, ridge classifier CV (RCCV), logistic regression CV (LRCV), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and light gradient boosting machine (LGBM), in order to identify the most stable predictive model. Classifier performance was measured using five metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve. Finally, the outcomes were interpreted using Shapley additive explanations, and features relevant to the onset of diabetes were identified.
Results
Among the classifiers, ET outperformed all others with 97.11% accuracy on the Sylhet Diabetes Hospital dataset (SDHD), while MLP achieved the best accuracy (96.42%) on the newly collected dataset. HGBC and LGBM each provided the highest accuracy (94.90%) on the combined dataset.
Conclusion
LGBM, stacking, HGBC, RF, ET, bagging, and GBC may provide the most stable prediction results for each dataset.
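The benchmarking loop behind such a comparison is straightforward: train each candidate classifier, score it on held-out data, and keep the most accurate one. A dependency-free sketch with two stand-in classifiers, a majority-class baseline and 1-nearest-neighbor (the toy data are illustrative, not the diabetes datasets):

```python
def accuracy(y_true, y_pred):
    # Fraction of correct predictions
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def majority_baseline(train_X, train_y, test_X):
    # Always predict the most frequent training label
    label = max(set(train_y), key=train_y.count)
    return [label] * len(test_X)

def one_nn(train_X, train_y, test_X):
    # Predict the label of the closest training point (squared Euclidean distance)
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [train_y[min(range(len(train_X)), key=lambda i: sq_dist(train_X[i], x))]
            for x in test_X]

# Toy 1-D data standing in for patient feature vectors
train_X, train_y = [[0.0], [1.0], [2.0], [5.0], [6.0]], [0, 0, 0, 1, 1]
test_X, test_y = [[0.5], [5.5]], [0, 1]

results = {
    "majority": accuracy(test_y, majority_baseline(train_X, train_y, test_X)),
    "1-NN": accuracy(test_y, one_nn(train_X, train_y, test_X)),
}
best = max(results, key=results.get)  # "1-NN" wins on this toy split
```

With scikit-learn the same loop iterates over fitted estimator objects, and a proper study would add cross-validation and the other four metrics.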
Title: Clinically adaptable machine learning model to identify early appreciable features of diabetes. Intelligent Medicine 4(1): 22–32.
Pub Date: 2024-02-01 | DOI: 10.1016/j.imed.2023.09.001
Han Lyu , Zhixiang Wang , Jia Li , Jing Sun , Xinghao Wang , Pengling Ren , Linkun Cai , Zhenchang Wang , Max Wintermark
Objective
Appropriate medical imaging is important for value-based care. We aimed to evaluate the performance of generative pretrained transformer 4 (GPT-4), an innovative natural language processing model, in automatically recommending appropriate medical imaging in different clinical scenarios.
Methods
Institutional Review Board (IRB) approval was not required because only nonidentifiable data were used. We used as prompts 112 questions from the American College of Radiology (ACR) Radiology-TEACHES Program, an open-source question-and-answer program that guides appropriate medical imaging. These comprised 69 free-text case vignettes and 43 simplified cases. For the performance evaluation of GPT-4 and GPT-3.5, we considered the recommendations of the ACR guidelines as the gold standard, and three radiologists analyzed the consistency of the responses from the GPT models with those of the ACR. Consistency was evaluated on a five-point scale. A paired t-test was applied to assess the statistical significance of the findings.
Results
For the free-text case vignettes, the accuracy of GPT-4 was 92.9%, whereas that of GPT-3.5 was only 78.3%; GPT-4 provided more appropriate suggestions for reducing the overutilization of medical imaging than GPT-3.5 (t = 3.429, P = 0.001). For the simplified scenarios, the accuracy of GPT-4 and GPT-3.5 was 66.5% and 60.0%, respectively; the difference was not statistically significant (t = 1.858, P = 0.070). GPT-4 was characterized by longer response times (27.1 s on average) and more extensive responses (137.1 words on average) than GPT-3.5.
Conclusion
As an advanced tool for improving value-based healthcare in clinics, GPT-4 may guide appropriate medical imaging accurately and efficiently.
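The paired t-test used here compares, case by case, the consistency scores of the two models on the same prompts. A stdlib-only sketch of the t statistic (the score vectors below are made up for illustration, not the study's data):

```python
import math

def paired_t_statistic(scores_a, scores_b):
    # t = mean(d) / (sd(d) / sqrt(n)), where d are per-case score differences
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical five-point consistency scores for the same five prompts
gpt4_scores = [5, 5, 5, 5, 5]
gpt35_scores = [4, 3, 2, 3, 3]
t = paired_t_statistic(gpt4_scores, gpt35_scores)  # ≈ 6.32
```

The resulting t is compared against the t distribution with n − 1 degrees of freedom to obtain the P value (e.g., `scipy.stats.ttest_rel` does both steps).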
Title: Generative pretrained transformer 4: an innovative approach to facilitate value-based healthcare. Intelligent Medicine 4(1): 10–15.
Pub Date: 2023-11-01 | DOI: 10.1016/j.imed.2023.07.001
Yuanhai Tu , Yuanhao Peng , Xinghua Wen , Yuning Wang , Kang Liu , Kai Cheng , Han Yan
Background
The trochanter of the femur is a common site for bone tumors. However, locating the specific boundary of bone tumor infiltration and determining the surgical method can be challenging. The objective of this study was to review the diagnosis, treatment, and surgical outcomes of patients with tumors or tumor-like changes in the femoral trochanter after computer-assisted precise tumor resection and hip-preserving reconstruction of the trochanter.
Methods
From January 2005 to September 2020, 11 patients with trochanteric tumors (aged 18–53 years; six males and five females) were treated at Guangzhou First People's Hospital. The cases included aneurysmal bone cyst (n = 1), giant cell tumor of bone (n = 2), fibrous histiocytoma of bone (n = 1), enchondroma (n = 1), and fibrous dysplasia of bone (n = 6). Computed tomography and magnetic resonance imaging scans were performed before the operation to obtain two-dimensional image data of the lesion. A three-dimensional digital model of the bilateral lower limbs was reconstructed by computer, the boundary of tumor growth was determined by computer simulation, the process of tumor resection and reconstruction was simulated, and a personalized guide template was designed. During the operation, the personalized guide plate guided the precise resection of the tumor, and the allogeneic bone was trimmed to match the shape of the bone defect.
Results
All 11 patients underwent accurate resection of the tumor or tumor-like lesion and reconstruction of the hip. In eight cases, the lesion was confined to the trochanter and was fixed with large-segment allogeneic bone, autologous iliac bone, and a proximal femoral anatomic plate. In three cases, allogeneic bone, autologous iliac bone, and a femoral reconstruction nail were used to fix the lesion below the trochanter. Postoperative X-ray examination showed that the repair and reconstruction of the bone defect were effective, and callus bridging between the allogeneic bone and autogenous bone was observed 6 months after the operation. All patients recovered their walking function 3–6 months after the operation. The follow-up period ranged from 6 months to 6 years. One patient experienced recurrence of enchondroma; pathological examination revealed chondrosarcoma. The remaining 10 patients were treated with segmental resection and reconstruction. The operation time ranged from 2.5 to 4.5 h (average: 3.2 h). Intraoperative blood loss ranged from 300 to 500 mL (average: 368 mL). The local recurrence rate was 9.1%, and the overall survival rate was 100%. The average Musculoskeletal Tumor Society score was 27 (excellent and good for eight and three patients, respectively).
Conclusions
Three-dimensional computer skeleton modeling and simulation-assisted resection and reconstruction
Title: Three-dimensional digital technology-assisted precise tumor resection and reconstruction of the femoral trochanter and postoperative functional recovery: a retrospective study. Intelligent Medicine 3(4): 235–242.
Pub Date: 2023-11-01 | DOI: 10.1016/j.imed.2023.01.004
Leandro Muniz de Lima , Maria Clara Falcão Ribeiro de Assis , Júlia Pessini Soares , Tânia Regina Grão-Velloso , Liliana Aparecida Pimenta de Barros , Danielle Resende Camisasca , Renato Antonio Krohling
Background
Oral cancer is one of the most common cancers in men and causes mortality if not diagnosed early. In recent years, computer-aided diagnosis (CAD) using artificial intelligence techniques, in particular deep neural networks, has been investigated, and several approaches have been proposed for the automated detection of various pathologies from digital images. Recent studies indicate that fusing images with the patient's clinical information is important for the final clinical diagnosis. As, to the best of the authors' knowledge, no such dataset yet exists for oral cancer, a new dataset consisting of histopathological images and demographic and clinical data was collected. This study evaluated the importance of complementary data to histopathological image analysis of oral leukoplakia and carcinoma for CAD.
Methods
A new dataset (NDB-UFES) consisting of histopathological images and complementary information was collected from 2011 to 2021. The 237 samples were curated and analyzed by oral pathologists, generating the gold standard for classification. State-of-the-art architectures for fusing images and complementary data (Concatenation, Mutual Attention, MetaBlock, and MetaNet) with the latest deep learning backbones were investigated for four distinct tasks to identify oral squamous cell carcinoma, leukoplakia with dysplasia, and leukoplakia without dysplasia. We evaluated them using balanced accuracy, precision, recall, and area under the ROC curve.
Results
Experimental results indicate that the best model achieved a balanced accuracy of 83.24% using images, demographic, and clinical information with MetaBlock fusion and a ResNetV2 backbone. This represents a performance improvement of 30.68% (19.54 pp) in the task of differentiating samples diagnosed with oral squamous cell carcinoma from leukoplakia with or without dysplasia.
Conclusion
This study indicates that curated demographic and clinical data may positively influence the performance of artificial intelligence models in the automated classification of oral cancer.
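Concatenation, the simplest of the fusion strategies evaluated, just appends the encoded clinical metadata to the image embedding before the classification head. A minimal sketch (the embedding values and clinical fields below are hypothetical):

```python
def one_hot(value, categories):
    # Encode a categorical clinical field as a one-hot vector
    return [1.0 if value == c else 0.0 for c in categories]

def concat_fusion(image_embedding, clinical_vector):
    # Concatenation fusion: one joint feature vector for the classifier head
    return list(image_embedding) + list(clinical_vector)

# Hypothetical pooled CNN embedding for one histopathological image
image_embedding = [0.12, 0.55, 0.33]
# Hypothetical clinical metadata: smoking status (one-hot) plus age scaled to [0, 1]
clinical_vector = one_hot("smoker", ["non-smoker", "smoker"]) + [63 / 100.0]

fused = concat_fusion(image_embedding, clinical_vector)  # length 3 + 3 = 6
```

Attention-based schemes such as MetaBlock instead use the clinical vector to reweight the image feature maps rather than simply appending it.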
Title: Importance of complementary data to histopathological image analysis of oral leukoplakia and carcinoma using deep neural networks. Intelligent Medicine 3(4): 258–266.
Pub Date: 2023-11-01 | DOI: 10.1016/j.imed.2022.10.006
Shahnewaz Ali, Ross Crawford, Ajay K. Pandey
Background
Knee arthroscopy is one of the most complex minimally invasive surgeries, routinely performed to treat a range of ailments and injuries of the knee joint. Its complex ergonomic design imposes visualization and navigation constraints, consequently leading to unintended tissue damage and a steep learning curve before surgeons gain proficiency. The lack of robust visual texture and landmark frame features further limits the success of image-guided approaches to knee arthroscopy. Featureless and textureless tissue structures of knee anatomy, lighting conditions, noise, blur, debris, the lack of accurate ground-truth labels, tissue degeneration, and injury make semantic segmentation an extremely challenging task. To address this complex research problem, this study reports the utility of reconstructed surface reflectance as a viable source of information that can be combined with cutting-edge deep learning techniques to achieve highly accurate scene segmentation.
Methods
We proposed an intraoperative, two-tier deep learning method that makes full use of the tissue reflectance information present within an RGB frame to segment textureless knee arthroscopy video frames into multiple tissue types. The study included several cadaver knee experiments at the Medical and Engineering Research Facility, located within the Prince Charles Hospital campus, Brisbane, Queensland. Data were collected from a total of five cadaver knees, three from male donors and one from a female donor. The donors' age range was 56–93 years. Aging-related tissue degeneration and some anterior cruciate ligament injuries were observed in most cadaver knees. An arthroscopic image dataset was created and subsequently labeled by clinical experts. The study also included validation of a prototype stereo arthroscope, alongside a conventional arthroscope, to attain a larger field of view and stereo vision. We reconstructed surface reflectance from camera responses, which exhibited distinct spatial features at wavelengths ranging from 380 to 730 nm in the RGB spectrum. Toward the aim of segmenting textureless tissue types, these data were used within a two-stage deep learning model.
Results
The accuracy of the network was measured using the Dice coefficient score. The average segmentation accuracy was 0.6625 for the anterior cruciate ligament (ACL), 0.84 for bone, and 0.565 for the meniscus. For the analysis, we excluded extremely poor-quality frames; a frame was considered extremely poor quality when more than 50% of any tissue structure was over- or underexposed due to nonuniform light exposure. Additionally, when only high-quality frames were considered during the training and validation stage, the average bone segmentation accuracy improved to 0.92 and the average ACL segmentation accuracy reached 0.73. These two tissue types, namely, femur bone and ACL, h
{"title":"Arthroscopic scene segmentation using multispectral reconstructed frames and deep learning","authors":"Shahnewaz Ali, Ross Crawford, Ajay K. Pandey","doi":"10.1016/j.imed.2022.10.006","DOIUrl":"10.1016/j.imed.2022.10.006","url":null,"abstract":"<div><h3>Background</h3><p>Knee arthroscopy is one of the most complex minimally invasive surgeries, and it is routinely performed to treat a range of ailments and injuries to the knee joint. Its complex ergonomic design imposes visualization and navigation constraints, consequently leading to unintended tissue damage and a steep learning curve before surgeons gain proficiency. The lack of robust visual texture and landmark frame features further limits the success of image-guided approaches to knee arthroscopy. Feature- and texture-less tissue structures of knee anatomy, lighting conditions, noise, blur, debris, the lack of accurate ground-truth labels, tissue degeneration, and injury make semantic segmentation an extremely challenging task. To address this complex research problem, this study reported the utility of reconstructed surface reflectance as a viable piece of information that could be used with a cutting-edge deep learning technique to achieve highly accurate segmented scenes.</p></div><div><h3>Methods</h3><p>We proposed an intraoperative, two-tier deep learning method that makes full use of the tissue reflectance information present within an RGB frame to segment texture-less images into multiple tissue types from knee arthroscopy video frames. This study included several cadaver-knee experiments at the Medical and Engineering Research Facility, located within the Prince Charles Hospital campus, Brisbane, Queensland. Data were collected from a total of five cadaver knees; three donors were male and one was female. The age range of the donors was 56–93 years. Aging-related tissue degeneration and some anterior cruciate ligament injuries were observed in most cadaver knees. An arthroscopic image dataset was created and subsequently labeled by clinical experts. This study also included validation of a prototype stereo arthroscope, along with a conventional arthroscope, to attain a larger field of view and stereo vision. We reconstructed surface reflectance from camera responses that exhibited distinct spatial features at different wavelengths ranging from 380 to 730 nm in the RGB spectrum. To segment texture-less tissue types, these data were used within a two-stage deep learning model.</p></div><div><h3>Results</h3><p>The accuracy of the network was measured using the Dice coefficient score. The average segmentation accuracy for the tissue type anterior cruciate ligament (ACL) was 0.6625, for bone it was 0.84, and for meniscus it was 0.565. For the analysis, we excluded frames of extremely poor quality. A frame was considered extremely poor quality when more than 50% of any tissue structure was over- or underexposed due to nonuniform light exposure. Additionally, when only high-quality frames were considered during the training and validation stages, the average bone segmentation accuracy improved to 0.92 and the average ACL segmentation accuracy reached 0.73.
These two tissue types, namely, femur bone and ACL, h","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"3 4","pages":"Pages 243-251"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102623000013/pdfft?md5=1642345886f548549679920e28e75b90&pid=1-s2.0-S2667102623000013-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41900323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
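The segmentation accuracies above are Dice coefficient scores computed per tissue type. As a minimal sketch of that metric, assuming binary masks for one tissue class (this is illustrative code with made-up toy masks, not the authors' implementation):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks.

    Hypothetical helper: 2*|A∩B| / (|A| + |B|), with a small eps
    so empty masks do not divide by zero.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: predicted bone region vs. an expert label.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[1, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(dice_score(pred, target))  # 2*3 / (4+3) ≈ 0.857
```

The per-tissue averages reported in the Results correspond to scores like this averaged over the evaluated frames for each tissue class.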
Objective
To analyze the characteristics of tongue-image color parameters in patients treated with percutaneous coronary intervention (PCI) and non-PCI for coronary atherosclerotic heart disease (CHD), and to observe the effects of PCI on the tongue images of patients as a basis for the clinical diagnosis and treatment of patients with CHD.
Methods
This study used a retrospective cross-sectional survey to analyze tongue photographs and medical history information from 204 patients with CHD between November 2018 and July 2020. Tongue images of each subject were obtained using the Z-BOX Series traditional Chinese medicine (TCM) intelligent diagnosis instruments; the SMX System 2.0 was used to transform the image data into parameters in the HSV (hue, saturation, value) color space; and finally, the tongue-image parameters were compared between the PCI-treated and non-PCI-treated groups.
Results
Among the 204 patients, 112 were in the non-PCI treatment group (38 men and 74 women; average age of (68.76 ± 9.49) years), and 92 were in the PCI treatment group (66 men and 26 women; average age of (66.02 ± 10.22) years). In the PCI treatment group, the H values of the middle and tip of the tongue and of the overall tongue coating were lower (P < 0.05), while the V values of the middle, tip, and both sides of the tongue, the whole tongue, and the overall tongue coating were higher (P < 0.05).
Conclusion
The color parameters of the tongue image could reflect the physical state of patients treated with PCI, which may provide a basis for the clinical diagnosis and treatment of patients with CHD.
{"title":"Tongue diagnosis based on hue-saturation value color space: controlled study of tongue appearance in patients treated with percutaneous coronary intervention for coronary heart disease","authors":"Yumo Xia, Qingsheng Wang, Xiao Feng, Xin'ang Xiao, Yiqin Wang, Zhaoxia Xu","doi":"10.1016/j.imed.2022.09.002","DOIUrl":"10.1016/j.imed.2022.09.002","url":null,"abstract":"<div><h3>Objective</h3><p>To analyze the characteristics of tongue-image color parameters in patients treated with percutaneous coronary intervention (PCI) and non-PCI for coronary atherosclerotic heart disease (CHD), and to observe the effects of PCI on the tongue images of patients as a basis for the clinical diagnosis and treatment of patients with CHD.</p></div><div><h3>Methods</h3><p>This study used a retrospective cross-sectional survey to analyze tongue photographs and medical history information from 204 patients with CHD between November 2018 and July 2020. Tongue images of each subject were obtained using the Z-BOX Series traditional Chinese medicine (TCM) intelligent diagnosis instruments; the SMX System 2.0 was used to transform the image data into parameters in the HSV color space; and finally, the tongue-image parameters were compared between the PCI-treated and non-PCI-treated groups.</p></div><div><h3>Results</h3><p>Among the 204 patients, 112 were in the non-PCI treatment group (38 men and 74 women; average age of (68.76 ± 9.49) years), and 92 were in the PCI treatment group (66 men and 26 women; average age of (66.02 ± 10.22) years).
In the PCI treatment group, the H values of the middle and tip of the tongue and the overall coating of the tongue were lower (<em>P</em> < 0.05), while the V values of the middle, tip, both sides of the tongue, the whole tongue and the overall coating of the tongue were higher (<em>P</em> < 0.05).</p></div><div><h3>Conclusion</h3><p>The color parameters of the tongue image could reflect the physical state of patients treated with PCI, which may provide a basis for the clinical diagnosis and treatment of patients with CHD.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"3 4","pages":"Pages 252-257"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266710262200095X/pdfft?md5=da07f63f716d609034e24ed5ef5c3c7e&pid=1-s2.0-S266710262200095X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42954180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
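The H, S, and V parameters compared above live in the HSV color space. As a rough sketch of the underlying RGB-to-HSV step, using Python's standard colorsys module rather than the SMX System 2.0 pipeline described in the Methods (the patch pixel values are hypothetical):

```python
import colorsys

def mean_hsv(pixels):
    """Average an iterable of 8-bit (R, G, B) tuples, then convert to HSV.

    Returns (H, S, V) with each component in [0, 1] as colorsys
    reports them; multiply H by 360 for degrees.
    """
    n = len(pixels)
    r = sum(p[0] for p in pixels) / (255.0 * n)
    g = sum(p[1] for p in pixels) / (255.0 * n)
    b = sum(p[2] for p in pixels) / (255.0 * n)
    return colorsys.rgb_to_hsv(r, g, b)

# A small patch of reddish tongue-body pixels (made-up values).
patch = [(180, 90, 100), (170, 85, 95), (190, 95, 105)]
h, s, v = mean_hsv(patch)
print(h, s, v)
```

Region-level statistics of such H and V values (tongue tip, middle, sides, whole tongue, coating) are what the study compares between the PCI and non-PCI groups.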
Pub Date: 2023-11-01 DOI: 10.1016/j.imed.2023.01.005
Mohammad Karimi Moridani , Seyed Kamaledin Setarehdan , Ali Motie Nasrabadi , Esmaeil Hajinasrollah
<div><h3>Objective</h3><p>This study aimed to explore the mortality prediction of patients with cerebrovascular diseases in the intensive care unit (ICU) by examining the important signals during different periods of admission in the ICU, which is considered one of the new topics in the medical field. Several approaches have been proposed for prediction in this area. Each of these methods has been able to predict mortality to some extent, but many of these techniques require recording a large amount of data from the patients, although recording all data is not possible in most cases; therefore, this study focused only on heart rate variability (HRV) and systolic and diastolic blood pressure.</p></div><div><h3>Methods</h3><p>The ICU data used for the challenge were extracted from the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) Clinical Database. The proposed algorithm was evaluated using data from 88 cerebrovascular ICU patients, 48 men and 40 women, during their first 48 hours of ICU stay. The electrocardiogram (ECG) signals are related to lead II, and the sampling frequency is 125 Hz. The time of admission and time of death are labeled in all data. In this study, the mortality prediction in patients with cerebral ischemia is evaluated using the features extracted from the return map generated by the signal of HRV and blood pressure. To predict the patient's future condition, the combination of features extracted from the return map generated by the HRV signal, such as angle (<em>α</em>) and area (<em>A</em>), and various parameters generated by systolic and diastolic blood pressure, including <span><math><mrow><mtext>DB</mtext><msub><mi>P</mi><mrow><mtext>Max</mtext><mo>−</mo><mtext>Min</mtext></mrow></msub></mrow></math></span> and <span><math><mrow><mtext>SB</mtext><msub><mi>P</mi><mtext>SD</mtext></msub></mrow></math></span>, was used.
Also, to select the best feature combination, the genetic algorithm (GA) and mutual information (MI) methods were used. Paired-sample t-test statistical analysis was used to compare the results of the two episode types (death and non-death episodes). A <em>P</em>-value of less than 0.005 was considered statistically significant.</p></div><div><h3>Results</h3><p>The results indicate that the new approach presented in this paper is comparable to, or better than, other methods. The best combination of features based on GA to achieve maximum predictive accuracy was <em>m</em> (mean), <span><math><msub><mi>L</mi><mtext>Mean</mtext></msub></math></span>, <em>A</em>, SBP<sub>SVMax</sub>, and DBP<sub>Max-Min</sub>. The accuracy, specificity, and sensitivity based on the best features obtained from GA were 97.7%, 98.9%, and 95.4% for cerebral ischemia disease with a prediction horizon of 0.5–1 hour before death. The d-factor for the best feature combination based on the GA model is less than 1 (d-factor = 0.95). Also, the bracketed by 95 percent prediction uncer
This study aimed to explore mortality prediction for patients with cerebrovascular diseases in the intensive care unit (ICU) by examining important signals during different periods of ICU admission, which is considered one of the new topics in the medical field. Several prediction approaches have been proposed in this area. Each of these methods can predict mortality to some extent, but many of them require recording a large amount of patient data, which is not possible in most cases; accordingly, this study focused only on heart rate variability (HRV) and systolic and diastolic blood pressure. The proposed algorithm was evaluated using data from 88 cerebrovascular ICU patients (48 men and 40 women) during their first 48 hours of ICU stay. The electrocardiogram (ECG) signals correspond to lead II, with a sampling frequency of 125 Hz. Admission and death times are labeled in all data. In this study, mortality prediction in patients with cerebral ischemia was evaluated using features extracted from the return map generated by the HRV and blood pressure signals. To predict the patient's future condition, a combination of features extracted from the return map of the HRV signal, such as angle (α) and area (A), and various parameters derived from systolic and diastolic blood pressure, including DBP_Max-Min and SBP_SD, was used. In addition, the genetic algorithm (GA) and mutual information (MI) methods were used to select the best feature combination. Paired-sample t-test statistical analysis was used to compare the results of the two episode types (death and non-death episodes). The results show that the new method proposed in this paper is comparable to, or better than, other methods. The best GA-based feature combination for maximum predictive accuracy was m (mean), L_Mean, A, SBP_SVMax, and DBP_Max-Min. Within a prediction horizon of 0.5–1 hour before death, the accuracy, specificity, and sensitivity of the best GA-based features for cerebral ischemia were 97.7%, 98.9%, and 95.4%, respectively. The d-factor of the best GA-based feature combination is less than 1 (d-factor = 0.95). Conclusion: combining the HRV and blood pressure signals can improve the accuracy of death-event prediction and shorten the minimum hospital stay required to determine the future state of patients with cerebrovascular disease.
{"title":"A predictive model of death from cerebrovascular diseases in intensive care units","authors":"Mohammad Karimi Moridani , Seyed Kamaledin Setarehdan , Ali Motie Nasrabadi , Esmaeil Hajinasrollah","doi":"10.1016/j.imed.2023.01.005","DOIUrl":"10.1016/j.imed.2023.01.005","url":null,"abstract":"<div><h3>Objective</h3><p>This study aimed to explore the mortality prediction of patients with cerebrovascular diseases in the intensive care unit (ICU) by examining the important signals during different periods of admission in the ICU, which is considered one of the new topics in the medical field. Several approaches have been proposed for prediction in this area. Each of these methods has been able to predict mortality somewhat, but many of these techniques require recording a large amount of data from the patients, where recording all data is not possible in most cases; at the same time, this study focused only on heart rate variability (HRV) and systolic and diastolic blood pressure.</p></div><div><h3>Methods</h3><p>The ICU data used for the challenge were extracted from the Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) Clinical Database. The proposed algorithm was evaluated using data from 88 cerebrovascular ICU patients, 48 men and 40 women, during their first 48 hours of ICU stay. The electrocardiogram (ECG) signals are related to lead II, and the sampling frequency is 125 Hz. The time of admission and time of death are labeled in all data. In this study, the mortality prediction in patients with cerebral ischemia is evaluated using the features extracted from the return map generated by the signal of HRV and blood pressure. 
To predict the patient's future condition, the combination of features extracted from the return map generated by the HRV signal, such as angle (<em>α</em>) and area (<em>A</em>), and various parameters generated by systolic and diastolic blood pressure, including <span><math><mrow><mtext>DB</mtext><msub><mi>P</mi><mrow><mtext>Max</mtext><mo>−</mo><mtext>Min</mtext></mrow></msub></mrow></math></span> and <span><math><mrow><mtext>SB</mtext><msub><mi>P</mi><mtext>SD</mtext></msub></mrow></math></span>, was used. Also, to select the best feature combination, the genetic algorithm (GA) and mutual information (MI) methods were used. Paired-sample t-test statistical analysis was used to compare the results of the two episode types (death and non-death episodes). A <em>P</em>-value of less than 0.005 was considered statistically significant.</p></div><div><h3>Results</h3><p>The results indicate that the new approach presented in this paper is comparable to, or better than, other methods. The best combination of features based on GA to achieve maximum predictive accuracy was <em>m</em> (mean), <span><math><msub><mi>L</mi><mtext>Mean</mtext></msub></math></span>, <em>A</em>, SBP<sub>SVMax</sub>, and DBP<sub>Max-Min</sub>. The accuracy, specificity, and sensitivity based on the best features obtained from GA were 97.7%, 98.9%, and 95.4% for cerebral ischemia disease with a prediction horizon of 0.5–1 hour before death. The d-factor for the best feature combination based on the GA model is less than 1 (d-factor = 0.95).
Also, the bracketed by 95 percent prediction uncer","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"3 4","pages":"Pages 267-279"},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2667102623000062/pdfft?md5=7eb71263be1ed7daa316c412aa96ec35&pid=1-s2.0-S2667102623000062-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44853742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
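The abstract names an angle (α) and an area (A) from the HRV return map, plus blood-pressure parameters such as SBP_SD and DBP_Max-Min, but does not give their formulas. The sketch below uses one common reading of a return map, the Poincaré-plot ellipse, as a hypothetical stand-in: SD1/SD2, the ellipse area, and the aspect angle here are assumptions, not the authors' exact definitions, and the RR and blood-pressure series are made up.

```python
import math
import statistics

def poincare_features(rr):
    """Sketch of return-map features from an RR-interval series (seconds).

    Plots RR_n against RR_{n+1}; SD1/SD2 are the dispersions across and
    along the identity line, A = pi*SD1*SD2 is the fitted-ellipse area,
    and alpha (degrees) is the ellipse aspect angle atan2(SD1, SD2) --
    a hypothetical stand-in for the paper's angle feature.
    """
    x = rr[:-1]                    # RR_n
    y = rr[1:]                     # RR_{n+1}
    diffs = [b - a for a, b in zip(x, y)]
    sums = [a + b for a, b in zip(x, y)]
    sd1 = statistics.pstdev(diffs) / math.sqrt(2)   # across identity line
    sd2 = statistics.pstdev(sums) / math.sqrt(2)    # along identity line
    area = math.pi * sd1 * sd2
    alpha = math.degrees(math.atan2(sd1, sd2))
    return sd1, sd2, area, alpha

def bp_features(sbp, dbp):
    """SBP_SD and DBP_Max-Min, as named in the abstract."""
    return statistics.pstdev(sbp), max(dbp) - min(dbp)

# Hypothetical 10-beat RR series and per-reading BP values (mmHg).
rr = [0.80, 0.82, 0.79, 0.85, 0.83, 0.81, 0.84, 0.80, 0.82, 0.83]
sbp = [120, 118, 125, 122, 119]
dbp = [78, 80, 76, 79, 77]
print(poincare_features(rr))
print(bp_features(sbp, dbp))
```

Feature vectors of this kind, selected by GA or MI, would then feed the mortality classifier described in the Methods.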