Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.49543
Petar Molchovski, K. Tokmakova, D. Tokmakov
Orthopedics and traumatology are clinical specialties that require continuous learning and skill enhancement, and traditional teaching methods may not always be sufficient to meet the needs of contemporary learners. This study evaluates the effectiveness of microlearning as a supplementary tool in orthopedics and traumatology university courses, comparing it against traditional teaching methods alone. The study concluded that adding microlearning significantly improved students’ knowledge retention, practical skills, and overall performance compared with traditional teaching methods alone. The findings suggest that integrating microlearning into orthopedics and traumatology curricula can improve student learning outcomes and better prepare students for real-world practice.
{"title":"Effectiveness of Microlearning as an Additional Teaching Instrument in Orthopaedics and Traumatology University Course","authors":"Petar Molchovski, K. Tokmakova, D. Tokmakov","doi":"10.3991/ijoe.v20i10.49543","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.49543","url":null,"abstract":"Orthopedics and traumatology are clinical specialties that require continuous learning and skill enhancement. Traditional teaching methods may not always be sufficient to meet the needs of contemporary learners. This study aims to compare the effectiveness of microlearning as an additional tool in orthopedics and traumatology university courses alongside traditional teaching methods. The study concluded that microlearning significantly improved students’ knowledge retention, practical skills, and overall performance compared to traditional teaching methods alone. The findings suggest that integrating microlearning into orthopedics and traumatology curricula can improve student learning outcomes and better prepare them for real-world practice.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"3 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141640211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.48031
Mardhiah Masril, N. Jalinus, Ridwan, Ambiyar, Sukardi, Dedy Irfan
This study’s objective was to create a hybrid learning-integrated remote laboratory model with validity and practicality. The model has four learning spaces, namely live synchronous, virtual synchronous, self-paced asynchronous, and collaborative asynchronous, so it can support flexible learning. The model is also grounded in cognitivism, connectivism, constructivism, and behaviourism learning theories and in Bloom’s digital taxonomy. The hybrid learning-integrated remote laboratory model consists of a six-step syntax: 1) issue; 2) investigation; 3) team discussion to solve problems; 4) experiment using a remote laboratory; 5) analysis and evaluation; and 6) exploration of new solutions. Focus group discussions (FGDs) with seven experts in learning models, vocational education, language, and technology were used to collect high-quality data. The quality of the model was analysed using Aiken’s V. The results showed that the content of the hybrid learning-integrated remote laboratory model is valid, with a validity value of 0.87. The practicality analysis showed that the average assessment score from lecturers and students was 88.16%, so it can be concluded that the model has high validity and is very practical.
{"title":"A Flexible Practicum Model on Education: Hybrid Learning Integrated Remote Laboratory Activity Design","authors":"Mardhiah Masril, N. Jalinus, Ridwan, Ambiyar, Sukardi, Dedy Irfan","doi":"10.3991/ijoe.v20i10.48031","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.48031","url":null,"abstract":"This study’s objective was to create a hybrid learning-integrated remote laboratory model with validity and practicality. This model has four learning spaces, namely live synchronous, virtual synchronous, self-paced asynchronous, and collaborative asynchronous, so it can support flexible learning. Besides that, this learning model is also based on cognitivism, connectivism, constructivism, behaviourism learning theories and Bloom’s digital taxonomy. The hybrid learning integrated remote laboratory model consists of six syntaxes: 1) issue; 2) investigation; 3) team discussion to solve problems; 4) experiment using a remote laboratory; 5) analysis and evaluation; and 6) explore new solutions. Focus group discussions (FGD) were used to collect high-quality data by seven experts in learning models, vocational education, language and technology. The hybrid learning-integrated remote laboratory model quality analysis used Aiken’s V. The result showed that the hybrid learning integratedremote laboratory model content is valid, with a validity value of 0.87. The practicality analysis result showed that the average percentage of the assessments from lecturers and students was 88.16%, so it can be concluded that it has a high validity value and is very practical.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"1 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.49585
Thinira Wanasinghe, Sakuni Bandara, Supun Madusanka, D. Meedeniya, M. Bandara, Isabel De la Torre Díez
Integrating artificial intelligence (AI) into lung sound classification has markedly improved respiratory disease diagnosis by analysing intricate patterns within audio data. This study is driven by the widespread issue of lung diseases, which affect around 500 million people globally. Early detection of respiratory diseases is crucial for delivering timely and effective treatment. Our study consists of a comprehensive survey of lung sound classification methodologies, exploring the advancements made in leveraging AI to identify and classify respiratory diseases. This survey thoroughly investigates lung sound classification models, along with data augmentation, feature extraction, explainable techniques and support tools to improve systems for diagnosing respiratory conditions. Our goal is to provide meaningful insights for healthcare professionals, researchers and technologists who are dedicated to developing methodologies for the early detection of pulmonary diseases. The paper provides a summary of the current status of lung sound classification research, highlighting both advancements and challenges in the use of AI for more accurate and efficient diagnostic methods in respiratory healthcare.
{"title":"Lung Sound Classification for Respiratory Disease Identification Using Deep Learning: A Survey","authors":"Thinira Wanasinghe, Sakuni Bandara, Supun Madusanka, D. Meedeniya, M. Bandara, Isabel De la Torre Díez","doi":"10.3991/ijoe.v20i10.49585","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.49585","url":null,"abstract":"Integrating artificial intelligence (AI) into lung sound classification has markedly improved respiratory disease diagnosis by analysing intricate patterns within audio data. This study is driven by the widespread issue of lung diseases, which affect around 500 million people globally. Early detection of respiratory diseases is crucial for delivering timely and effective treatment. Our study consists of a comprehensive survey of lung sound classification methodologies, exploring the advancements made in leveraging AI to identify and classify respiratory diseases. This survey thoroughly investigates lung sound classification models, along with data augmentation, feature extraction, explainable techniques and support tools to improve systems for diagnosing respiratory conditions. Our goal is to provide meaningful insights for healthcare professionals, researchers and technologists who are dedicated to developing methodologies for the early detection of pulmonary diseases. The paper provides a summary of the current status of lung sound classification research, highlighting both advancements and challenges in the use of AI for more accurate and efficient diagnostic methods in respiratory healthcare.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"1 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141642482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.48331
Guillermo Moreno, Abdigal Camargo, Luis Ayala, Mirko Zimic, C. del Carpio
Anemia is a common problem that affects a significant part of the world’s population, especially in impoverished countries. This work aims to improve the accessibility of remote diagnostic tools for underserved populations. Our proposal involves implementing algorithms to estimate hemoglobin levels using images of the eyelid conjunctiva and a calibration label captured with a mid-range cell phone. We propose three algorithms: one for calibration label segmentation, another for palpebral conjunctiva segmentation, and the last one for estimating hemoglobin levels based on the segmented images from the previous algorithms. Experiments were performed using a data set of children’s eyelid images and calibration stickers. An L1 norm error of 0.72 g/dL was achieved using the SLIC-GAT model to estimate the hemoglobin level. In conclusion, the integration of these segmentation and regression methods improved the estimation accuracy compared to current approaches, considering that the source of the images was a mid-range commercial camera. The proposed method has the potential for mass screening in low-income rural populations as it is non-invasive, and its simplicity makes it feasible for community health workers with basic training to perform the test. Therefore, this tool could contribute significantly to efforts aimed at combating childhood anemia.
{"title":"An Algorithm for the Estimation of Hemoglobin Level from Digital Images of Palpebral Conjunctiva Based in Digital Image Processing and Artificial Intelligence","authors":"Guillermo Moreno, Abdigal Camargo, Luis Ayala, Mirko Zimic, C. del Carpio","doi":"10.3991/ijoe.v20i10.48331","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.48331","url":null,"abstract":"Anemia is a common problem that affects a significant part of the world’s population, especially in impoverished countries. This work aims to improve the accessibility of remote diagnostic tools for underserved populations. Our proposal involves implementing algorithms to estimate hemoglobin levels using images of the eyelid conjunctiva and a calibration label captured with a mid-range cell phone. We propose three algorithms: one for calibration label segmentation, another for palpebral conjunctiva segmentation, and the last one for estimating hemoglobin levels based on the segmented images from the previous algorithms. Experiments were performed using a data set of children’s eyelid images and calibration stickers. An L1 norm error of 0.72 g/dL was achieved using the SLIC-GAT model to estimate the hemoglobin level. In conclusion, the integration of these segmentation and regression methods improved the estimation accuracy compared to current approaches, considering that the source of the images was a mid-range commercial camera. The proposed method has the potential for mass screening in low-income rural populations as it is non-invasive, and its simplicity makes it feasible for community health workers with basic training to perform the test. Therefore, this tool could contribute significantly to efforts aimed at combating childhood anemia.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"20 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141641666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.47157
Vandana Khobragade, Jagannath H. Nirmal, Aayesha Hakim
In this age of digital microscopy, image processing, statistical analysis, categorization, and decision-making systems have become essential tools for medical diagnostics research. By visualizing and analyzing images, clinicians can identify anomalies in intracellular structure. Leukemia is a cancerous condition marked by an unregulated increase in aberrant white blood cells (WBCs). Recognizing acute leukemia tumor cells in blood smear images (BSI) is a challenging task, and image segmentation is regarded as the most significant step in the automated identification of this disease. An innovative concavity-based segmentation algorithm is employed in this study to segment WBCs in sub-images from the ALL-IDB2 database. The concave endpoints and elliptical features are used in the segmentation step for convex-shaped cell images. The procedure involves contour evidence extraction, which detects the visible section of each object, and contour estimation, which produces the final object contours. Following the identification of the cells and their internal structure by concavity-based segmentation, the cells are categorized based on their morphological and statistical features. The method was evaluated using a public dataset intended for testing classification and segmentation approaches. The statistical tool SPSS is used to independently check the significance of the derived features. For classification, significant features are passed to machine learning techniques such as support vector machines (SVM), k-nearest neighbors (KNN), neural networks (NN), decision trees (DT), and Naïve Bayes (NB). With an AUC of 98.9% and an overall accuracy of 95%, the neural network model performed best. Based on its accuracy, we advocate using the neural network model to identify acute leukemia cells.
{"title":"Statistical Analysis of Features for Detecting Leukemia","authors":"Vandana Khobragade, Jagannath H. Nirmal, Aayesha Hakim","doi":"10.3991/ijoe.v20i10.47157","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.47157","url":null,"abstract":"In this age of digital microscopy, image processing, statistical analysis, categorization, and systems for decision-making have become essential tools for medical diagnostics research. By visualizing and analyzing images, clinicians can identify anomalies in intracellular structure. Leukemia is a cancerous condition marked by an unregulated increase in aberrant white blood cells (WBCs). Recognizing acute leukemia tumor cells in blood smear images (BSI) is a challenging assignment. Image segmentation is regarded as the most significant step in the automated identification of this disease. The innovative concavity-based segmentation algorithm is employed in this study to segment WBC in sub-images from the ALLIDB2 database. The concave endpoints and elliptical features are used in the segmentation step of convex-shaped cell images. The procedure involves the extraction of contour evidence, which detects the visible section of each object, and contour estimation, which corresponds to the final object’s contours. Following the identification of the cells and their internal structure by concavity-based segmentation, the cells are categorized based on their morphological and statistical features. The method was evaluated using a public dataset meant to test classification and segmentation approaches. The statistical tool SPSS is used to independently check the significance of derived features. For classification, significant features are passed into machine learning techniques such as support vector machines (SVM), k-nearest neighbor (KNN), neural networks (NN), decision trees (DT), and Nave Bayes (NB). With an AUC of 98.9% and a total accuracy of 95%, the neural network model performed better. We advocate using the neural network model to identify acute leukemia cells based on its accuracy.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"3 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141641821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.49177
Manjit Singh Jadon, Sandeep Kumar
The study aims to investigate the efficacy of titanium dioxide (TiO2) nanoparticle coating on stainless steel 316L (SS 316L) orthopaedic implants to enhance their biocompatibility, osseointegration, and durability. The TiO2 nanoparticles were synthesized via the hydrothermal method and extensively characterized for composition, crystallinity, and morphology using techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM) with energy dispersive X-ray analysis (EDX), corroborated by elemental mapping. SEM and XRD analyses revealed the synthesized nanoparticles have a spherical shape and an average size of approximately 23 nanometres. The synthesized TiO2 nanoparticles were uniformly coated on SS 316L substrates using the spin coating technique, as confirmed by SEM images. Cell viability of the synthesized TiO2 nanoparticles, as well as uncoated and TiO2 nanoparticle-coated SS 316L substrates, was evaluated using the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay against the NIH-3T3 mouse embryonic fibroblast cell line. The results demonstrated that the TiO2 nanoparticle-coated SS 316L substrate showed a significant increase of 22.87% in cell viability as compared to the uncoated SS 316L substrate. A ball-on-disc tribometer was employed to assess wear and friction resistance at various speeds, viz., 150 rpm, 300 rpm, and 450 rpm, under 30N load conditions for five minutes. The results collectively indicate a substantial improvement in the performance of TiO2 nanoparticle-coated SS 316L substrates for orthopaedic applications.
{"title":"Fabrication of TiO2 Nanoparticle Coating on Stainless Steel 316L and Its Assessment for Orthopaedic Applications","authors":"Manjit Singh Jadon, Sandeep Kumar","doi":"10.3991/ijoe.v20i10.49177","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.49177","url":null,"abstract":"The study aims to investigate the efficacy of titanium dioxide (TiO2) nanoparticle coating on stainless steel 316L (SS 316L) orthopaedic implants to enhance their biocompatibility, osseointegration, and durability. The TiO2 nanoparticles were synthesized via the hydrothermal method and extensively characterized for composition, crystallinity, and morphology using techniques such as X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and scanning electron microscopy (SEM) with energy dispersive X-ray analysis (EDX), corroborated by elemental mapping. SEM and XRD analyses revealed the synthesized nanoparticles have a spherical shape and an average size of approximately 23 nanometres. The synthesized TiO2 nanoparticles were uniformly coated on SS 316L substrates using the spin coating technique, as confirmed by SEM images. Cell viability of the synthesized TiO2 nanoparticles, as well as uncoated and TiO2 nanoparticle-coated SS 316L substrates, was evaluated using the MTT (3-(4, 5-dimethylthiazol-2-yl)-2, 5-diphenyltetrazolium bromide) assay against the NIH-3T3 mouse embryonic fibroblast cell line. The results demonstrated that the TiO2 nanoparticle-coated SS 316L substrate showed a significant increase of 22.87% in cell viability as compared to the uncoated SS 316L substrate. A ball-on-disc tribometer was employed to assess wear and friction resistance at various speeds, viz., 150 rpm, 300 rpm, and 450 rpm, under 30N load conditions for five minutes. The results collectively indicate a substantial improvement in the performance of TiO2 nanoparticle-coated SS 316L substrates for orthopaedic applications.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141641690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.48603
Wiharto, Wimas Tri Harjoko, E. Suryani
Glaucoma is an eye disease that often has no symptoms until it is advanced. According to the World Health Organization (WHO), glaucoma is the second-leading cause of permanent blindness globally after cataracts and is expected to affect 111.8 million patients by 2040. Early detection of glaucoma is important to reduce the risk of permanent blindness. Detection is achieved by structural measurement of early thinning of the retinal nerve fiber layer (RNFL). The RNFL is the portion of the retina located outside the optic nerve head (ONH) and can be observed in fundus images of the retina. Analysis of retinal fundus images can be performed with computer assistance using machine learning, especially deep learning. This study proposes a deep learning-based model for glaucoma detection: a convolutional neural network (CNN) using the EfficientNet architecture combined with long short-term memory (LSTM). Using the ACRIMA, DRISHTI-GS, and RIM-ONE DL datasets with k-fold cross-validation, the model achieved high performance on the ACRIMA dataset: accuracy 0.9799, loss 0.0596, precision 0.9802, sensitivity 0.9799, specificity 0.9771, and F1-score 0.9799. This EfficientNet and LSTM combination (e-LSTM) outperformed previous studies, offering a promising alternative for evaluating retinal fundus images in glaucoma detection.
{"title":"e-LSTM: EfficientNet and Long Short-Term Memory Model for Detection of Glaucoma Diseases","authors":"Wiharto, Wimas Tri Harjoko, E. Suryani","doi":"10.3991/ijoe.v20i10.48603","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.48603","url":null,"abstract":"Glaucoma is an eye disease that often has no symptoms until it is advanced. According to the World Health Organization (WHO), after cataracts, glaucoma is the second-leading cause of permanent blindness globally and is expected to affect 111.8 million patients by 2040. Early detection of glaucoma is important to reduce the risk of permanent blindness. Detection is achieved by structural measurement of early thinning of the retinal nerve fiber layer (RNFL). The RNFL is the portion of the retina located outside the optic nerve head (ONH) and can be observed in fundus images of the retina. Analysis of retinal fundus images can be performed with computer assistance using machine learning, especially deep learning. This study proposes a deep learning-based model, a convolutional neural network (CNN) using the EfficientNet architecture combined with long short-term memory (LSTM), for laucoma detection. Using ACRIMA, DRISHTI-GS, and RIM-ONE DL datasets with k-fold cross-validation, the model achieved high performance on the ACRIMA dataset: accuracy 0.9799, loss 0.0596, precision 0.9802, sensitivity 0.9799, specificity 0.9771, and F1score 0.9799. This EfficientNet and LSTM combination (e-LSTM) outperformed previous studies, offering a promising alternative for evaluating retinal fundus images in glaucoma detection.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"1 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141640445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.49089
Hugo Vega-Huerta, Kevin Renzo Pantoja-Pimentel, Sebastian Yimmy Quintanilla-Jaimes, G. Maquen-Niño, Percy De-La-Cruz-VdV, Luis Guerra-Grados
Neurodegenerative disorders, notably Alzheimer’s, pose an escalating global health challenge. These conditions are marked by the progressive degeneration and loss of brain neurons. Worldwide, over 55 million people grapple with dementia, with Alzheimer’s prominently impacting the aging demographic. The primary hurdle to early Alzheimer’s detection is the widespread lack of awareness. The main goal of this study is to design and implement an artificial intelligence system using deep learning (DL) to detect Alzheimer’s disease (AD) from medical images and classify them into stages: non-demented, moderate dementia, mild dementia, and very mild dementia. The dataset contains 6400 magnetic resonance images in .jpg format with standardized dimensions of 176 × 208 pixels. To demonstrate the advantages of data augmentation and transformation techniques, four scenarios were created: two without these techniques, using the Adam and SGD optimizers, and two with these techniques, also using the Adam and SGD optimizers, respectively. The main results revealed that the scenarios using these techniques exhibited more stable performance when validated with a new dataset. Scenario 3, using the Adam optimizer, achieved a weighted average accuracy of 91.83%, whereas scenario 4, employing the SGD optimizer, reached 87.58%. In contrast, scenarios 1 and 2, which omitted these techniques, obtained accuracies below 55%. It is concluded that classifying AD with a DL model at over 90% accuracy is feasible, which underscores the importance of using data augmentation and transformation techniques to improve generalizability to variations in input images, a constant factor in the healthcare sector.
{"title":"Classification of Alzheimer’s Disease Based on Deep Learning Using Medical Images","authors":"Hugo Vega-Huerta, Kevin Renzo Pantoja-Pimentel, Sebastian Yimmy Quintanilla-Jaimes, G. Maquen-Niño, Percy De-La-Cruz-VdV, Luis Guerra-Grados","doi":"10.3991/ijoe.v20i10.49089","DOIUrl":"https://doi.org/10.3991/ijoe.v20i10.49089","url":null,"abstract":"Neurodegenerative disorders, notably Alzheimer’s, pose an escalating global health challenge. Marked by the degeneration of brain neurons, these conditions lead to a gradual decline in nerve cells. Worldwide, over 55 million people grapple with dementia, with Alzheimer’s prominently impacting the aging demographic. The primary hurdle to early Alzheimer’s detection is the widespread lack of awareness. The main goal is to design and implement an artificial intelligence system using deep learning (DL) to detect Alzheimer’s disease (AD) through medical images and classify them into various stages, such as non-demented, moderate dementia, mild dementia, and very mild dementia. The dataset contains 6400 magnetic resonance images in .jpg format, with standardized dimensions of 176 × 208 pixels. To demonstrate the advantages of data augmentation and transformation techniques, four scenarios were created: two without these techniques, utilizing the Adam and SGD optimizers, and two with these techniques, also employing the Adam and SGD optimizers, respectively. The main results revealed that scenarios utilizing these techniques exhibited more stable performance when validated with a new dataset. Scenario 3, using the Adam optimizer, achieved a weighted average accuracy of 91.83%, whereas scenario 4, employing the SGD optimizer, reached 87.58% accuracy. In contrast, scenarios 1 and 2, which omitted these techniques, obtained low accuracies below 55%. It is concluded that classifying AD with a DL model exceeding 90% accuracy is feasible. This is the importance of utilizing data augmentation and transformation techniques to improve generalizability to input image variations, which is a consistent factor in the healthcare sector.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"18 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141643894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-07-16 | DOI: 10.3991/ijoe.v20i10.49187
Rahul Ray, Sudarson Jena, Priyadarsan Parida, Laxminarayan Dash, Sangita Kumari Biswal
Diabetic eye disease has become a major concern across the globe; it can be effectively addressed by automated detection using a deep convolutional neural network (DCNN). CNN models have better detection and classification accuracy than other state-of-the-art models. In this paper, a differential evolution (DE)-optimized CNN is proposed for the single-step classification of diabetic retinopathy (DR) and glaucoma images. DE is used to find the optimized values of four hyper-parameters of the CNN: the number of filters in the first layer, the filter size, the number of convolution layers, and the number of strides. Simulations were performed using three publicly available datasets, and the accuracies obtained are 87.8%, 92.3%, and 88.7%, respectively, which outperforms other models. No other state-of-the-art model has used DE for hyper-parameter tuning in CNN models. Also, no additional segmentation approach or handcrafted features have been used. The model has been kept simple to reduce computational costs.
Title: Empowering Diabetic Eye Disease Detection: Leveraging Differential Evolution for Optimized Convolution Neural Networks
Pub Date: 2024-05-21 | DOI: 10.3991/ijoe.v20i08.47347
Soha Rawas, A. Samala
In this study, we introduce an innovative approach to significantly enhance the precision and interpretability of brain tumor detection and segmentation. Our method ingeniously integrates the cutting-edge capabilities of the ChatGPT chatbot interface with a state-of-the-art multi-modal convolutional neural network (CNN). Tested rigorously on the BraTS dataset, our method showcases unprecedented performance, outperforming existing techniques in terms of both accuracy and efficiency, with an impressive Dice score of 0.89 for tumor segmentation. By seamlessly integrating ChatGPT, our model unveils deep-seated insights into the intricate decision-making processes, providing researchers and physicians with invaluable understanding and confidence in the results. This groundbreaking fusion holds immense promise, poised to revolutionize the landscape of medical imaging, with far-reaching implications for clinical practice and research. Our study exemplifies the transformative potential achieved through the synergistic combination of multi-modal CNNs and natural language processing, paving the way for remarkable advancements in brain tumor detection and segmentation.
{"title":"Revolutionizing Brain Tumor Analysis: A Fusion of ChatGPT and Multi-Modal CNN for Unprecedented Precision","authors":"Soha Rawas, A. Samala","doi":"10.3991/ijoe.v20i08.47347","DOIUrl":"https://doi.org/10.3991/ijoe.v20i08.47347","url":null,"abstract":"In this study, we introduce an innovative approach to significantly enhance the precision and interpretability of brain tumor detection and segmentation. Our method ingeniously integrates the cutting-edge capabilities of the ChatGPT chatbot interface with a state-of-the-art multi-modal convolutional neural network (CNN). Tested rigorously on the BraTS dataset, our method showcases unprecedented performance, outperforming existing techniques in terms of both accuracy and efficiency, with an impressive Dice score of 0.89 for tumor segmentation. By seamlessly integrating ChatGPT, our model unveils deep-seated insights into the intricate decision-making processes, providing researchers and physicians with invaluable understanding and confidence in the results. This groundbreaking fusion holds immense promise, poised to revolutionize the landscape of medical imaging, with far-reaching implications for clinical practice and research. Our study exemplifies the transformative potential achieved through the synergistic combination of multi-modal CNNs and natural language processing, paving the way for remarkable advancements in brain tumor detection and segmentation.","PeriodicalId":507997,"journal":{"name":"International Journal of Online and Biomedical Engineering (iJOE)","volume":"143 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141114476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}