
Journal of Medical Imaging and Health Informatics: Latest Publications

Medical Imaging and Health Informatics
Pub Date : 2022-06-14 DOI: 10.1002/9781119819165
T. Jaware, K. Kumar, R. Badgujar, S. Antonov
Medical imaging and health informatics is a subfield of science and engineering which applies informatics to medicine and includes the study of design, development, and application of computational innovations to improve healthcare. The health domain has a wide range of challenges that can be addressed using computational approaches; therefore, the use of AI and associated technologies is becoming more common in society and healthcare. Currently, deep learning algorithms are a promising option for automated disease detection with high accuracy. Clinical data analysis employing these deep learning algorithms allows physicians to detect diseases earlier and treat patients more efficiently. Since these technologies have the potential to transform many aspects of patient care, disease detection, disease progression and pharmaceutical organization, approaches such as deep learning algorithms, convolutional neural networks, and image processing techniques are explored in this book.
Citations: 0
Volume Subtraction Method Using Dual Reconstruction and Additive Technique for Pulmonary Artery/Vein 3DCT Angiography
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.394
Yoshinori Tanabe, Yuka Tanaka, H. Nagata, Reina Murayama, T. Ishida
This study aimed to develop a method for pulmonary artery and vein (PA/PV) separation in three-dimensional computed tomography (3DCT) using a dual reconstruction technique and the addition of CT images. The physical image properties of multiple reconstruction kernels (FC13; FC13 3D-Q03; FC30 3D-Q03; FC83; FC13 twofold addition; FC13 threefold addition; FC13 fourfold addition; FC13 [3D-Q03] twofold addition; FC13+FC30 (3D-Q03); FC13+FC83) were evaluated in terms of spatial resolution using the modulation transfer function. The lung kernel CT image (FC83) had high spatial resolution, with a 10% modulation transfer function value of 0.847. The noise power spectrum of the additive CT images was measured, and the CT values for the PA/PV with and without addition were compared. The addition of CT images increased the difference in CT values between the PA and PV. PA/PV 3DCT angiography (PA/PV 3DCTA), even with a small difference in CT values, could be effectively separated using high-spatial-resolution kernel CT and the addition of CT images dedicated to subtraction. This novel, simple method can create PA/PV 3DCTA using a general CT scanner and 3D workstation and can be easily performed at any facility.
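As a rough illustration of the additive idea described above (a toy sketch, not code or values from the study), the NumPy snippet below shows how summing n reconstructions scales the small PA/PV difference in CT values, after which simple thresholds can separate the two vessel groups; all HU values and cut-offs are assumptions.

```python
import numpy as np

# Toy 1-D "volume": [background, pulmonary vein, pulmonary artery] voxels.
# The HU values and thresholds below are illustrative assumptions only.
volume = np.array([40.0, 300.0, 380.0])

# Additive technique: summing n reconstructions of the same series
# multiplies the small PA/PV difference, making it easier to threshold.
n = 3
added = n * volume
print("PA-PV difference, single image:", volume[2] - volume[1])   #  80 HU
print("PA-PV difference, 3-fold added:", added[2] - added[1])     # 240 HU

# Volume subtraction: threshold the added volume to isolate the PA,
# then subtract that mask from the full vessel mask to obtain the PV.
vessel_mask = added > n * 100     # assumed enhancement cut-off
pa_mask = added > n * 340         # assumed PA/PV cut-off
pv_mask = vessel_mask & ~pa_mask
print("PA voxels:", pa_mask, "PV voxels:", pv_mask)
```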
Citations: 0
A Novel Machine Learning Based Probabilistic Classification Model for Heart Disease Prediction
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3940
A. Ann Romalt, Mathusoothana S. Kumar
Cardiovascular disease (CVD) is one of the most dreadful diseases and results in fatal threats such as heart attacks. Accurate disease prediction is essential, and machine-learning techniques play a major part in predicting its occurrence. In this paper, a novel machine learning based model for accurate prediction of cardiovascular disease is developed that applies a unique feature selection technique called the Chronic Fatigue Syndrome Best Known Method (CFSBKM). Each feature is ranked based on its feature importance score. The new learning model eliminates the most irrelevant and low-importance features from the datasets, thereby resulting in a robust heart disease risk prediction model. A multinomial Naive Bayes classifier is used for the classification. The performance of the CFSBKM model is evaluated on the benchmark Cleveland dataset from the UCI repository, and the proposed model outperforms existing techniques.
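The snippet below is a minimal scikit-learn stand-in for the pipeline described above, not the authors' code: features are ranked by mutual information in place of the CFSBKM selector, discretized so a multinomial Naive Bayes model accepts them, and evaluated on a held-out split. The CSV path, column names, and binary target encoding are assumptions.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer

df = pd.read_csv("cleveland.csv")                        # hypothetical local copy of the UCI data
X, y = df.drop(columns="target"), (df["target"] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

model = make_pipeline(
    KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),  # non-negative inputs for MultinomialNB
    SelectKBest(mutual_info_classif, k=8),               # keep the 8 highest-ranked features
    MultinomialNB(),
)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```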
Citations: 0
A Rapid Dual Feature Tracking Method for Medical Equipments Assembly and Disassembly in Markerless Augmented Reality
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3944
D. Roopa, S. Bose
Markerless Augmented Reality (MAR) is a superior technology currently used by medical device assemblers to aid design, assembly, disassembly, and maintenance operations. The medical assembler assembles the medical equipment based on the doctor's requirements and also maintains the quality and sanitation of the equipment. The major research challenges in MAR are establishing automatic registration of parts, finding and tracking the orientation of parts, and the lack of depth and visual features. This work proposes a rapid dual feature tracking method, i.e., a combination of Visual Simultaneous Localization and Mapping (SLAM) and Matched Pairs Selection (MAPSEL). The main idea of this work is to attain high tracking accuracy using the combined method. To obtain a good depth image map, a Graph-Based Joint Bilateral with Sharpening Filter (GRB-JBF with SF) is proposed, since depth images are noisy due to dynamic changes in environmental factors that affect tracking accuracy. Then, the best feature points are obtained for matching using Oriented FAST and Rotated BRIEF (ORB) as the feature detector, Fast Retina Keypoint with Histogram of Gradients (FREAK-HoG) as the feature descriptor, and feature matching using Rajsk's distance. Finally, the virtual object is rendered based on 3D affine and projection transformations. This work computes the performance in terms of tracking accuracy, tracking time, and rotation error for different distances using MATLAB R2017b. From the observed results, the proposed method attained the lowest position error, about 0.1 cm to 0.3 cm. The rotation error is minimal, between 2.40 and 3.10 degrees, with an average of 2.7140. Further, the proposed combination consumes less time per frame than other combinations and achieves a higher tracking accuracy of about 95.14% for 180 tracked points. The observed outcomes of the proposed scheme show superior performance compared with existing methods.
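The OpenCV snippet below is an illustrative sketch of the detector/descriptor pairing named above, not the paper's implementation: ORB keypoints are described with FREAK and matched brute-force. Hamming distance stands in for the paper's matching distance, the image paths are placeholders, and FREAK requires the opencv-contrib build.

```python
import cv2

img1 = cv2.imread("frame_model.png", cv2.IMREAD_GRAYSCALE)   # reference view of the part
img2 = cv2.imread("frame_live.png", cv2.IMREAD_GRAYSCALE)    # live camera frame

orb = cv2.ORB_create(nfeatures=1000)        # fast keypoint detector
freak = cv2.xfeatures2d.FREAK_create()      # retina-inspired binary descriptor (opencv-contrib)

kp1 = orb.detect(img1, None)
kp2 = orb.detect(img2, None)
kp1, des1 = freak.compute(img1, kp1)        # describe the detected keypoints
kp2, des2 = freak.compute(img2, kp2)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)   # binary descriptors -> Hamming distance
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance = {matches[0].distance}")
```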
Citations: 0
Automated Multimodal Fusion Based Hyperparameter Tuned Deep Learning Model for Brain Tumor Diagnosis
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3942
S. Sandhya, M. Senthil Kumar
As medical image processing research has progressed, image fusion has emerged as a realistic solution that automatically extracts relevant data from many images before fusing them into a single, unified image. Medical imaging techniques such as Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) play a crucial role in the diagnosis and classification of brain tumors (BT). A single imaging technique is not sufficient for a correct diagnosis of the disease. If the scans are ambiguous, they can lead doctors to incorrect diagnoses, which can be unsafe for the patient. The solution to this problem is to fuse images from different scans containing complementary information to generate accurate images with minimum uncertainty. This research presents a novel method for the automated identification and classification of brain tumors using multi-modal deep learning (AMDL-BTDC). The proposed AMDL-BTDC model initially performs image pre-processing using the bilateral filtering (BF) technique. Next, feature vectors are generated using a pair of pre-trained deep learning models, EfficientNet and SqueezeNet. The Slime Mold Algorithm (SMA) is used to acquire the DL models' optimal hyperparameter settings. Finally, an autoencoder (AE) model is used for BT classification once the features have been fused. The suggested model's superior performance over other techniques under diverse measures was validated by extensive testing on a benchmark medical imaging dataset.
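A minimal PyTorch sketch of the fusion idea follows; it is not the authors' implementation. Pooled EfficientNet and SqueezeNet features are concatenated and passed to a small autoencoder whose bottleneck feeds a classification head. The SMA hyperparameter search and BF pre-processing are omitted, and the layer sizes, four-class output, and random input tensors are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

# Backbones loaded without downloaded weights to keep the sketch self-contained;
# in practice the pre-trained ImageNet weights would be used.
eff = models.efficientnet_b0(weights=None).features   # 1280-channel feature maps
sqz = models.squeezenet1_1(weights=None).features     # 512-channel feature maps

def fused_features(x):
    """Global-average-pool both backbones and concatenate the resulting vectors."""
    with torch.no_grad():
        f1 = F.adaptive_avg_pool2d(eff(x), 1).flatten(1)   # (N, 1280)
        f2 = F.adaptive_avg_pool2d(sqz(x), 1).flatten(1)   # (N, 512)
    return torch.cat([f1, f2], dim=1)                      # (N, 1792)

class AEClassifier(nn.Module):
    """Autoencoder whose bottleneck also drives a small classification head."""
    def __init__(self, in_dim=1792, latent=128, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(), nn.Linear(512, in_dim))
        self.head = nn.Linear(latent, n_classes)

    def forward(self, feats):
        z = self.encoder(feats)
        return self.decoder(z), self.head(z)   # reconstruction + class logits

x = torch.randn(2, 3, 224, 224)                # stand-in for pre-processed scan slices
recon, logits = AEClassifier()(fused_features(x))
print(recon.shape, logits.shape)               # torch.Size([2, 1792]) torch.Size([2, 4])
```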
Citations: 0
Recurrent Neural Network Deep Learning Techniques for Brain Tumor Segmentation and Classification of Magnetic Resonance Imaging Images
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3943
Meenal Thayumanavan, Asokan Ramasamy
A brain tumour is one of the most threatening diseases in the world and reduces the human life span. Computer vision is advantageous for human health research because it eliminates the need for human judgement to obtain accurate data. The most reliable and secure imaging techniques are CT scans, X-rays, and magnetic resonance imaging (MRI) scans, and MRI can locate tiny objects. The focus of our paper is the many techniques for detecting brain cancer using brain MRI. Early detection and diagnosis of a tumour are essential for the radiologist to initiate better treatment. MRI is a competent and speedy method of examining a brain tumour; it is a non-invasive technique that aids in the segmentation of brain tumour images. Deep learning algorithms deliver good outcomes in terms of reduced time consumption and precise tumour diagnosis. This research proposes that Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) supervised deep learning models be used to automatically find and segment brain tumours. The RNN model outperforms the CNN model by 98.91 percent. These models categorize brain images as normal or pathological, and their performance was evaluated.
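As one illustration of how an RNN can be applied to slice classification (a sketch under assumed settings, not the study's model), the snippet below reads each MRI slice row by row with an LSTM and outputs a normal/pathological decision; the 128x128 input size and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class RowLSTMClassifier(nn.Module):
    """Treat each image row as one time step and classify from the final hidden state."""
    def __init__(self, width=128, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=width, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (N, 1, H, W) grayscale slices
        seq = x.squeeze(1)                # rows become a length-H sequence of W-dim vectors
        _, (h_n, _) = self.lstm(seq)      # final hidden state summarises the slice
        return self.fc(h_n[-1])

logits = RowLSTMClassifier()(torch.randn(4, 1, 128, 128))
print(logits.shape)                       # torch.Size([4, 2])
```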
Citations: 0
Convolutional Neural Network-BO Based Feature Extraction and Multi-Layer Neural Network-SR Based Classification for Facial Expression Recognition
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3938
K. Pandikumar, K. Senthamil Selvan, B. Sowmya, A. Niranjil Kumar
Facial expression recognition has become increasingly essential in artificial intelligence systems in recent years. Automatically recognizing facial expressions has long been considered a challenging task, since people vary significantly in the way they exhibit their expressions. Numerous researchers have established diverse approaches to analyze facial expressions automatically, but imprecision issues arise during recognition. To address such shortcomings, our proposed approach recognizes human facial expressions in an effective manner. The suggested method is divided into three stages: pre-processing, feature extraction, and classification. The inputs are pre-processed at the initial stage, and the CNN-BO algorithm is used to extract the best features in the feature extraction step. The extracted features are then provided to the classification stage, where the MNN-SR algorithm classifies the facial expression as joyful, miserable, normal, annoyed, astonished, or frightened. The parameters are also tuned effectively to obtain high recognition accuracy. In addition, the performance of the proposed approach is computed on the CMU/VASC, Caltech Faces 1999, JAFFE, and XM2VTS datasets. The performance of the proposed system is compared with a few existing approaches, and it is concluded that the proposed method provides superior performance with an optimal recognition rate.
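The sketch below illustrates the two-stage split described above with generic stand-ins, not the paper's CNN-BO and MNN-SR models: a frozen MobileNetV3 backbone produces feature vectors, and a separate multi-layer network classifies them into the six expressions; dummy tensors and labels replace the real face datasets.

```python
import numpy as np
import torch
import torch.nn.functional as F
from sklearn.neural_network import MLPClassifier
from torchvision import models

backbone = models.mobilenet_v3_small(weights=None).features   # stand-in CNN feature extractor

def cnn_features(batch):
    """Pool the CNN feature maps into one vector per face crop."""
    with torch.no_grad():
        return F.adaptive_avg_pool2d(backbone(batch), 1).flatten(1).numpy()

X = cnn_features(torch.randn(12, 3, 224, 224))    # dummy pre-processed face crops
y = np.random.randint(0, 6, size=12)              # joyful ... frightened (dummy labels)

# A separate multi-layer network does the classification, mirroring the two-stage design.
clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300)
clf.fit(X, y)
print("training accuracy on the toy batch:", clf.score(X, y))
```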
Citations: 0
Deep Learning-Based Electrocardiogram Signal Analysis for Abnormalities Detection Using Hybrid Cascade Feed Forward Backpropagation with Ant Colony Optimization Technique
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3945
C. Ganesh, B. Sathiyabhama
In this paper, a time-series data mining model is introduced for the analysis of ECG data for early identification of heart attacks. The ECG data sets extracted from PhysioNet are simulated in MATLAB. The data used for the model are preprocessed so that missing values are filled in. In this work, a cascade feedforward NN, which is similar to the Multilayer Perceptron (MLP) architecture, is proposed along with swarm intelligence. A hybrid method combining a cascade-forward NN classifier and ant colony optimization (ACO) is proposed. The swarm-based intelligence method optimizes the weight adjustment of the neural network and enhances its convergence behavior. The novelty lies in optimizing the NN parameters to narrow down convergence with the ACO implementation. ACO is used here to choose the optimal hidden node. The combined use of the machine learning algorithm and the neural network enhances system performance. The performance is evaluated using parameters such as True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). The improved accuracy of the proposed classifier model also raises the speed, and the method uses minimal memory. The implementation was done in MATLAB using real-time data.
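A minimal PyTorch sketch of the cascade-forward idea follows (illustrative only, not the paper's MATLAB model): unlike a plain feed-forward network, the raw input is also routed directly to the output layer. The ACO search over hidden-node counts is not shown, and the 32-feature ECG input and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class CascadeForwardNet(nn.Module):
    """Cascade-forward network: the input skips ahead and feeds the output layer too."""
    def __init__(self, in_dim=32, hidden=16, n_classes=2):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        # Cascade connection: hidden output AND the untouched input feed the output layer.
        self.out = nn.Linear(hidden + in_dim, n_classes)

    def forward(self, x):
        h = self.hidden(x)
        return self.out(torch.cat([h, x], dim=1))

model = CascadeForwardNet(hidden=16)        # hidden=16 stands in for the ACO-selected value
logits = model(torch.randn(8, 32))          # 8 ECG beats described by 32 features each
print(logits.shape)                          # torch.Size([8, 2])
```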
Citations: 0
Liver Cancer Detection and Classification Using Raspberry Pi
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3941
T. K. R. Agita, M. Moorthi
In practical radiology, early diagnosis and precise categorization of liver cancer are difficult issues, and manual segmentation is a time-consuming process. So, utilizing various methodologies based on an embedded system, we detect liver cancer from abdominal CT images using automated liver cancer segmentation and classification. The objective is to categorize CT scan images of primary and secondary liver disease using a Back Propagation Neural Network (BPNN) classifier, which has greater accuracy than previous approaches. In this work, a newly proposed method is presented which has four phases: image preprocessing, image segmentation, feature extraction, and classification of the liver. Level-set segmentation is used to segment the liver from abdominal CT images, and Practical Swarm Optimization (PSO) is used for tumor segmentation. The features of the liver are then extracted and given to the BPNN classifier to classify the liver cancer. These algorithms are implemented on a Raspberry Pi, which communicates serially through a MAX3232 transceiver. The GSM 800C module is connected to the system to send an SMS indicating primary or secondary cancer. The BPNN classification technique achieved an excellent accuracy of 97.98%. The experimental results demonstrate the efficiency of the proposed approach, which provides excellent accuracy.
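The snippet below is a minimal pyserial sketch of the notification step described above, not the authors' code: the classification result is sent as a text-mode SMS through a SIM800-class GSM module (assumed here to be what the abstract calls the GSM 800C) attached to the Raspberry Pi UART. The serial port, baud rate, and phone number are placeholders.

```python
import time
import serial  # pyserial

def send_sms(text, number="+10000000000", port="/dev/serial0", baud=9600):
    """Send one text-mode SMS through a SIM800-class GSM module on the Pi UART."""
    with serial.Serial(port, baud, timeout=1) as gsm:
        gsm.write(b"AT+CMGF=1\r")                      # select SMS text mode
        time.sleep(0.5)
        gsm.write(f'AT+CMGS="{number}"\r'.encode())    # start a message to the recipient
        time.sleep(0.5)
        gsm.write(text.encode() + b"\x1a")             # Ctrl-Z terminates and sends
        time.sleep(3)
        return gsm.read_all().decode(errors="ignore")  # module reply, e.g. "+CMGS: ..."

result = "secondary"                                   # stand-in for the BPNN output
print(send_sms(f"Liver lesion classified as {result} cancer"))
```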
Citations: 0
Classification of Fundus Images Using Convolutional Neural Networks
Pub Date : 2022-03-01 DOI: 10.1166/jmihi.2022.3947
R. Sabitha, G. Ramani
Diabetes causes damage to the retinal blood vessel network, resulting in Diabetic Retinopathy (DR). This is a serious vision-threatening condition for most diabetics. Color fundus photographs are utilized to diagnose DR, which necessitates qualified clinicians to detect the presence of lesions. It is difficult to identify DR with an automated method, and feature extraction is quite important for automated disease detection. In the current environment, Convolutional Neural Networks (CNNs) exceed previous handcrafted-feature-based image classification algorithms in terms of classification efficiency. In order to improve classification accuracy, this work presents a CNN structure for extracting features from retinal fundus images. In this strategy, the CNN output features are given as input to different machine learning classifiers. The approach is evaluated on images from the EYEPACS dataset using Decision Stump, J48, and Random Forest classifiers. To determine the effectiveness of a classifier, its accuracy, false positive rate (FPR), true positive rate (TPR), precision, recall, F-measure, and Kappa score are reported. The recommended feature extraction strategy paired with the Random Forest classifier outperforms all other classifiers on the EYEPACS dataset, with an average accuracy of 99% and a Kappa score of 0.98.
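The snippet below is a generic sketch of the CNN-features-plus-classifier hybrid described above, not the paper's network: a frozen ResNet-18 backbone (a stand-in for the proposed CNN) produces feature vectors that a Random Forest classifies, and accuracy and Kappa are computed; random tensors and dummy labels stand in for EYEPACS images.

```python
import numpy as np
import torch
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()                      # keep the 512-d pooled features

with torch.no_grad():
    X = backbone(torch.randn(20, 3, 224, 224)).numpy() # stand-in fundus images
y = np.random.randint(0, 2, size=20)                   # DR / no-DR dummy labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
pred = rf.predict(X)
print("accuracy:", accuracy_score(y, pred), "kappa:", cohen_kappa_score(y, pred))
```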
Citations: 0