Medical imaging and health informatics is a subfield of science and engineering that applies informatics to medicine and encompasses the design, development, and application of computational innovations to improve healthcare. The health domain presents a wide range of challenges that can be addressed using computational approaches; consequently, AI and associated technologies are becoming more common in society and in healthcare. Deep learning algorithms are currently a promising option for automated disease detection with high accuracy. Clinical data analysis employing these algorithms allows physicians to detect diseases earlier and treat patients more efficiently. Since these technologies have the potential to transform many aspects of patient care, disease detection, disease progression, and pharmaceutical organization, this book explores approaches such as deep learning algorithms, convolutional neural networks, and image processing techniques.
T. Jaware, K. Kumar, R. Badgujar, S. Antonov, "Medical Imaging and Health Informatics," Journal of Medical Imaging and Health Informatics, 2022-06-14. doi:10.1002/9781119819165
Yoshinori Tanabe, Yuka Tanaka, H. Nagata, Reina Murayama, T. Ishida
This study aimed to develop a method for pulmonary artery and vein (PA/PV) separation in three-dimensional computed tomography (3DCT) using a dual reconstruction technique and the addition of CT images. The physical image properties of multiple reconstruction kernels (FC13; FC13 3D-Q03; FC30 3D-Q03; FC83; FC13 twofold addition; FC13 threefold addition; FC13 fourfold addition; FC13 [3D-Q03] twofold addition; FC13+FC30 (3D-Q03); FC13+FC83) were evaluated in terms of spatial resolution using the modulation transfer function (MTF). The lung-kernel CT image (FC83) had high spatial resolution, with a 10% MTF value of 0.847. The noise power spectrum of the additive CT images was measured, and the CT values for the PA/PV with and without addition were compared. The addition of CT images increased the CT-value difference between the PA and PV. PA/PV 3DCT angiography (PA/PV 3DCTA), even with a small difference in CT values, could be effectively separated using a high-spatial-resolution kernel CT and the addition of CT images dedicated to subtraction. This novel, simple method can create PA/PV 3DCTA using a general CT scanner and 3D workstation and can be easily performed at any facility.
Yoshinori Tanabe, Yuka Tanaka, H. Nagata, Reina Murayama, T. Ishida, "Volume Subtraction Method Using Dual Reconstruction and Additive Technique for Pulmonary Artery/Vein 3DCT Angiography," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.394
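The physical rationale for the additive technique, that summing n reconstructions multiplies the PA/PV CT-value difference by n while uncorrelated noise grows only by about the square root of n, can be sketched numerically. The HU values, noise level, and threefold addition below are illustrative assumptions, not the study's measured parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CT values (HU): a small PA/PV difference buried in noise.
pa_hu, pv_hu, noise_sd = 320.0, 300.0, 15.0
n_pixels, n_fold = 10_000, 3   # threefold addition, as with the FC13 kernels

def acquire(mean_hu):
    """Simulate one reconstructed CT region with additive Gaussian noise."""
    return rng.normal(mean_hu, noise_sd, n_pixels)

# Single reconstruction vs. n-fold addition of independent reconstructions.
pa_single, pv_single = acquire(pa_hu), acquire(pv_hu)
pa_added = sum(acquire(pa_hu) for _ in range(n_fold))
pv_added = sum(acquire(pv_hu) for _ in range(n_fold))

# Addition multiplies the mean CT-value difference by n while independent
# noise grows only by sqrt(n), so contrast-to-noise improves ~sqrt(n).
cnr_single = abs(pa_single.mean() - pv_single.mean()) / pa_single.std()
cnr_added = abs(pa_added.mean() - pv_added.mean()) / pa_added.std()
print(round(cnr_added / cnr_single, 2))
```

In a real scanner the reconstructions share raw projection data, so their noise is partially correlated and the gain is smaller than this idealized square-root factor.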
Cardiovascular disease (CVD) is among the most dreaded diseases, resulting in fatal events such as heart attacks. Accurate disease prediction is essential, and machine-learning techniques play a major role in predicting its occurrence. In this paper, a novel machine-learning model for accurate prediction of cardiovascular disease is developed that applies a unique feature selection technique called the Chronic Fatigue Syndrome Best Known Method (CFSBKM). Each feature is ranked based on its feature-importance score. The new learning model eliminates the most irrelevant and low-importance features from the dataset, resulting in a robust heart-disease risk-prediction model. A multinomial Naive Bayes classifier is used for the classification. The performance of the CFSBKM model is evaluated using the benchmark Cleveland dataset from the UCI repository, and the proposed model outperforms existing techniques.
A. Ann Romalt, Mathusoothana S. Kumar, "A Novel Machine Learning Based Probabilistic Classification Model for Heart Disease Prediction," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3940
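The abstract does not specify how CFSBKM computes its importance scores, but the general pattern it describes, rank each feature by an importance score and discard the low-ranked ones before classification, can be sketched as follows. The correlation-based score and the toy data are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the Cleveland data: 200 patients, 6 features,
# of which only the first two actually carry signal about the label.
n = 200
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Importance score: absolute correlation of each feature with the label
# (a simple proxy for the paper's feature-importance ranking).
scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
ranking = np.argsort(scores)[::-1]          # best feature first

# Keep the top-k features, eliminating the least relevant ones.
k = 2
selected = np.sort(ranking[:k])
print(selected.tolist())
```

The retained columns would then be fed to the multinomial Naive Bayes classifier (after discretization, since multinomial NB expects count-like features).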
Markerless Augmented Reality (MAR) is a powerful technology currently used by medical device assemblers to aid design, assembly, disassembly, and maintenance operations. Medical assemblers build equipment to physicians' requirements and also maintain its quality and sanitation. The major research challenges in MAR are establishing automatic registration of parts, finding and tracking part orientation, and coping with the lack of depth and visual features. This work proposes a rapid dual-feature tracking method that combines Visual Simultaneous Localization and Mapping (SLAM) with Matched Pairs Selection (MAPSEL). The main idea is to attain high tracking accuracy using the combined method. Because depth images are noisy due to dynamic changes in environmental factors, which degrades tracking accuracy, a Graph-Based Joint Bilateral with Sharpening Filter (GRB-JBF with SF) is proposed to obtain a good depth image map. The best feature points are then obtained for matching using Oriented FAST and Rotated BRIEF (ORB) as the feature detector, Fast Retina Keypoint with Histogram of Gradients (FREAK-HoG) as the feature descriptor, and feature matching using Rajsk's distance. Finally, the virtual object is rendered based on 3D affine and projection transformations. Performance is computed in terms of tracking accuracy, tracking time, and rotation error at different distances using MATLAB R2017b. The observed results show that the proposed method attained the lowest position error, about 0.1 cm to 0.3 cm. The rotation error was minimal, between 2.40° and 3.10°, averaging 2.714°. Furthermore, the proposed combination consumed less time per frame than the other combinations and achieved a higher tracking accuracy of about 95.14% for 180 tracked points. The observed outcomes demonstrate superior performance compared with existing methods.
D. Roopa, S. Bose, "A Rapid Dual Feature Tracking Method for Medical Equipments Assembly and Disassembly in Markerless Augmented Reality," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3944
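ORB, named above as the feature detector, produces binary descriptors that are matched by Hamming distance. A minimal sketch of that matching step, using random stand-in descriptors rather than real image features:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ORB-style binary descriptors (256 bits) for keypoints in two frames.
n_feat, n_bits = 8, 256
frame1 = rng.integers(0, 2, (n_feat, n_bits), dtype=np.uint8)
# frame2: the same keypoints with ~5% of bits flipped, standing in for
# viewpoint change and sensor noise between frames.
flips = rng.random((n_feat, n_bits)) < 0.05
frame2 = frame1 ^ flips.astype(np.uint8)

# Brute-force matching: for each frame1 descriptor, pick the frame2
# descriptor with the minimum Hamming distance (count of differing bits).
dists = (frame1[:, None, :] ^ frame2[None, :, :]).sum(axis=2)
matches = dists.argmin(axis=1)
print(matches.tolist())  # → [0, 1, 2, 3, 4, 5, 6, 7]
```

Unrelated 256-bit descriptors differ in roughly 128 bits while true correspondences differ in only ~13 here, which is why the identity matching is recovered despite the noise.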
As medical image processing research has progressed, image fusion has emerged as a practical solution that automatically extracts relevant data from multiple images before fusing them into a single, unified image. Medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) play a crucial role in the diagnosis and classification of brain tumors (BT). A single imaging technique is not sufficient for a correct diagnosis: ambiguous scans can lead doctors to incorrect diagnoses, which can be unsafe for the patient. The solution is to fuse images from different scans containing complementary information to generate accurate images with minimal uncertainty. This research presents a novel method for the automated identification and classification of brain tumors using multi-modal deep learning (AMDL-BTDC). The proposed AMDL-BTDC model first performs image pre-processing using a bilateral filtering (BF) technique. Next, feature vectors are generated using a pair of pre-trained deep learning models, EfficientNet and SqueezeNet. The Slime Mould Algorithm (SMA) is used to find the DL models' optimal hyperparameter settings. Finally, once the features have been fused, an autoencoder (AE) model is used for BT classification. The proposed model's superior performance over other techniques under diverse measures was validated by extensive testing on a benchmark medical imaging dataset.
S. Sandhya, M. Senthil Kumar, "Automated Multimodal Fusion Based Hyperparameter Tuned Deep Learning Model for Brain Tumor Diagnosis," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3942
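Bilateral filtering, the pre-processing step of AMDL-BTDC, smooths noise while preserving edges by weighting each neighbour by both its spatial distance and its intensity difference from the centre pixel. A naive sketch (the radius and sigma values are illustrative, not the paper's settings):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Naive bilateral filter: Gaussian spatial weight times Gaussian
    intensity (range) weight, so large jumps in value get ~zero weight
    and edges survive the smoothing."""
    h, w = img.shape
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# Noisy step edge: the filter should reduce noise but keep the 0 -> 100 step.
img = np.hstack([np.zeros((16, 8)), 100 * np.ones((16, 8))])
noisy = img + np.random.default_rng(3).normal(0, 5, img.shape)
smoothed = bilateral_filter(noisy)
print(round(float(np.abs(smoothed - img).mean()), 1))
```

With the 0-to-100 step far larger than sigma_r, neighbours across the edge receive negligible range weight, which is what distinguishes this from plain Gaussian blurring.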
Brain tumour is one of the most life-threatening diseases in the world; it shortens the human life span. Computer vision benefits human health research because it removes the need for human judgement to obtain accurate results. The most reliable and widely used imaging techniques are CT scans, X-rays, and magnetic resonance imaging (MRI); MRI can resolve very small structures. The focus of this paper is the range of techniques for detecting brain cancer from brain MRI. Early detection and diagnosis of a tumour are essential for radiologists to initiate better treatment. MRI is a competent and speedy method of examining a brain tumour, and as a non-invasive technique it aids the segmentation of brain-tumour images. Deep learning algorithms deliver good outcomes in reducing time consumption and in precise tumour diagnosis. This research proposes Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) supervised deep learning models for automatically detecting and segmenting brain tumours. The RNN model outperforms the CNN model, reaching 98.91%. These models categorize brain images as normal or pathological, and their performance was evaluated.
Meenal Thayumanavan, Asokan Ramasamy, "Recurrent Neural Network Deep Learning Techniques for Brain Tumor Segmentation and Classification of Magnetic Resonance Imaging Images," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3943
K. Pandikumar, K. Senthamil Selvan, B. Sowmya, A. Niranjil Kumar
Facial expression recognition has become increasingly essential in artificial machine intelligence systems in recent years. Automatically recognizing facial expressions has always been considered a challenging task, since people vary significantly in how they exhibit their expressions. Numerous researchers have established diverse approaches to analyzing facial expressions automatically, but imprecision issues arise during facial recognition. To address these shortcomings, the proposed approach recognizes human facial expressions effectively. The suggested method is divided into three stages: pre-processing, feature extraction, and classification. The inputs are pre-processed in the initial stage, and the CNN-BO algorithm extracts the best features in the feature-extraction step. The extracted features are then passed to the classification stage, where the MNN-SR algorithm classifies the facial expression as joyful, miserable, normal, annoyed, astonished, or frightened. The parameters are tuned effectively to obtain high recognition accuracy. In addition, the performance of the proposed approach is computed on four datasets, namely CMU/VASC, Caltech Faces 1999, JAFFE, and XM2VTS. Comparative analysis against several existing approaches concludes that the proposed method provides superior performance with an optimal recognition rate.
K. Pandikumar, K. Senthamil Selvan, B. Sowmya, A. Niranjil Kumar, "Convolutional Neural Network-BO Based Feature Extraction and Multi-Layer Neural Network-SR Based Classification for Facial Expression Recognition," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3938
In this paper, a time-series data-mining model is introduced for analysing ECG data for early identification of heart attacks. ECG datasets extracted from PhysioNet are simulated in MATLAB, and the data are preprocessed so that missing values are filled in. A cascade feedforward neural network (NN), similar in architecture to a multilayer perceptron (MLP), is proposed along with swarm intelligence: a hybrid method combining a cascade-forward NN classifier with ant colony optimization (ACO). The swarm-based method optimizes the weight adjustment of the neural network and enhances its convergence behaviour; the novelty lies in optimizing the NN parameters to narrow down convergence via the ACO implementation, with ACO used here to choose the optimal hidden nodes. The combined use of the machine-learning algorithm with the neural network enhances system performance, which is evaluated using True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN). The improved accuracy of the proposed classifier model also raises its speed, and the method uses minimal memory. The implementation was done in MATLAB using real-time data.
C. Ganesh, B. Sathiyabhama, "Deep Learning-Based Electrocardiogram Signal Analysis for Abnormalities Detection Using Hybrid Cascade Feed Forward Backpropagation with Ant Colony Optimization Technique," Journal of Medical Imaging and Health Informatics, 2022-03-01. doi:10.1166/jmihi.2022.3945
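The TP/TN/FP/FN counts mentioned above combine into the standard evaluation measures as follows; the counts used here are hypothetical, chosen purely for illustration:

```python
def metrics(tp, tn, fp, fn):
    """Standard confusion-matrix measures used to evaluate a classifier."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # overall accuracy
    tpr = tp / (tp + fn)                    # true positive rate (recall)
    fpr = fp / (fp + tn)                    # false positive rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return acc, tpr, fpr, precision, f1

# Hypothetical counts for a 200-record test set (not the paper's results).
acc, tpr, fpr, precision, f1 = metrics(tp=90, tn=85, fp=15, fn=10)
print(acc, tpr, fpr)  # 0.875 0.9 0.15
```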
In practical radiology, early diagnosis and precise categorization of liver cancer are difficult problems, and manual segmentation is time-consuming. We therefore detect liver cancer from abdominal CT images using automated segmentation and classification built on an embedded system. The objective is to categorize CT images of primary and secondary liver disease using a Back Propagation Neural Network (BPNN) classifier, which achieves greater accuracy than previous approaches. The newly proposed method has four phases: image preprocessing, image segmentation, feature extraction, and liver classification. Level-set segmentation is used to segment the liver from abdominal CT images, and Particle Swarm Optimization (PSO) is used for tumor segmentation. Features extracted from the liver are then given to the BPNN classifier to classify the liver cancer. These algorithms are implemented on a Raspberry Pi, which interfaces serially through a MAX3232 transceiver. A GSM 800C module connected to the system sends an SMS reporting the cancer as primary or secondary. The BPNN classification technique achieved an excellent accuracy of 97.98%. The experimental results demonstrate the efficiency of the proposed approach.
{"title":"Liver Cancer Detection and Classification Using Raspberry Pi","authors":"T. K. R. Agita, M. Moorthi","doi":"10.1166/jmihi.2022.3941","DOIUrl":"https://doi.org/10.1166/jmihi.2022.3941","url":null,"abstract":"In practical radiology, early diagnosis and precise categorization of liver cancer are difficult issues. Manual segmentation is also a time-consuming process. So, utilizing various methodologies based on an embedded system, we detect liver cancer from abdominal CT images using automated liver cancer segmentation and classification. The objective is to categorize CT scan images of primary and secondary liver disease using a Back Propagation Neural Network (BPNN) classifier, which has greater accuracy than previous approaches. In this work, a newly proposed method is shown which has four phases: image preprocessing, image segmentation, extraction of the features, and classification of the liver. Level set segmentation for segmenting the liver from abdominal CT images and Particle Swarm Optimization (PSO) for the tumor segmentation. Then the features from the liver are extracted and given to the BPNN classifier to classify the liver cancer. These algorithms are implemented on the Raspberry Pi. Then it serially interfaces with the MAX3232 protocol via serial communication. The GSM 800C module is connected to the system to send SMS as primary or secondary cancer. The BPNN classification technique achieved an excellent accuracy of 97.98%. The experimental results demonstrate the efficiency of this proposed approach, which provides excellent accuracy with good results.","PeriodicalId":49032,"journal":{"name":"Journal of Medical Imaging and Health Informatics","volume":"27 4 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77983768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
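The back-propagation training loop at the heart of the BPNN classifier above can be sketched as a minimal one-hidden-layer network in pure Python. The layer sizes, learning rate, and toy feature vectors below are illustrative assumptions, not the authors' configuration or their CT-derived features:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyBPNN:
    """Minimal one-hidden-layer back-propagation network (illustrative sketch)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        # Each weight row carries a trailing bias weight (for the constant +1 input).
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]

    def forward(self, x):
        h = [sigmoid(sum(w * v for w, v in zip(row, x + [1.0]))) for row in self.w1]
        y = sigmoid(sum(w * v for w, v in zip(self.w2, h + [1.0])))
        return h, y

    def train_step(self, x, target, lr=0.5):
        h, y = self.forward(x)
        # Output delta for squared error with a sigmoid activation.
        dy = (y - target) * y * (1 - y)
        # Hidden deltas, back-propagated through the output weights.
        dh = [dy * self.w2[j] * h[j] * (1 - h[j]) for j in range(len(h))]
        # Gradient-descent updates (bias handled as the extra +1 input).
        for j, hv in enumerate(h + [1.0]):
            self.w2[j] -= lr * dy * hv
        for j in range(len(h)):
            for i, xv in enumerate(x + [1.0]):
                self.w1[j][i] -= lr * dh[j] * xv
        return (y - target) ** 2

def train(net, data, epochs=2000):
    loss = 0.0
    for _ in range(epochs):
        loss = sum(net.train_step(x, t) for x, t in data)
    return loss

# Hypothetical 2-D "feature vectors" with a primary (0) / secondary (1) label.
data = [([0.1, 0.9], 0), ([0.9, 0.1], 1), ([0.2, 0.8], 0), ([0.8, 0.3], 1)]
net = TinyBPNN(n_in=2, n_hidden=3)
initial = sum((net.forward(x)[1] - t) ** 2 for x, t in data)
final = train(net, data, epochs=2000)
```

On this separable toy data the squared-error loss drops well below its initial value after training, which is the behavior the abstract's four-phase pipeline relies on in its final classification stage.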
Diabetes damages the retinal blood vessel network, resulting in Diabetic Retinopathy (DR), a serious vision-threatening condition for many diabetics. Color fundus photographs are used to diagnose DR, which requires qualified clinicians to detect the presence of lesions, and identifying DR automatically is difficult. Feature extraction is crucial for automated disease detection, and Convolutional Neural Networks (CNNs) currently exceed earlier handcrafted feature-based algorithms in image classification performance. To improve classification accuracy, this work presents a CNN structure for extracting features from retinal fundus images. In the recommended strategy, the CNN's output features are given as input to different machine learning classifiers. The approach is evaluated on images from the EYEPACS dataset using Decision Stump, J48, and Random Forest classifiers. Classifier effectiveness is reported in terms of accuracy, false positive rate (FPR), true positive rate (TPR), precision, recall, F-measure, and Kappa score. The recommended feature extraction strategy paired with the Random Forest classifier outperforms all other classifiers on the EYEPACS dataset, with an average accuracy of 99% and a Kappa score of 0.98.
{"title":"Classification of Fundus Images Using Convolutional Neural Networks","authors":"R. Sabitha, G. Ramani","doi":"10.1166/jmihi.2022.3947","DOIUrl":"https://doi.org/10.1166/jmihi.2022.3947","url":null,"abstract":"Diabetes causes damage to the retinal blood vessel networks, resulting in Diabetic Retinopathy (DR). This is a serious vision-threatening condition for most diabetics. Color fundus photographs are utilized to diagnose DR, which necessitates the employment of qualified clinicians to detect the presence of lesions. It is difficult to identify DR in an automated method. Feature extraction is quite important in terms of automated sickness detection. Convolutional Neural Network (CNN) exceeds previous handcrafted feature-based image classification algorithms in terms of picture classification efficiency in the current environment. In order to improve classification accuracy, this work presents the CNN structure for extracting attributes from retinal fundus images. The output properties of CNN are given as input to different machine learning classifiers in this recommended strategy. This approach is evaluated using pictures from the EYEPACS datasets using Decision Stump, J48 and Random Forest classifiers. To determine the effectiveness of a classifier, its accuracy, false positive rate (FPR), True positive Rate (TPR), precision, recall, F-measure, and Kappa-score are illustrated. The recommended feature extraction strategy paired with the Random forest classifier outperforms all other classifiers on the EYEPACS datasets, with average accuracy and Kappa-score (k-score) of 99% and 0.98 respectively.","PeriodicalId":49032,"journal":{"name":"Journal of Medical Imaging and Health Informatics","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90018205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
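All of the evaluation metrics named in the abstract above (accuracy, FPR, TPR, precision, recall, F-measure, Kappa score) can be computed from a binary confusion matrix. A minimal pure-Python sketch, using an illustrative confusion matrix rather than the paper's actual EYEPACS results:

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard classification metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    tpr = tp / (tp + fn)      # true positive rate = recall = sensitivity
    fpr = fp / (fp + tn)      # false positive rate
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tpr / (precision + tpr)
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_observed = accuracy
    p_yes = ((tp + fp) / total) * ((tp + fn) / total)   # both say "positive" by chance
    p_no = ((fn + tn) / total) * ((fp + tn) / total)    # both say "negative" by chance
    p_expected = p_yes + p_no
    kappa = (p_observed - p_expected) / (1 - p_expected)
    return {"accuracy": accuracy, "TPR": tpr, "FPR": fpr,
            "precision": precision, "recall": tpr,
            "F-measure": f_measure, "kappa": kappa}

# Illustrative counts only, not the paper's reported results.
m = binary_metrics(tp=95, fp=5, fn=5, tn=95)
# accuracy = 0.95, FPR = 0.05, kappa = 0.90
```

This also illustrates why the abstract reports the Kappa score alongside accuracy: Kappa discounts the agreement a classifier would reach by chance, so it is the stricter of the two figures.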