Background: Diffusion-weighted imaging (DWI) may not always provide positive results for acute ischemic stroke (AIS) diagnosis. In the present study, we aimed to identify risk factors that affect the frequency of inconsistent DWI results in patients with AIS. Methods: A total of 212 patients diagnosed with AIS underwent DWI at the time of hospital admission and again 24 hours after AIS was diagnosed. According to the outcomes of the two DWI scans, patients were classified into an inconsistent group (negative on the initial scan but positive on the second scan) and a consistent group (negative or positive on both scans). A number of parameters were compared between the two patient groups, including demographic characteristics, disease history, imaging time, cause of stroke, and NIHSS score at admission. Univariate and multivariate analyses were employed to identify independent risk factors for inconsistent DWI results. Results: We found that prior stroke experience, the time of the initial DWI scan prior to the diagnosis of AIS (also referred to as DWI latency), and the time between the first and second DWI scans all differed significantly between the two patient groups. All three factors were also identified as independent risk factors for inconsistent DWI results. In addition, the probability of an inconsistent result showed an increasing trend with DWI latency, in a time-dependent manner, up to 3 hours. Conclusion: Our data indicate that DWI should be performed within three hours of hospital admission and repeated within 24 hours after AIS is diagnosed, especially for patients with negative results on the initial scan.
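The univariate risk-factor screening described above typically reduces each candidate factor to a 2x2 table and an odds ratio. As a generic illustration (not code from the paper, and with purely hypothetical counts), an odds ratio with a Woolf 95% confidence interval can be computed as:

```python
import math

def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio for a 2x2 table with a 95% CI (Woolf's method)."""
    or_ = (exposed_cases * unexposed_controls) / (exposed_controls * unexposed_cases)
    # Standard error of log(OR): sqrt of the sum of reciprocal cell counts
    se = math.sqrt(1 / exposed_cases + 1 / exposed_controls
                   + 1 / unexposed_cases + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical example: prior-stroke counts in inconsistent vs. consistent groups
or_, ci = odds_ratio(20, 30, 10, 40)
```

A CI that excludes 1 marks the factor as a candidate for the multivariate model.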
{"title":"Risk Factors That Affect Diffusion-Weighted Imaging Results on Patients with Acute Ischemic Stroke: A Retrospective Analysis","authors":"Kangyi Pan, Y. Shen, Huaping Sun","doi":"10.1166/jmihi.2021.3920","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3920","url":null,"abstract":"Background: Diffusion-weighted imaging (DWI) may not always provide positive results for acute ischemic stroke diagnosis (AIS). In the present study, we aim to identify risk factors that affect the frequency of inconsistent DWI results in patients with AIS. Methods: A\u0000 total of 212 patients diagnosed with AIS underwent DWI at the time of hospital admission and 24 hours after AIS was diagnosed. According to the outcome of the two DWI results, patients were classfied into the inconsistent group (negative for initial scan, but positive for second scan) and\u0000 the consistent group (negative or positive for both scans). A number of parameters were compared between the two patient groups, including demographic characteristics, disease history, imaging time, cause of stroke and NIHSS score at admission. Univariate and multivariate analysis were employed\u0000 to predict the independent risk factors for inconsistent DWI results. Results: We found that prior stroke experience, time of initial DWI scan prior to the diagnosis of AIS (also referred as DWI latency) and time between the first and second DWI were all significantly different between\u0000 the two patient groups. All 3 factors were also identified as independent risk factors for the inconsistent DWI results. In addition, probability of DWI latency shows an increasing trend in a time-dependent manner up to 3 hours. Conclusion: Our data indicate that DWI should be performed\u0000 within three hours since hospital admission and repeated within 24 hours after AIS is diagnosed, especially for the patients that showed negative results in the initial scan.","PeriodicalId":393031,"journal":{"name":"J. 
Medical Imaging Health Informatics","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130566912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of the occurrence of a seizure would greatly help caregivers take the necessary precautions for the patient. A deep learning model, a recurrent neural network (RNN), is designed to predict upcoming EEG values. A deep data analysis is performed to find the parameter that best differentiates normal values from seizure values. Next, a recurrent neural network model is built to predict the values in advance. Four different variants of recurrent neural networks are designed, varying the number of time steps and the number of LSTM layers, and the best model is identified. The best RNN model is then used for prediction. The performance of the model is evaluated in terms of explained variance score and R2 score. The model was found to perform well only when the number of elements in the test dataset is minimal, so it can predict seizure values only a few seconds in advance.
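The two evaluation metrics named above have standard definitions; as a generic sketch (not the paper's code), they can be computed from predictions as follows. Note that explained variance ignores a constant prediction bias, while R2 penalizes it:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def explained_variance(y_true, y_pred):
    """1 - Var(residuals) / Var(y_true); insensitive to constant offsets."""
    err = [t - p for t, p in zip(y_true, y_pred)]
    mean_err = sum(err) / len(err)
    var_err = sum((e - mean_err) ** 2 for e in err) / len(err)
    mean_y = sum(y_true) / len(y_true)
    var_y = sum((t - mean_y) ** 2 for t in y_true) / len(y_true)
    return 1 - var_err / var_y
```

For a prediction shifted by a constant, explained variance stays at 1.0 but R2 drops, which is why both are usually reported together.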
{"title":"Deep Learning Model for Epileptic Seizure Prediction","authors":"K. Ganapriya, N. Maheswari, R. Venkatesh","doi":"10.1166/jmihi.2021.3916","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3916","url":null,"abstract":"Prediction of occurrence of a seizure would be of greater help to make necessary precaution for taking care of the patient. A Deep learning model, recurrent neural network (RNN), is designed for predicting the upcoming values in the EEG values. A deep data analysis is made to find the\u0000 parameter that could best differentiate the normal values and seizure values. Next a recurrent neural network model is built for predicting the values earlier. Four different variants of recurrent neural networks are designed in terms of number of time stamps and the number of LSTM layers\u0000 and the best model is identified. The best identified RNN model is used for predicting the values. The performance of the model is evaluated in terms of explained variance score and R2 score. The model founds to perform well number of elements in the test dataset is minimal\u0000 and so this model can predict the seizure values only a few seconds earlier.","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131973680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spine tumor is a fast-growing mass of abnormal cells in the spinal canal or vertebrae of the spine, and it affects many people. Thousands of researchers have focused on this disease for a better understanding of tumor classification, in order to provide more effective treatment to patients. The main objective of this paper is to develop a methodology for the classification of spine images. We propose an efficient and effective method for classifying spine images and identifying the tumor region without any human assistance. Contrast Limited Adaptive Histogram Equalization is used to improve the contrast of the spine images and to eliminate the effect of unwanted noise. The proposed methodology classifies spine images as normal or abnormal using a Convolutional Neural Network (CNN) model, achieving 99.4% accuracy, 94.5% sensitivity, 95.6% precision, and 99.9% specificity. Compared with previous methods, our proposed solution achieved the highest classification performance on the spine dataset. Experimental results on different images show that the analysis for spine tumor detection is fast and accurate compared with manual detection performed by radiologists or clinical experts, so the tumor-affected area and abnormal images can be identified easily.
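The four figures reported above all derive from a binary confusion matrix. As a generic reference (not the paper's code), treating "abnormal" as the positive class:

```python
def clf_metrics(tp, fp, fn, tn):
    """Binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall on the abnormal class
    precision = tp / (tp + fp)     # fraction of abnormal calls that are correct
    specificity = tn / (tn + fp)   # recall on the normal class
    return accuracy, sensitivity, precision, specificity

# Hypothetical counts for illustration
acc, sens, prec, spec = clf_metrics(tp=90, fp=5, fn=10, tn=95)
```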
{"title":"Classification of Spine Image from MRI Image Using Convolutional Neural Network","authors":"G. Raja, J. Mohan","doi":"10.1166/jmihi.2021.3890","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3890","url":null,"abstract":"The spine tumor is a fast-growing abnormal cell in the spinal canal or vertebrae of the spine, it affected many people. Thousands of researchers have focused on this disease for better understanding of tumor classification to provide more effective treatment to the patients. The main\u0000 objective of this paper is to form a methodology for classification of spine image. We proposed an efficient and effective method that helpful for classifying the spine image and identified tumor region without any human assistance. Basically, Contrast Limited Adaptive Histogram Equalization\u0000 used to improve the contrast of spine images and to eliminate the effect of unwanted noise. The proposed methodology will classify spine images as Normal or Abnormal using Convolutional Neural Network (CNN) model algorithm. The CNN model can classify spine image as Normal or Abnormal with\u0000 99.4% Accuracy, 94.5% Sensitivity, 95.6% Precision, and 99.9% specificity. Compared with the previous existing methods, our proposed solution achieved the highest performance in terms of classification based on the spine dataset. From the experimental results performed on the different images,\u0000 it is clear that the analysis for the spine tumor detection is fast and accurate when compared with the manual detection performed by radiologists or clinical experts, So, anyone can easily identify the tumor affected area also determine abnormal images.","PeriodicalId":393031,"journal":{"name":"J. 
Medical Imaging Health Informatics","volume":"2673 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125163769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Segmenting irregular pixels in a Glioma brain image is difficult, because the difference in pixel intensity between tumor and non-tumor images is small. In the proposed method, a Glioma brain tumor is detected in a brain MRI image using an image-fusion-based Co-Active Adaptive Neuro Fuzzy Inference System (CANFIS) classification technique. The contrast of low-resolution brain image pixels is improved through image fusion: two wavelet transforms, Discrete and Stationary, are used to fuse two brain images and enhance the internal regions. The pixels of the contrast-enhanced image are then transformed into a multi-scale, multi-frequency, multi-orientation representation through the Gabor transform. Linear features obtained from this Gabor-transformed brain image are used to distinguish non-tumor Glioma brain images from tumor-affected brain images through the CANFIS method. The impact of feature extraction on the proposed Glioma detection method is also examined in terms of detection rate. Morphological operations are then applied to the classified Glioma brain image to locate and segment the tumor portions. The performance of the proposed system is analyzed with respect to various segmentation approaches, and the simulation results are compared with different state-of-the-art techniques in terms of various parameter metrics and detection rate.
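The multi-orientation representation mentioned above is built by convolving the image with a bank of Gabor kernels. As a minimal sketch of how one such kernel is generated (generic formulation, not the paper's implementation; all parameter defaults are illustrative):

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=8.0, psi=0.0, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian window times an oriented cosine.

    theta sets orientation, lam the wavelength, sigma the envelope width,
    gamma the spatial aspect ratio, psi the phase offset.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates into the filter's orientation
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier
```

Varying theta over several angles and lam over several scales gives the multi-scale, multi-orientation filter bank from which features are extracted.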
{"title":"An Efficient Framework for the Segmentation of Glioma Brain Tumor Using Image Fusion and Co-Active Adaptive Neuro Fuzzy Inference System Classification Method","authors":"C. Moorthy, K. A. Britto","doi":"10.1166/jmihi.2021.3915","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3915","url":null,"abstract":"The image segmentation of any irregular pixels in Glioma brain image can be considered as difficult. There is a smaller difference between the pixel intensity of both tumor and non-tumor images. The proposed method stated that Glioma brain tumor is detected in brain MRI image by utilizing\u0000 image fusion based Co-Active Adaptive Neuro Fuzzy Inference System (CANFIS) categorization technique. The low resolution brain image pixels are improved by contrast through image fusion method. This paper uses two different wavelet transforms such as, Discrete and Stationary for fusing two\u0000 brain images for enhancing the internal regions. The pixels in contrast enhanced image is transformed into multi scale, multi frequency and orientation format through Gabor transform approach. The linear features can be obtained from this Gabor transformed brain image and it is being used\u0000 to distinguish the non-tumor Glioma brain image from the tumor affected brain image through CANFIS method in this paper. The feature extraction and its impacts are being assigned on the proposed Glioma detection method is also examined in terms of detection rate. Then, morphological operations\u0000 are involved on the resultant of classified Glioma brain image used to address and segment the tumor portions. The proposed system performance is analyzed with respect to various segmentation approaches. The proposed work simulation results can be compared with different state-of-the art techniques\u0000 with respect to various parameter metrics and detection rate.","PeriodicalId":393031,"journal":{"name":"J. 
Medical Imaging Health Informatics","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126611658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Melanoma skin cancer is a common disease that develops in the melanocytes that produce melanin. In this work, a deep hybrid learning model is used to detect skin cancer and classify it. The dataset used contains two classes of skin cancer: benign and malignant. Since the dataset is imbalanced between the number of images of malignant and benign lesions, an augmentation technique is used to balance it. To improve the clarity of the images, they are enhanced using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. To detect only the affected lesion area, the lesions are segmented using a neural-network-based ensemble model that combines the segmentation outputs of a Fully Convolutional Network (FCN), SegNet, and U-Net, producing a binary image of the skin and the lesion in which the lesion is represented in white and the skin in black. These binary images are further classified using different pre-trained models: Inception ResNet V2, Inception V3, ResNet 50, DenseNet, and a CNN. The best-performing pre-trained model is then fine-tuned to improve classification performance. To improve performance further, deep learning (DL) and machine learning (ML) are combined: feature extraction is done using DL models, and classification is performed by a Support Vector Machine (SVM). This computer-aided tool will assist doctors in diagnosing the disease faster than the traditional method. The proposed method yields a significant improvement of nearly 4% in performance.
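One common way to combine binary masks from several segmentation networks, as the ensemble above does with FCN, SegNet, and U-Net, is per-pixel majority voting. The paper does not specify its fusion rule, so the following is a hedged sketch of that standard approach:

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks: a pixel is lesion (1) when
    more than half of the models mark it as lesion."""
    stack = np.stack(masks)                     # shape: (n_models, H, W)
    votes = stack.sum(axis=0)
    return (votes * 2 > len(masks)).astype(np.uint8)

# Three toy 2x2 masks standing in for FCN / SegNet / U-Net outputs
m1 = np.array([[1, 0], [1, 1]])
m2 = np.array([[1, 0], [0, 1]])
m3 = np.array([[0, 1], [0, 1]])
fused = majority_vote([m1, m2, m3])
```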
{"title":"Melanoma Skin Cancer Recognition and Classification Using Deep Hybrid Learning","authors":"Jansi Rani Sella Veluswami, M. E. Prasanth, K. Harini, U. Ajaykumar","doi":"10.1166/jmihi.2021.3898","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3898","url":null,"abstract":"Melanoma skin cancer is a common disease that develops in the melanocytes that produces melanin. In this work, a deep hybrid learning model is engaged to distinguish the skin cancer and classify them. The dataset used contains two classes of skin cancer–benign and malignant. Since\u0000 the dataset is imbalanced between the number of images in malignant lesions and benign lesions, augmentation technique is used to balance it. To improve the clarity of the images, the images are then enhanced using Contrast Limited Adaptive Histogram Equalization Technique (CLAHE) technique.\u0000 To detect only the affected lesion area, the lesions are segmented using the neural network based ensemble model which is the result of combining the segmentation algorithms of Fully Convolutional Network (FCN), SegNet and U-Net which produces a binary image of the skin and the lesion, where\u0000 the lesion is represented with white and the skin is represented by black. These binary images are further classified using different pre-trained models like Inception ResNet V2, Inception V3, Resnet 50, Densenet and CNN. Following that fine tuning of the best performing pre-trained model\u0000 is carried out to improve the performance of classification. To further improve the performance of the classification model, a method of combining deep learning (DL) and machine learning (ML) is carried out. Using this hybrid approach, the feature extraction is done using DL models and the\u0000 classification is performed by Support Vector Machine (SVM). This computer aided tool will assist doctors in diagnosing the disease faster than the traditional method. 
There is a significant improvement of nearly 4% increase in the performance of the proposed method is presented.","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128927107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The actions humans execute with their hands play a remarkable part in controlling and handling a variety of objects in daily life activities. Losing one hand, or degradation in its functioning, greatly reduces regular activity. Hence, the design of prosthetic hands that help individuals resume their regular activity seems a better remedy in this new era. This paper puts forward a framework that uses machine learning algorithms for classifying hand gesture signals. The surface electromyography (sEMG) dataset, acquired from a publicly available database for 9 wrist movements, is used to identify potential biomarkers for classification and to evaluate the efficacy of the proposed algorithm. Statistical and time-domain features of the sEMG signals from 27 intact subjects and 11 trans-radial amputated subjects are extracted, and the optimal features are determined using a correlation-based feature selection approach. The performance of several machine learning classifiers, namely support vector machine (SVM), Naïve Bayes (NB), and an Ensemble classifier, is evaluated. The experimental results highlight that the SVM classifier yields the maximum movement classification accuracy: 99.6% for intact and 97.56% for trans-amputee subjects. The proposed approach offers better accuracy and sensitivity than other approaches that have used the sEMG dataset for movement classification.
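Time-domain sEMG features of the kind described above are usually simple per-window statistics. The paper does not list its exact feature set, so the following sketch uses four widely used ones (mean absolute value, root mean square, waveform length, zero crossings):

```python
def emg_features(x):
    """Common time-domain sEMG features for one analysis window."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n                          # mean absolute value
    rms = (sum(v * v for v in x) / n) ** 0.5                  # root mean square
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))      # waveform length
    zc = sum(1 for i in range(n - 1) if x[i] * x[i + 1] < 0)  # zero crossings
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Each window yields one feature vector, and the vectors from all channels are concatenated before feature selection and classification.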
{"title":"Gesture Classification of Surface Electromyography Signals Using Machine Learning Algorithms for Hand Prosthetics","authors":"N. Subhashini, A. Kandaswamy","doi":"10.1166/jmihi.2021.3907","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3907","url":null,"abstract":"The actions of humans executed by their hands play a remarkable part in controlling and handling variety of objects in their daily life activities. The effect of losing or degradation in the functioning of one hand has a greater influence in bringing down the regular activity. Hence\u0000 the design of prosthetic hands which assists the individuals to enhance their regular activity seems a better remedy in this new era. This paper puts forward a classification framework using machine learning algorithms for classifying hand gesture signals. The surface electromyography (sEMG)\u0000 dataset acquired for 9 wrist movements of publicly available database are utilized to identify the potential biomarkers for classification and in evaluating the efficacy of the proposed algorithm. The statistical and time domain features of the sEMG signals from 27 intact subjects and 11 trans-radial\u0000 amputated subjects are extracted and the optimal features are determined implementing the feature selection approach based on correlation factor. The classifiers performance of machine learning algorithms namely support vector machine (SVM), Naïve bayes (NB) and Ensemble classifier are\u0000 evaluated. The experimental results highlight that the SVM classifier can yield the maximum accuracy movement classification of 99.6% for intact and 97.56% for trans-amputee subjects. The proposed approach offers better accuracy and sensitivity compared to other approaches that have used the\u0000 sEMG dataset for movement classification.","PeriodicalId":393031,"journal":{"name":"J. 
Medical Imaging Health Informatics","volume":"162 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127353965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breast cancer can be detected from early signs in mammograms and digital mammography, an opportunity on which Computer Aided Detection (CAD) algorithms can be built. Early detection, assisted by self-tests and periodic check-ups, can significantly enhance the chance of survival. The need for early detection of breast cancer, and the impact of false diagnoses on patients, has led researchers to investigate Deep Learning (DL) techniques for mammograms. A non-invasive cancer detection system is therefore required that is highly effective, accurate, fast, and robust. The proposed work has three steps: (i) pre-processing, (ii) segmentation, and (iii) classification. In the pre-processing stage, mean and median filtering algorithms remove noise from the images while keeping their features intact for better understanding and recognition, and a Canny edge detector performs edge detection. The Canny detector uses a Gaussian filter to smooth the image; Gaussian smoothing improves the quality of the image analysis process at the cost of blurring fine-scale image edges. In the next stage, the image representation is changed into a form that simplifies analysis: foreground and background subtraction is used for accurate breast image detection in segmentation, and unwanted images are removed from the input dataset. Finally, a novel RNN classifies and detects breast cancer: an Auto Encoder (AE) based RNN extracts features, Animal Migration Optimization (AMO) tunes the parameters of the RNN model, and a softmax classifier produces the final result. Experiments are conducted using the Mini-Mammographic (MIAS) breast cancer dataset, and the classifiers are measured using precision, recall, F-measure, and accuracy.
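The median filtering named in the pre-processing step removes impulse noise while preserving edges better than mean filtering. A naive sketch of it (generic, not the paper's implementation; border pixels are simply left unchanged here):

```python
import numpy as np

def median_filter(img, k=3):
    """Naive k x k median filter; edge pixels keep their original value."""
    out = img.copy()
    r = k // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            # Replace each interior pixel by the median of its k x k neighborhood
            out[i, j] = np.median(img[i - r:i + r + 1, j - r:j + r + 1])
    return out
```

A single bright noise spike is replaced by the median of its neighborhood, while a uniform background is untouched.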
{"title":"An Early Breast Cancer Detection System Using Recurrent Neural Network (RNN) with Animal Migration Optimization (AMO) Based Classification Method","authors":"S. Prakash, K. Sangeetha","doi":"10.1166/jmihi.2021.3885","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3885","url":null,"abstract":"Breast cancer can be detected using early signs of it mammograms and digital mammography. For Computer Aided Detection (CAD), algorithms can be developed using this opportunities. Early detection is assisted by self-test and periodical check-ups and it can enhance the survival chance\u0000 significantly. Due the need of breast cancer’s early detection and false diagnosis impact on patients, made researchers to investigate Deep Learning (DL) techniques for mammograms. So, it requires a non-invasive cancer detection system, which is highly effective, accurate, fast as well\u0000 as robust. Proposed work has three steps, (i) Pre-processing, (ii) Segmentation, and (iii) Classification. Firstly, preprocessing stage removing noise from images by using mean and median filtering algorithms are used, while keeping its features intact for better understanding and recognition,\u0000 then edge detection by using canny edge detector. It uses Gaussian filter for smoothening image. Gaussian smoothening is used for enhancing image analysis process quality, result in blurring of fine-scaled image edges. In the next stage, image representation is changed into something, which\u0000 makes analyses process as a simple one. Foreground and background subtraction is used for accurate breast image detection in segmentation. After completion of segmentation stage, the remove unwanted image in input image dataset. Finally, a novel RNN forclassifying and detecting breast cancer\u0000 using Auto Encoder (AE) based RNN for feature extraction by integrating Animal Migration Optimization (AMO) for tuning the parameters of RNN model, then softmax classifier use RNN algorithm. 
Experimental results are conducted using Mini-Mammographic (MIAS) dataset of breast cancer. The classifiers\u0000 are measured through measures like precision, recall, f-measure and accuracy.","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129032474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The field of nanotechnology has lately acquired prominence owing to the raised level of correct identification and performance in patients using Computer-Aided Diagnosis (CAD). A nano-scale imaging model enables a high level of precision and accuracy in determining whether a brain tumour is malignant or benign, which contributes to a better standard of living for people with brain tumours. In this study, we present a new semantic nano-segmentation methodology for the nanoscale classification of brain tumours. The suggested Advanced Convolutional Neural Network (A-CNN) based semantic nano-segmentation, which employs ResNet-50, will aid radiologists in detecting brain tumours even when lesions are minor. The input is a nano-image, and the tumour image is segmented using semantic nano-segmentation, with averaged Dice and SSIM values of 0.9704 and 0.2133, respectively. The suggested semantic nano-segmentation achieves 93.2% and 92.7% accuracy for benign and malignant tumour pictures, respectively, and the A-CNN methodology achieves correct segmentation accuracy of 99.57% and 95.7% for malignant and benign pictures, respectively. This unique nano-method is designed to detect tumour areas in nanometers (nm) and hence accurately assess the illness. With regard to True Positive values, the ROC curve implies that the suggested technique outperforms earlier approaches. A comparative analysis is conducted on ResNet-50 using testing and training data splits of 90%–10%, 80%–20%, and 70%–30%, respectively, indicating the utility of the suggested work.
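The Dice value reported above is the standard overlap measure between a predicted mask and the ground truth. As a generic sketch (not the paper's code; the epsilon term is an illustrative guard against empty masks):

```python
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return (2.0 * intersection + eps) / (a.sum() + b.sum() + eps)
```

Identical masks score (approximately) 1.0, disjoint masks score near 0, so the 0.9704 average above indicates very close agreement with the reference segmentation.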
{"title":"Diagnosis of Brain Tumor Using Nano Segmentation and Advanced-Convolutional Neural Networks Classification","authors":"P. Deepa, S. Jawhar, J. M. Geisa","doi":"10.1166/jmihi.2021.3891","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3891","url":null,"abstract":"The field of nanotechnology has lately acquired prominence according to the raised level of correct identification and performance in the patients using Computer-Aided Diagnosis (CAD). Nano-scale imaging model enables for a high level of precision and accuracy in determining if a brain\u0000 tumour is malignant or benign. This contributes to people with brain tumours having a better standard of living. In this study, We present a revolutionary Semantic nano-segmentation methodology for the nanoscale classification of brain tumours. The suggested Advanced-Convolutional Neural Networks-based\u0000 Semantic Nano-segmentation will aid radiologists in detecting brain tumours even when lesions are minor. ResNet-50 was employed in the suggested Advanced-Convolutional Neural Networks (A-CNN) approach. The tumour image is partitioned using Semantic Nano-segmentation, that has averaged dice\u0000 and SSIM values of 0.9704 and 0.2133, correspondingly. The input is a nano-image, and the tumour image is segmented using Semantic Nano-segmentation, which has averaged dice and SSIM values of 0.9704 and 0.2133, respectively. The suggested Semantic nano segments achieves 93.2 percent and 92.7\u0000 percent accuracy for benign and malignant tumour pictures, correspondingly. For malignant or benign pictures, The accuracy of the A-CNN methodology of correct segmentation is 99.57 percent and 95.7 percent, respectively. This unique nano-method is designed to detect tumour areas in nanometers\u0000 (nm) and hence accurately assess the illness. The suggested technique’s closeness to with regard to True Positive values, the ROC curve implies that it outperforms earlier approaches. 
A comparison analysis is conducted on ResNet-50 using testing and training data at rates of 90%–10%,\u0000 80%–20%, and 70%–30%, corresponding, indicating the utility of the suggested work.","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129109547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The human brain can be viewed using MRI images, but these images are useful for physicians only if their quality is good. We propose a new method called Contourlet Based Two Stage Adaptive Histogram Equalization (CBTSA) that uses the Nonsubsampled Contourlet Transform (NSCT) for smoothing images and adaptive histogram equalization (AHE), applied at two stages, for enhancement of low-contrast MRI images. The given MRI image is divided into equal-sized sub-images and NSCT is applied to each of them. AHE is imposed on each resulting sub-image, all processed sub-images are merged, and AHE is applied again to the merged image. The clarity of the output image obtained by our method outperformed that produced by traditional methods. The quality was measured and compared using criteria such as Entropy, Absolute Mean Brightness Error (AMBE), and Peak Signal to Noise Ratio (PSNR).
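The equalization step at the core of the pipeline above remaps intensities through the image's cumulative histogram. As a minimal global (non-adaptive) sketch for 8-bit images, illustrative only and much simpler than the two-stage CBTSA scheme:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image via its CDF."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Normalize the CDF to [0, 1], then scale to the full 8-bit range
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = (cdf * 255).astype(np.uint8)
    return lut[img]
```

AHE applies the same idea per local region; CBTSA additionally interleaves the NSCT smoothing and runs the equalization twice. A low-contrast image (values clustered around one level) is stretched toward the full range.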
{"title":"Enhancing MRI Brain Images Using Contourlet Transform and Adaptive Histogram Equalization","authors":"J. Murugachandravel, S. Anand","doi":"10.1166/jmihi.2021.3906","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3906","url":null,"abstract":"Human brain can be viewed using MRI images. These images will be useful for physicians, only if their quality is good. We propose a new method called, Contourlet Based Two Stage Adaptive Histogram Equalization (CBTSA), that uses Nonsubsampled Contourlet Transform (NSCT)\u0000 for smoothing images and adaptive histogram equalization (AHE), under two occasions, called stages, for enhancement of the low contrast MRI images. The given MRI image is fragmented into equal sized sub-images and NSCT is applied to each of the sub-images. AHE is imposed on each resultant\u0000 sub-image. All processed images are merged and AHE is applied again to the merged image. The clarity of the output image obtained by our method has outperformed the output image produced by traditional methods. The quality was measured and compared using criteria like, Entropy, Absolute Mean\u0000 Brightness Error (AMBE) and Peak Signal to Noise Ratio (PSNR).","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115415757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mathematical Modeling of Enhanced Whale Optimization Based Power Quality Enhancement Using Unified Power Quality Conditioner for Implantable Biomedical Devices","authors":"T. Arulkumar, N. Chandrasekaran","doi":"10.1166/jmihi.2021.3900","DOIUrl":"https://doi.org/10.1166/jmihi.2021.3900","url":null,"abstract":"Implantable biomedical systems that enable the majority of the functions of wireless implantable devices have made significant progress in recent years. Nonetheless, due to limited miniaturization, power distribution limits, and the unavailability of a stable link between implants and\u0000 external devices, such systems are primarily limited to investigation. Generating electricity from natural sources and human body movement for implantable biomedical devices has emerged as a viable option. Nowadays, energy sources become the emerging use of electricity grid which has formed\u0000 new challenges for the effectiveness of power quality, efficient energy utilization and voltage stabilization for biomedical applications. Power quality in the implementation of the smart grid in biomedical devices is regarded to be the most problematic. APFs (Active Power Filter) are preferred\u0000 to reward the related problems, mainly because they can quickly filter out of the PQ and are a dynamic compensation. The UPQC with a PI control unit with DC source to be converted to a three stage inverter based on Enhanced Whale Optimization Algorithm (EWOA) was precisely implemented in the\u0000 article in order to eliminate voltage and current harmonics inadequate. Similarly, UPQC also used the Enhanced Whale Optimization Algorithm (EWOA). In this approach, UPQC along with EWOA (Enhanced whale optimization) has been introduced for voltage and current harmonics elimination defect\u0000 specifically. Similarly, EWOA was too implemented with UPQC. 
UPQC & EWOA conducted a performance estimate by estimating a simulation, results on comparing the parameters of THD levels, load current and voltage. The performance estimate is also used and the results achieved are shown. In\u0000 order to analyze THD values and validate the system performance, performance estimates are built and compared with THD values, load voltage and current parameters.","PeriodicalId":393031,"journal":{"name":"J. Medical Imaging Health Informatics","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131391891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
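The record above validates the UPQC-EWOA system by comparing THD (total harmonic distortion) levels. As a reference point only (not the authors' simulation), the standard THD definition — RMS of the harmonic magnitudes divided by the fundamental magnitude — can be computed from a sampled waveform via a direct DFT:

```python
import math

def thd(samples, fundamental_bin, n_harmonics=5):
    """Total harmonic distortion: RMS of harmonics 2..n+1 over the fundamental."""
    n = len(samples)

    def magnitude(k):
        # DFT magnitude of the real signal at frequency bin k
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        return math.hypot(re, im)

    v1 = magnitude(fundamental_bin)
    harmonics = [magnitude(fundamental_bin * h) for h in range(2, n_harmonics + 2)]
    return math.sqrt(sum(m * m for m in harmonics)) / v1
```

A pure sine yields a THD near zero, and adding a 10% third harmonic yields a THD of 0.1; a harmonic-eliminating conditioner such as the UPQC described above aims to drive this figure down on the load voltage and current.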