Pub Date: 2023-05-05 | DOI: 10.4015/s1016237223500114
F. Moayedi, J. Karimi, Seyed Ebrahim Dashti
Colon cancer is one of the most widespread cancers in the world and is responsible for roughly 10% of cancer deaths. Predicting the onset of cancer, and the causes of its development in these patients, can be of enormous help and relief to those affected, as it allows them to return to a "normal" life. Data mining and machine learning are important intelligent tools for classification, prediction and the extraction of hidden relations in patient information. We collected data from Shahid Faghihi Hospital in Shiraz. The collected features include gender, age, duration of disease before surgery, frequency of bathroom use, use of the anti-inflammatory drug prednisolone together with its duration and dosage, type of surgery, number of consultations and reoperations, incontinence, and others. After pre-processing and data-cleaning stages, effective features were extracted and the occurrence of cancer was predicted using different classification algorithms. Association rule mining algorithms such as Apriori were then used to uncover hidden internal relations between entries. Among the evaluated classifiers, the support vector machine achieved the highest prediction accuracy (84%); because the dataset was imbalanced, a cost-sensitive support vector machine was chosen. In addition, after applying the Apriori algorithm, the conditions associated with non-inflammation were extracted from the dataset features. Some significant outcomes follow. If the time since surgery or diagnosis was less than 5 years, the probability of developing colon cancer is lower. Also, as the duration of the disease increases, the probability of reoperation increases, as confirmed by the internists. Since this problem, with these features, was raised for the first time in this paper at the suggestion of internists, early detection of cancer and the extraction of effective rules can help the medical community. In future work, the dataset will be improved in terms of the number of samples and the addition of colonoscopy image features to obtain higher accuracy.
Title: CANCER PREDICTION IN INFLAMMATORY BOWEL DISEASE PATIENTS BY USING MACHINE LEARNING ALGORITHMS
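The abstract above reports a cost-sensitive support vector machine as the best-performing classifier on an imbalanced clinical table. A minimal sketch of that idea follows, assuming scikit-learn and an already cleaned feature table; the file name ibd_patients.csv and the "cancer" label column are hypothetical placeholders, not from the paper.

```python
# Hedged sketch: cost-sensitive SVM on an imbalanced clinical dataset.
# "ibd_patients.csv" and the "cancer" label column are illustrative placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv("ibd_patients.csv")                  # pre-processed patient features
X, y = df.drop(columns=["cancer"]), df["cancer"]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight="balanced" weights errors on the minority (cancer) class more
# heavily, which is one common way to realize a cost-sensitive SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```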
Pub Date: 2023-05-05 | DOI: 10.4015/s1016237223500102
Nabil K. Al Shamaa, R. A. Fayadh, M. Wali
The detection of sleep is important because drowsiness while driving, especially at high levels of deep sleep, contributes to many road accidents. Sleep detection can be based on the electrooculogram (EoG) signal, since sleep causes various changes in this signal. Drivers travelling for long hours, especially those working in the transportation field, are more likely to fall asleep in the middle of their journey. To avoid this situation, drivers can be aided by a system that monitors their condition through the interaction between a driving simulator and the subject's EoG signal, as many sleep detection devices rely on eye behavior and movement as well as pupil size and eye closure over certain periods. To address the problem of detecting sleep while driving, this work extracted different features from the EoG signal, specifically from its frequency ranges (0–25 Hz) and (25–37.5 Hz), using the discrete wavelet transform. In this research, 15 subjects were placed in a driving environment for more than 1 h to collect sleep EoG data with low-power sensors. The EoG signal was recorded using a Cobra3 data acquisition set, and several features (minimum, maximum, mean, standard deviation (SD), mode, energy, median and variance) were extracted using the discrete wavelet transform. These features were used to classify three sleep-level classes with a support vector machine (SVM). The classifier fuses the above features and achieves an accuracy of 78% for high-level sleep detection based on the db4 wavelet.
Title: EoG COMMUNICATION SIGNAL FOR SLEEP LEVEL DETECTION
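As an illustration of the feature-extraction step described above (db4 discrete wavelet decomposition followed by per-sub-band statistics and an SVM), here is a minimal sketch assuming PyWavelets, SciPy and scikit-learn; the decomposition level, epoch layout and the rounding used for the mode are assumptions, not values from the paper.

```python
# Hedged sketch: db4 wavelet sub-band statistics for an EoG epoch, fed to an SVM.
# The decomposition level and the rounding used for the mode are assumptions.
import numpy as np
import pywt
from scipy import stats
from sklearn.svm import SVC

def eog_features(epoch, wavelet="db4", level=3):
    """Min/max/mean/SD/mode/energy/median/variance for each wavelet sub-band."""
    feats = []
    for band in pywt.wavedec(epoch, wavelet, level=level):
        feats += [band.min(), band.max(), band.mean(), band.std(),
                  stats.mode(np.round(band, 2), keepdims=False).mode,
                  np.sum(band ** 2),                     # energy
                  np.median(band), band.var()]
    return np.array(feats)

# epochs: (n_epochs, n_samples) array of EoG segments, y: sleep-level labels
# X = np.vstack([eog_features(e) for e in epochs])
# clf = SVC(kernel="rbf").fit(X, y)
```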
Pub Date: 2023-05-05 | DOI: 10.4015/s1016237223500126
Alka Singh, V. Gopi, Anju Thomas, Omkar Singh
Coronavirus Disease 2019 (COVID-19) is a severe illness affecting the respiratory systems of animals and humans. By 2020, this sickness had become a pandemic, affecting millions worldwide. Preventing the spread of the virus by rapidly testing large numbers of suspected cases has become difficult. Recently, many deep learning-based methods have been developed to automatically detect COVID-19 infection from chest Computed Tomography (CT) images of the lungs. This paper proposes a novel dual-scale Convolutional Neural Network (CNN) architecture to detect COVID-19 from CT images. The network consists of two different convolutional blocks. Each path is similarly constructed with multi-scale feature extraction layers. The primary path consists of six convolutional layers. The extracted features from the multipath networks are flattened with the help of dropout, and these relevant features are concatenated. The sigmoid function is used as the classifier to identify whether the input image is diseased. The proposed network obtained an accuracy of 99.19%, with an Area Under the Curve (AUC) value of 0.99. The proposed network has a lower computational cost than existing methods in terms of learnable parameters, the number of FLOPs, and memory requirements. The proposed CNN model inherits the benefits of densely linked paths and residuals by utilizing effective feature-reuse methods. According to our experiments, the proposed approach outperforms previous algorithms and achieves state-of-the-art results.
Title: DUAL-SCALE CNN ARCHITECTURE FOR COVID-19 DETECTION FROM LUNG CT IMAGES
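The abstract describes a dual-scale CNN with two parallel convolutional paths whose flattened features are concatenated and passed to a sigmoid classifier. Below is a minimal Keras sketch of that layout under stated assumptions: the 224x224 grayscale input, filter counts, kernel sizes and the four-layer second path are illustrative choices, not the paper's exact architecture.

```python
# Hedged sketch of a two-path ("dual-scale") CNN: two parallel convolutional
# branches with different kernel sizes, flattened, concatenated and classified
# with a sigmoid. Layer counts, filters and input size are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_branch(x, kernel_size, n_layers, filters=32):
    for _ in range(n_layers):
        x = layers.Conv2D(filters, kernel_size, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(2)(x)
        filters *= 2
    return layers.Dropout(0.5)(layers.Flatten()(x))

inp = layers.Input(shape=(224, 224, 1))                        # grayscale CT slice
merged = layers.concatenate([conv_branch(inp, 3, n_layers=6),  # finer-scale path
                             conv_branch(inp, 5, n_layers=4)]) # coarser-scale path
out = layers.Dense(1, activation="sigmoid")(merged)            # COVID vs. non-COVID
model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```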
Pub Date: 2023-05-04 | DOI: 10.4015/s1016237223500060
Farhad Abedinzadeh Torghabeh, Yeganeh Modaresnia, Mohammad Mahdi khalilzadeh
Alzheimer’s disease (AD) is the leading worldwide cause of dementia. It is a common brain disorder that significantly impacts daily life and slowly progresses from moderate to severe. Due to inaccuracy, lack of sensitivity, and imprecision, existing classification techniques are not yet a standard clinical approach. This paper proposes utilizing a Convolutional Neural Network (CNN) architecture to classify AD based on MRI images. Our primary objective is to use the capabilities of pre-trained CNNs to classify and predict dementia severity and to serve as an effective decision support system for physicians in predicting the severity of AD based on the degree of dementia. The standard Kaggle dataset is used to train and evaluate the dementia classification model. The Synthetic Minority Oversampling Technique (SMOTE) tackles the primary problem with the dataset, the disparity across classes. VGGNet16 with ReduceLROnPlateau is fine-tuned and assessed using testing data consisting of four stages of dementia, and achieves an overall accuracy of 98.61% and a specificity of 99% for multiclass classification, which is superior to current approaches. By selecting an appropriate Initial Learning Rate (ILR) and scheduling it during the training phase, the proposed method has the benefit of causing the model to converge on local optima with better performance.
Title: EFFECTIVENESS OF LEARNING RATE IN DEMENTIA SEVERITY PREDICTION USING VGG16
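A minimal sketch of the training recipe named above (SMOTE class balancing, a fine-tuned VGG16 head, and a ReduceLROnPlateau schedule applied to the initial learning rate) follows, assuming TensorFlow/Keras and imbalanced-learn; the 176x176 input size, head width and schedule parameters are assumptions, not the paper's settings.

```python
# Hedged sketch: SMOTE-balanced fine-tuning of VGG16 with ReduceLROnPlateau.
# Image size, class count (4 dementia stages) and hyper-parameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.callbacks import ReduceLROnPlateau
from imblearn.over_sampling import SMOTE

def balance(X, y):
    """SMOTE works on flat vectors: flatten images, resample, reshape back."""
    Xf, yb = SMOTE(random_state=0).fit_resample(X.reshape(len(X), -1), y)
    return Xf.reshape(-1, *X.shape[1:]), yb

base = VGG16(weights="imagenet", include_top=False, input_shape=(176, 176, 3))
x = layers.Dense(256, activation="relu")(layers.Flatten()(base.output))
model = Model(base.input, layers.Dense(4, activation="softmax")(x))
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),   # initial learning rate (ILR)
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Halve the learning rate when validation loss stops improving.
lr_schedule = ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=3)
# Xb, yb = balance(X_train, y_train)
# model.fit(Xb, yb, validation_split=0.1, epochs=30, callbacks=[lr_schedule])
```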
Pub Date: 2023-04-28 | DOI: 10.4015/s1016237223500096
R. Ahalya, U. Snekhalatha, Palani Thanaraj Krishnan
The study aims to develop a computerized hybrid model using artificial intelligence (AI) for the detection of rheumatoid arthritis (RA) from hand radiographs. The objectives of the study are (i) segmentation of the proximal interphalangeal (PIP) and metacarpophalangeal (MCP) joints using a deep learning (DL) method, with features extracted using a handcrafted feature extraction technique, and (ii) classification of RA and non-RA participants using machine learning (ML) techniques. In the proposed study, the hand radiographs are resized to [Formula: see text] pixels and pre-processed using various image processing techniques such as sharpening, median filtering, and adaptive histogram equalization. The segmentation of the finger joints is carried out using the U-Net model, and the segmented binary image is converted to a grayscale image using the subtraction method. Features are extracted using the Harris feature extractor, and classification is performed using Random Forest and AdaBoost ML classifiers. The study included 50 RA patients and 50 normal subjects for the evaluation of RA. Data augmentation is performed to increase the number of images for the U-Net segmentation technique. For the classification of RA and healthy subjects, the Random Forest classifier obtained an accuracy of 91.25%, whereas the AdaBoost classifier had an accuracy of 90%. Thus, the hybrid model using a Random Forest classifier can be used as an effective system for the diagnosis of RA.
Title: HYBRID AI MODEL FOR THE DETECTION OF RHEUMATOID ARTHRITIS FROM HAND RADIOGRAPHS
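The handcrafted half of the pipeline (Harris features computed on segmented joint patches, classified with Random Forest and AdaBoost) could look roughly like the sketch below, assuming OpenCV and scikit-learn; the summary statistics taken from the Harris response map are illustrative choices, not the paper's exact feature set.

```python
# Hedged sketch: Harris corner response statistics per segmented joint patch,
# classified with Random Forest / AdaBoost. The chosen statistics are assumptions.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

def harris_features(gray_patch):
    """Summarize the Harris response map of a grayscale joint patch."""
    response = cv2.cornerHarris(np.float32(gray_patch), 2, 3, 0.04)
    return [response.mean(), response.std(), response.max(),
            (response > 0.01 * response.max()).sum()]   # count of strong corners

# patches: grayscale joint images from the U-Net stage; labels: 1 = RA, 0 = non-RA
# X = np.array([harris_features(p) for p in patches])
# rf  = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
# ada = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, labels)
```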
Pub Date: 2023-04-27 | DOI: 10.4015/s1016237223500047
Thanakorn Phumkuea, Phurich Nilvisut, T. Wongsirichot, Kasikrit Damkliang
Malaria is a life-threatening mosquito-borne disease. Recently, the number of malaria cases has increased worldwide, threatening vulnerable populations. Malaria is responsible for a high rate of morbidity and mortality in people all around the world; according to the World Health Organization (WHO), many people die from this disease each year. Thick and thin blood smears are used to determine parasite presence, and computer-aided diagnosis (CADx) techniques using machine learning (ML) are being applied to assist. CADx reduces traditional diagnosis time, lessens the socio-economic impact, and improves quality of life. This study develops a simplified model with selective features to reduce processing power and further shorten diagnostic time, which is important in resource-constrained areas. To improve overall classification results, we use a decision tree (DT)-based approach with image pre-processing to identify optimal features. Various feature selection and extraction techniques are used, including information gain (IG). Our proposed model is compared to a benchmark state-of-the-art classification model. For an unseen dataset, our proposed model achieves accuracy, precision, recall, F-score, and processing time of 0.956, 0.949, 0.964, 0.956, and 9.877 s, respectively. Furthermore, our proposed model’s training time is less than that of the state-of-the-art classification model, while the performance metrics are comparable.
Title: A NEW COMPUTER-AIDED DIAGNOSIS OF PRECISE MALARIA PARASITE DETECTION IN MICROSCOPIC IMAGES USING A DECISION TREE MODEL WITH SELECTIVE OPTIMAL FEATURES
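As a sketch of the decision-tree-with-information-gain idea described above, the snippet below ranks features by mutual information (an information-gain estimate) and trains an entropy-based decision tree with scikit-learn; the feature matrix, top-k value and tree depth are assumptions for illustration.

```python
# Hedged sketch: information-gain feature selection followed by an entropy-based
# decision tree. The feature matrix X and the k/depth values are assumptions.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# X: (n_cells, n_features) hand-crafted image descriptors, y: 1 = parasitized, 0 = uninfected
model = make_pipeline(
    SelectKBest(mutual_info_classif, k=10),          # keep the 10 most informative features
    DecisionTreeClassifier(criterion="entropy",      # split on information gain
                           max_depth=8, random_state=0),
)
# model.fit(X_train, y_train)
# print(model.score(X_test, y_test))
```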
Pub Date: 2023-04-27 | DOI: 10.4015/s1016237223500059
Shokufeh Akbari, Faraz Edadi Ebrahimi, Mehdi Rajabioun
The world currently confronts a highly infectious pandemic, coronavirus disease (COVID-19), and over 4 million people worldwide have died from this illness. Early detection of COVID-19, and distinguishing it from other diseases with the same physical symptoms, can therefore give enough time for treatment with true-positive results and prevent coma or death. Several methods have been proposed for early recognition of COVID-19 in each modality. Although several modalities exist for COVID-19 detection, electrocardiography (ECG) is one of the fastest, most accessible, cheapest and safest. This paper proposes a new method for distinguishing COVID-19 patients from patients with other cardiovascular diseases using ECG signals. In the proposed method, ResNet50V2, a type of convolutional neural network, is used for classification. Because the data are provided in image format, the images are first fed to the network directly; for comparison, the ECG images are then converted to signal format and classified again. These two strategies are applied to the classification of COVID-19 against other cardiac abnormalities with different filter sizes, and their results are compared with each other and with other methods in this field. The results show that, at their best performance, signal-based data give better accuracy than image classification, so it is preferable to convert the images to signals before classification. Moreover, in comparison with other methods in this field, the proposed method gives better performance with high accuracy in COVID-19 classification.
Title: DETECTION AND CLASSIFICATION OF COVID-19 CASES FROM OTHER CARDIOVASCULAR CLASSES FROM ELECTROCARDIOGRAPHY SIGNALS USING DEEP LEARNING AND ResNet NETWORK
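For the image-format strategy described above, a minimal ResNet50V2 transfer-learning sketch in Keras might look as follows; the 224x224 input, frozen backbone, dropout rate and binary head are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: ResNet50V2 transfer learning on ECG trace images.
# Input size, head layers and the binary COVID/other-cardiac split are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50V2

base = ResNet50V2(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                                 # train only the new head first
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)
out = layers.Dense(1, activation="sigmoid")(x)         # COVID-19 vs. other cardiac classes
model = Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```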
Pub Date: 2023-04-01 | DOI: 10.4015/s101623722250051x
K. Dhandapani, P. Vinupritha, D. Parimala, E. J. Eucharista
Background: Osteoporosis results in an increased risk of fracture among aging women. Bone health is strongly connected with tooth loss, menopause, diet, BMI and hysterectomy. Purpose: To study the relationship of heel BMD with age, BMI, menopausal status, hysterectomy and tooth loss among people living in the Chennai metropolitan area. Materials and Methods: The study involved ([Formula: see text], age: [Formula: see text] years) women, including women with normal BMD ([Formula: see text] = 35, age: [Formula: see text] years), osteopenia ([Formula: see text], age: [Formula: see text] years) and osteoporosis ([Formula: see text], age: [Formula: see text] years). All participants underwent BMD assessment at the right heel using an ultrasound densitometer system (Model: CM-200, Manufacturer: FURUNO ELECTRIC CO. LTD., Japan). The subjects were classified into subgroups based on BMD, age, menopausal status, hysterectomy and tooth loss. Results: The mean ages of women attaining menopause and of those undergoing hysterectomy were [Formula: see text] years and [Formula: see text] years, respectively. The decrease in heel BMD was most prominent among women with more than two teeth extracted, menopause or hysterectomy. Approximately 90% of the studied population suffered from either osteopenia or osteoporosis in the post-menopausal period. Conclusion: Women aged above 50 years are at greater risk of osteoporosis due to the post-menopausal phase, a high probability of undergoing hysterectomy, and tooth loss. Therefore, women should ensure sufficient consumption of a calcium-rich diet throughout their life cycle to maintain good health.
Title: INVESTIGATIONS ON OSTEOPOROTIC FRACTURE RISK ASSESSMENT AMONG SOUTH INDIAN WOMEN
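The subjects above are grouped by BMD into normal, osteopenia and osteoporosis. A tiny sketch of the conventional WHO T-score cut-offs often used for such grouping follows; the paper does not state its exact criteria, so these thresholds are an assumption, not the study's definition.

```python
# Hedged sketch of the standard WHO T-score grouping; the study's own cut-offs
# are not given in the abstract, so these thresholds are an assumption.
def bmd_group(t_score: float) -> str:
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

# Example: a heel T-score of -1.8 falls in the osteopenia group.
print(bmd_group(-1.8))   # -> "osteopenia"
```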
Pub Date: 2023-03-14 | DOI: 10.4015/s1016237223500023
M. S. Fathimal, S. P. A. Kirubha, A. Jeya Prabha, S. Jothiraj
Diabetes mellitus (DM) indicates an elevated glucose concentration in the blood. In type 1 diabetes, the pancreas produces inadequate insulin, whereas in type 2 diabetes, the body is unable to utilize the insulin that is present. Insulin is required to transport glucose into the cells. Insulin resistance in the cells causes the blood glucose level to increase. At present, the clinical methods available to diagnose DM are invasive. The diagnosis of DM is done by either pricking the fingertip or drawing blood from a vein, followed by the quantification of blood glucose in terms of [Formula: see text]. Continuous monitoring is limited because the skin is punctured or venous blood is drawn. Spectroscopic analysis of hair, nails, saliva and urine has the potential to differentiate hyperglycaemic from healthy subjects, facilitating non-intrusive diagnosis of diabetes. The variation in the incident wavelength following interaction with the sample is measured by a spectrometer. Depending on the energy of the excitation source, the molecular structures present in the sample will either vibrate or absorb and emit photons, producing a spectrum. Samples were collected from both groups of subjects and pre-processed prior to further examination. The samples were then characterized using Fourier-transform infrared (FTIR) spectroscopy. The spectral output was pre-processed, filtered and analyzed to discriminate between diabetic and healthy subjects. Although the spectral bands of the nail and hair samples appear to be identical, a difference in amplitude was observed between diabetic and normal subjects at 1450, 1520, 1632 and 2925 cm-1. The area under the curve (AUC) in the range of 3600 to 3100 cm-1 is a prominent marker in the discrimination. The peak wavelength and AUC were utilized as biomarkers to discriminate between diabetic and normal individuals.
Title: AN OPTICAL APPROACH FOR BLOODLESS, IN-VITRO AND NON-INVASIVE GLUCOSE MONITORING
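Since the abstract highlights the area under the absorbance curve over 3600 to 3100 cm-1 as the key discriminating marker, here is a minimal NumPy sketch of that band integration; the variable names and the hypothetical spectrum file are illustrative, not from the paper.

```python
# Hedged sketch: integrate the FTIR absorbance over the 3600-3100 cm-1 band
# with the trapezoidal rule. The spectrum file name is a hypothetical placeholder.
import numpy as np

def band_auc(wavenumbers, absorbance, lo=3100.0, hi=3600.0):
    """Area under the absorbance curve between two wavenumbers (cm-1)."""
    mask = (wavenumbers >= lo) & (wavenumbers <= hi)
    order = np.argsort(wavenumbers[mask])      # ensure ascending x before integrating
    return np.trapz(absorbance[mask][order], wavenumbers[mask][order])

# wn, ab = np.loadtxt("nail_spectrum.csv", delimiter=",", unpack=True)  # hypothetical file
# print(band_auc(wn, ab))
```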
Pub Date: 2023-03-14 | DOI: 10.4015/s1016237222500557
J. G. Precious, S. P. A. Kirubha, R. Premkumar, I. K. Evangeline
Brain tumors are among the most destructive and deadly diseases. In general, various imaging modalities such as CT, MRI and PET are used to evaluate brain tumors. Magnetic resonance imaging (MRI) is a prominent diagnostic method for evaluating these tumors. Gliomas, due to their malignant nature and rapid development, are the most common and aggressive form of brain tumor. In clinical routine, delineating tumor borders from healthy tissue is still a difficult task. Manual segmentation takes time, so we use a deep convolutional neural network to improve efficiency. We present a combined DNN architecture using U-Net and MobileNetV2. It exploits both local characteristics and more global contextual characteristics from 2D MRI FLAIR images. The proposed network has an encoder-decoder architecture. Performance metrics such as Dice loss, Dice coefficient, accuracy and IoU have been calculated. Automated segmentation of 3D MRI is essential for the identification, assessment, and treatment of brain tumors, and there is significant interest in machine-learning algorithms for computerized segmentation of brain tumors. The goal of this work is to perform 3D volumetric segmentation using BraTumIA, a widely available software application used to separate tumor compartments on 3D brain MR volumes; BraTumIA has lately been used in a number of clinical trials. In this work, we have segmented 2D slices and 3D volumes of MRI brain tumor images.
Title: AUTOMATIC 2D AND 3D SEGMENTATION OF GLIOBLASTOMA BRAIN TUMOR
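The abstract reports Dice loss and Dice coefficient among the segmentation metrics. A short TensorFlow sketch of those two quantities follows; the smoothing constant is a common convention rather than a value from the paper.

```python
# Hedged sketch of the Dice coefficient and Dice loss used as segmentation
# metrics; smooth=1.0 is a common convention, not a value from the paper.
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1.0):
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

# model.compile(optimizer="adam", loss=dice_loss, metrics=[dice_coefficient])
```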