
2020 Medical Technologies Congress (TIPTEKNO): Latest Publications

Wireless ECG Device with Arduino
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299248
Halil Güvenç
Electrocardiography is the process of recording the heartbeat. The output is typically represented as a scaled graph called an electrocardiogram (ECG). In this study, we present an experimental device that acquires the ECG signal using an AD8232 sensor board. The device operates in real time and transmits data wirelessly using nRF24L01+ RF modules mounted on Arduino Mega2560 I/O boards. The received ECG data were filtered and processed with Matlab.
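As a companion illustration, the sketch below shows how the received ECG stream could be band-pass filtered in Python; the paper performs this step in Matlab, and the sampling rate, cutoff frequencies, and file name here are assumptions rather than values taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # assumed sampling rate of the AD8232 stream, in Hz

def bandpass_ecg(samples, low=0.5, high=40.0, order=4):
    """Zero-phase band-pass filter to suppress baseline wander and high-frequency noise."""
    nyq = FS / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, samples)

# Hypothetical usage on samples received over the nRF24L01+ link
raw = np.loadtxt("ecg_samples.txt")  # assumed dump of the received samples
clean = bandpass_ecg(raw)
```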
Citations: 5
Detecting Alzheimer Disease on FDG PET Images Using a Similarity Index Based on Mutual Information
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299268
E. Polat, A. Güvenis
Mutual information is an image similarity metric often used for the robust registration of multimodality images. The aim of this study is to investigate a simple-to-implement similarity computation method based on a mutual information index for the automated detection of Alzheimer's disease from FDG PET studies. 102 healthy and 95 Alzheimer's disease FDG PET patient images from the online Alzheimer's Disease Neuroimaging Initiative (ADNI) database were used to develop and test the system. Images were preprocessed to enable comparison. An index was computed for each new image based on its degree of similarity to images belonging to AD patients versus healthy control subjects. Classification was made based on the value of this index. The leave-one-out method was used for performance evaluation. Performance was evaluated using Receiver Operating Characteristic (ROC) curves. The diagnostic reliability given by the area under the curve (AUC) was determined as 0.857 ± 0.0261. The results suggest that a mutual information based image similarity method can potentially be useful as a second-opinion computer-aided diagnostic (CAD) system providing verification for visual and black-box approaches. The system does not need training with new data and does not require the computation of image features.
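For intuition, here is a minimal Python sketch of a mutual-information-based similarity index. The abstract does not give the exact index formula, so the definition below (mean MI against AD reference images minus mean MI against healthy references) is an assumed, illustrative choice, not the authors' method.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two same-size images via their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def similarity_index(query, ad_refs, healthy_refs):
    """Assumed index: mean MI to AD references minus mean MI to healthy references."""
    mi_ad = np.mean([mutual_information(query, r) for r in ad_refs])
    mi_hc = np.mean([mutual_information(query, r) for r in healthy_refs])
    return mi_ad - mi_hc   # positive -> closer to the AD group
```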
Citations: 0
TIPTEKNO 2020 TOC
Pub Date : 2020-11-19 DOI: 10.1109/tiptekno50054.2020.9299230
{"title":"TIPTEKNO 2020 TOC","authors":"","doi":"10.1109/tiptekno50054.2020.9299230","DOIUrl":"https://doi.org/10.1109/tiptekno50054.2020.9299230","url":null,"abstract":"","PeriodicalId":426945,"journal":{"name":"2020 Medical Technologies Congress (TIPTEKNO)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120963079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Based Facial Emotion Recognition System
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299256
Mehmet Akif Ozdemir, Berkay Elagoz, Aysegul Alaybeyoglu Soy, A. Akan
In this study, the aim was to recognize the emotional state from facial images using a deep learning method. In the study, which was approved by the ethics committee, a custom data set was created using videos taken from 20 male and 20 female participants while they simulated 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the recorded videos were divided into image frames, and face images were then segmented from these frames using the Haar library. After image preprocessing, the custom data set contains more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.
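As a rough illustration of a LeNet-style classifier for the seven expression classes, a Keras sketch follows; the input size (48×48 grayscale), filter counts, and training settings are assumptions, since the abstract only states that the network mimics LeNet.

```python
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_classes=7):
    """LeNet-style CNN for the seven facial expression classes (sizes are assumed)."""
    return models.Sequential([
        layers.Conv2D(6, 5, activation="relu", input_shape=input_shape),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_emotion_cnn()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```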
Citations: 4
Firefly Algorithm Based Feature Selection for EEG Signal Classification
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299273
Ebru Ergün, O. Aydemir
Brain-computer interfaces (BCIs) recognize specific features of a person's brain signal relating to his or her intent, and output a control command that controls external devices or computers. BCI systems facilitate the lives of patients who cannot move any muscles but have no cognitive disorder. The high dimensionality of the feature space remains a research challenge. In recent years, nature-inspired heuristic optimization algorithms in particular have become popular for eliminating unnecessary features. This paper addresses a crucial factor for the effective classification of motor imagery based EEG signals: the optimal selection of relevant EEG features using the firefly algorithm. The firefly algorithm (FA) works on the principle that, as in nature, fireflies emitting less light are drawn toward brighter ones. The algorithm can adaptively select the best subset of features and improve classification accuracy. In this study, after Katz fractal dimension based features were extracted, the effective feature(s) were selected by FA. The proposed method was successfully applied to an open-access dataset collected from 29 subjects. We obtained an average classification accuracy (CA) of 76.14% using a k-nearest neighbor classifier. This is 4.4% higher than the CA calculated using all features. These results show that the method is robust on this dataset.
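The Katz fractal dimension feature mentioned above can be computed as in the sketch below, using one common formulation that takes successive amplitude differences as step lengths; the firefly-based selection wrapper and the kNN evaluation are omitted, and nothing here is taken from the authors' code.

```python
import numpy as np

def katz_fd(signal):
    """Katz fractal dimension of a 1-D signal, with amplitude differences as step lengths."""
    x = np.asarray(signal, dtype=float)
    steps = np.abs(np.diff(x))
    L = steps.sum()                  # total "length" of the waveform
    d = np.max(np.abs(x - x[0]))     # maximum distance from the first sample
    n = steps.size                   # number of steps, i.e. L divided by the mean step
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Hypothetical usage: one Katz FD value per EEG channel of a trial
# features = np.array([katz_fd(trial[ch]) for ch in range(trial.shape[0])])
```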
Citations: 2
The Effect of Photobiomodulation with Red and Near-Infrared Wavelengths on Keratinocyte Cells
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299214
Merve Özdemir, Ziyşan Buse Yaralı Çevik, N. Topaloglu
Photobiomodulation (PBM) is defined as the use of non-ionizing photonic energy to trigger photochemical changes, particularly in mitochondrial-sensitive cellular structures. Photobiomodulation is a practical and noninvasive form of treatment used in medicine, and it has a significant role in inflammation, ache, and pain reduction, wound healing, and tissue regeneration. It triggers cell proliferation and activity primarily by utilizing light in the near-infrared to visible range (630-1000 nm). This in vitro study comparatively analyzed the most appropriate energy doses at red and near-infrared wavelengths to induce photobiomodulation in keratinocyte cells. Energy densities of 1, 3, and 5 J/cm² from 655 nm and 808 nm diode lasers were used, which might affect the wound healing mechanism and cell proliferation. The potential stimulating effect of photobiomodulation on wound healing and cell proliferation in human keratinocyte cells was analyzed via microscopic imaging of cell morphology, MTT analysis for cell proliferation, and a scratch assay for wound closure after light application. The highest increase in cell viability, at a rate of 112.6%, was obtained after the triple treatment at 655 nm wavelength and 1 J/cm². The best wound closure, at a rate of 45%, was achieved after the triple treatment at 655 nm wavelength and 3 J/cm². This study revealed that PBM at a wavelength of 655 nm is an effective tool to induce cell proliferation and speed up the wound healing process at specific energy doses.
Citations: 4
Classification and Statistical Analysis of Schizophrenic and Normal EEG Time Series
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299246
Delal Şeker, M. S. Özerdem
In this study, the aim is to discriminate normal and schizophrenic EEG using linear features with different classifiers. For this purpose, 1 minute of 16-channel EEG was recorded from 39 normal subjects and 39 schizophrenia patients, and minimum, maximum, mean, standard deviation, and median features were extracted from these records. k-nearest neighbors, multi-layer perceptron, support vector machine, and random forest classifiers were applied to the feature vectors extracted from each channel. The highest classification accuracy reached in the proposed work is 99.95%. While the MLP appears to be the best classifier, channel C4 is observed to be the most relevant for discriminating schizophrenic EEG from the healthy control group. Independent-sample t-tests and Mann-Whitney U tests performed for statistical analysis show distinct statistical significance across all channels. The obtained results are promising and contribute to the literature in view of related work.
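A minimal sketch of the per-channel feature extraction and classifier comparison described above is given below; data loading, epoch lengths, and cross-validation settings are assumptions, since the abstract specifies only the five statistics and the four classifier families.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def channel_features(epoch):
    """Five summary statistics of a single-channel EEG segment."""
    return np.array([epoch.min(), epoch.max(), epoch.mean(),
                     epoch.std(), np.median(epoch)])

def evaluate(X, y):
    """Compare the four classifier families from the study on one channel's features."""
    classifiers = {
        "kNN": KNeighborsClassifier(),
        "MLP": MLPClassifier(max_iter=1000),
        "SVM": SVC(),
        "RF": RandomForestClassifier(),
    }
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in classifiers.items()}
```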
Citations: 0
Classification of Brain Tumors via Deep Learning Models
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299231
Kaya Dağlı, O. Eroğul
Brain tumors threaten human health significantly. Misdiagnosis of these tumors reduces the effectiveness of intervention decisions and harms the patient's state of health. The conventional method of differentiating brain tumors is the inspection of magnetic resonance images by clinicians. Since there are various types of brain tumors and many images that clinicians must examine, this method is both prone to human error and very time-consuming. In this study, the most common brain tumor types (glioma, meningioma, and pituitary tumor) are classified using deep learning models. While the main objective of this study is a high rate of accuracy, the time spent is also examined. The aim of this study is to ease clinicians' workload and provide a time-efficient classification system. The system that has been built has an accuracy of up to 90%.
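Since the abstract does not name the specific deep learning models used, the following transfer-learning sketch is only one plausible setup for the three tumor classes; the MobileNetV2 backbone, input size, and classification head are assumptions, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3))
base.trainable = False  # train only the new classification head at first

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.3)(x)
out = layers.Dense(3, activation="softmax")(x)  # glioma / meningioma / pituitary

model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```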
Citations: 2
TIPTEKNO 2020 Cover Page
Pub Date : 2020-11-19 DOI: 10.1109/tiptekno50054.2020.9299271
{"title":"TIPTEKNO 2020 Cover Page","authors":"","doi":"10.1109/tiptekno50054.2020.9299271","DOIUrl":"https://doi.org/10.1109/tiptekno50054.2020.9299271","url":null,"abstract":"","PeriodicalId":426945,"journal":{"name":"2020 Medical Technologies Congress (TIPTEKNO)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128612995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Inception-ResNet-v2 with Leakyrelu and Averagepooling for More Reliable and Accurate Classification of Chest X-ray Images
Pub Date : 2020-11-19 DOI: 10.1109/TIPTEKNO50054.2020.9299232
Ahmet Demir, F. Yilmaz
Pneumonia is one of the most commonly seen illnesses in the world, and its diagnosis needs some expertise. Computer-aided diagnosis methods are used extensively in many fields, including health care. This study uses the Inception-ResNet-v2 deep learning architecture, and classification is first performed with this architecture. The ReLU activation functions in the network architecture are then replaced with LeakyReLU activation functions and the classification task is repeated. After that, all of the max-pooling layers in the network architecture are replaced with average-pooling layers and the classification task is run again. Lastly, these separate changes to the network architecture are combined in one network and the classification task is performed once more with the new architecture. Four experiments are done in total and their results are compared. The best case, with a sensitivity of 93.16% and a specificity of 93.59%, is obtained with Inception-ResNet-v2 using LeakyReLU and average pooling together.
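A hedged sketch of the two modifications follows. The paper replaces ReLU and max-pooling layers inside Inception-ResNet-v2 itself, which would require rebuilding the backbone; this shorter illustration applies the same two ideas (LeakyReLU and average pooling) only in a classification head on top of the stock Keras backbone, with the binary pneumonia/normal output assumed.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.InceptionResNetV2(include_top=False, weights=None,
                                               input_shape=(299, 299, 3))

x = layers.GlobalAveragePooling2D()(base.output)  # average pooling instead of max
x = layers.Dense(256)(x)
x = layers.LeakyReLU()(x)                         # LeakyReLU instead of ReLU
out = layers.Dense(1, activation="sigmoid")(x)    # assumed pneumonia vs. normal output
model = models.Model(base.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```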
Citations: 9