
2019 IEEE International Conference on Imaging Systems and Techniques (IST): Latest Publications

Semi-Automated Image Analysis Methodology to Investigate Intracellular Heterogeneity in Immunohistochemical Stained Sections
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010370
R. Hamoudi, S. Hammoudeh, Arab M. Hammoudeh, S. Rawat
The discovery of tissue heterogeneity revolutionized the existing knowledge regarding the cellular, molecular, and pathophysiological mechanisms in biomedicine. Therefore, basic science investigations were redirected to encompass observation at the classical and quantum biology levels. Various approaches have been developed to investigate and capture tissue heterogeneity; however, these approaches are costly and not compatible with all types of samples. In this paper, we propose an approach to quantify heterogeneous cellular populations by combining histology and image processing techniques. In this approach, images of immunohistochemically stained sections are processed through color binning of DAB-stained cells (brown) and non-stained cells (blue) to select cellular clusters expressing biomarkers of interest. Subsequently, the images are converted to a binary format through threshold modification (threshold ~60%) in grey scale. The cell count is extrapolated from the binary images using the particle analysis tool in ImageJ. This approach was applied to quantify progesterone receptor expression levels in a breast cancer cell line sample. The results of the proposed approach were found to closely reflect those of manual counting. Through this approach, quantitative measures can be added to qualitative observation of subcellular target expression.
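As an illustration of the color-binning, thresholding, and particle-counting steps described above, here is a minimal Python sketch using scikit-image in place of ImageJ. The ~60% threshold comes from the abstract; the file name and the minimum-area filter are assumptions added for illustration.

```python
# Minimal sketch of the described pipeline (scikit-image instead of ImageJ).
import numpy as np
from skimage import io, color, measure

rgb = io.imread("ihc_section.png")[..., :3]   # placeholder path for a stained section

# Color binning: unmix the stains; rgb2hed yields (Hematoxylin, Eosin, DAB) channels.
hed = color.rgb2hed(rgb)
dab = hed[..., 2]                             # DAB (brown) channel = stained cells

# Normalize to grey scale and binarize at the ~60% threshold from the paper.
dab_norm = (dab - dab.min()) / (dab.max() - dab.min())
binary = dab_norm > 0.60

# Particle analysis: label connected components and count stained cells.
labels = measure.label(binary)
cells = [r for r in measure.regionprops(labels) if r.area > 50]  # size cutoff assumed
print(f"DAB-positive cell count: {len(cells)}")
```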
Citations: 1
A New System for Lung Cancer Diagnosis based on the Integration of Global and Local CT Features
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010466
A. Shaffie, A. Soliman, H. A. Khalifeh, F. Taher, M. Ghazal, N. Dunlap, Adel Said Elmaghraby, R. Keynton, A. El-Baz
Lung cancer is the leading cause of cancer deaths for both men and women worldwide, which is why creating systems for early diagnosis with machine learning algorithms and minimal user intervention is of huge importance. In this manuscript, a new system for lung nodule diagnosis, using features extracted from a single computed tomography (CT) scan, is presented. This system integrates global and local features to give an indication of the nodule's prior growth rate, which is the main point for diagnosis of pulmonary nodules. A 3D adjustable local binary pattern and some basic geometric features are used to extract the global nodule features, and the local features are extracted using 3D convolutional neural networks (3D-CNN) because of their ability to exploit the spatial correlation of input data in an efficient way. Finally, all these features are integrated using an autoencoder to give a final diagnosis of the lung nodule as either benign or malignant. The system was evaluated using 727 nodules extracted from the Lung Image Database Consortium (LIDC) dataset. The proposed system's diagnostic accuracy, sensitivity, and specificity were 92.20%, 93.55%, and 91.20%, respectively. The proposed framework demonstrated its promise as a valuable tool for lung cancer detection, as evidenced by its higher accuracy.
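The fusion stage described above can be sketched as follows in PyTorch: global (3D LBP + geometry) and local (3D-CNN) feature vectors are concatenated and compressed by an autoencoder whose bottleneck feeds a benign/malignant classifier. All layer sizes here are assumptions; the abstract does not specify them.

```python
# Illustrative sketch of the autoencoder-based feature integration, assuming
# 64 global and 128 local features per nodule (dimensions not given in the paper).
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    def __init__(self, global_dim=64, local_dim=128, bottleneck=32):
        super().__init__()
        d = global_dim + local_dim
        self.encoder = nn.Sequential(nn.Linear(d, 96), nn.ReLU(),
                                     nn.Linear(96, bottleneck), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 96), nn.ReLU(),
                                     nn.Linear(96, d))
        self.classifier = nn.Linear(bottleneck, 2)   # benign vs. malignant logits

    def forward(self, g, l):
        x = torch.cat([g, l], dim=1)   # integrate global and local features
        z = self.encoder(x)            # compressed joint representation
        return self.decoder(z), self.classifier(z)

# Usage on a dummy batch of 4 nodules:
g, l = torch.randn(4, 64), torch.randn(4, 128)
recon, logits = FusionAutoencoder()(g, l)
```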
Citations: 1
Welcome Message from the Chairman
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010565
J. Scharcanski
On behalf of the Technical and Local Committee of the 2019 IEEE International Conference on Imaging Systems and Techniques (IST 2019) and IEEE International School on Imaging, I welcome you to Abu Dhabi, UAE.
Citations: 0
An Efficient Human Activity Recognition Framework Based on Wearable IMU Wrist Sensors
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010115
A. Ayman, Omneya Attallah, H. Shaban
Lately, Human Activity Recognition (HAR) using wearable sensors has received extensive research attention for its great use in human health performance evaluation across several domains. HAR methods can be embedded in a smart home healthcare model to assist patients and enhance their rehabilitation process. Several types of sensors are currently used for HAR; among them are wearable wrist sensors, which are well suited to delivering valuable information about the patient's grade of ability. Some recent studies have proposed HAR using Machine Learning (ML) techniques. These studies have included non-invasive wearable wrist sensors, such as accelerometers, magnetometers, and gyroscopes. In this paper, a novel framework for HAR using ML based on sensor fusion is proposed. Moreover, a feature selection approach based on Random Forest (RF), Bagged Decision Tree (DT), and Support Vector Machine (SVM) classifiers is employed to select useful features. The proposed framework is evaluated on two publicly available datasets. Numerical results show that our sensor-fusion framework outperforms other methods proposed in the literature.
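A hedged scikit-learn sketch of the feature-selection idea: rank fused accelerometer/magnetometer/gyroscope features with a Random Forest, keep the most informative ones, and classify activities with an SVM. The data shapes, the "top 20" cutoff, and the number of activity classes are placeholders, not values from the paper.

```python
# Sketch of RF-based feature selection followed by SVM classification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))     # placeholder: 500 windows x 60 fused IMU features
y = rng.integers(0, 6, size=500)   # placeholder: 6 activity classes

# Rank features by Random Forest importance, then keep the top 20 (cutoff assumed).
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:20]

# Classify activities on the selected feature subset with an SVM.
svm = SVC(kernel="rbf")
print("CV accuracy:", cross_val_score(svm, X[:, top], y, cv=5).mean())
```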
Citations: 27
Extraction of Radiomic Features from Breast DCE-MRI Responds to Pathological Changes in Patients During Neoadjuvant Chemotherapy Treatment
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010068
Priscilla Dinkar Moyya, Mythili Asaithambi, A. K. Ramaniharan
Breast cancer disorders are a leading cause of morbidity and mortality worldwide. Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI) is the most common method of assessing the response to chemotherapy in breast cancer treatment monitoring. Radiomic features obtained from MR images have potential for reflecting tumor biology. In this work, an attempt has been made to investigate the clinical potential of breast DCE-MRI derived radiomic features and their response to Neoadjuvant Chemotherapy (NAC). The data used in this study (10 patients with 20 studies: Visit-1 and Visit-2) were obtained from the public domain Quantitative Imaging Network (QIN) Breast DCE-MRI database. Using Mazda software, radiomic features were extracted from the whole breast region to quantify the pathological variations between visit-1 and visit-2. In total, 176 texture and shape features were extracted and analyzed statistically using Student's t-test. Results show that the radiomic features were able to differentiate the variations in tumor biology between visit-1 and visit-2 due to NAC. Features such as GeoW2, GeoW3, GeoW4, GeoRs, GeoRc, GeoRm, the 50th percentile of histogram intensity, and Theta1 were found to be statistically significant, with p values ranging from 0.03 to 0.08. Hence it appears that radiomic features could be used as an adjunct measure for reflecting the pathological response during NAC, and thus this study seems to be clinically significant.
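The statistical step can be sketched as follows: compare each of the 176 radiomic features between visits with a t-test and report which ones changed. The feature matrices below are random placeholders, and a paired test is assumed since both visits come from the same 10 patients; the abstract does not state whether the test was paired.

```python
# Sketch of per-feature t-testing between visit-1 and visit-2 (SciPy).
import numpy as np
from scipy import stats

visit1 = np.random.rand(10, 176)   # placeholder for Mazda-extracted features
visit2 = np.random.rand(10, 176)   # (10 patients x 176 texture/shape features)

t, p = stats.ttest_rel(visit1, visit2, axis=0)   # paired across patients (assumed)
significant = np.where(p < 0.1)[0]               # paper reports p in 0.03-0.08
print(f"{significant.size} features changed between visits")
```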
Citations: 3
Studies on a Video Surveillance System Designed for Deep Learning
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010234
Chunfang Xue, Peng Liu, Weiping Liu
This paper proposes a new video surveillance system designed for Deep Learning. The proposed system uses three steps to transform RTSP streams into pictures for Deep Learning: it first decapsulates the streams, then decodes them, and finally converts the color space and extracts frames. The system offers two ways to decode RTSP streams, hardware decoding and software decoding; by first checking the CPU's processor version, it chooses the better decoding method. The proposed system contains both a GPU and a CPU. The CPU is used to process RTSP streams, extract frames, and handle human-machine interaction, while the GPU is used for computing and analyzing the Deep Learning algorithms, so the complex computation does not run on the CPU. The proposed system runs on Linux and has a Python interface, so it can easily connect with Deep Learning models. When run on multiple machines, the results show that the proposed system can process up to 16 channels of streams. After 7x24 hours of testing on several machines, the system can run continuously without downtime, and the delay is less than 7 seconds.
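The stream-to-frames pipeline can be sketched with OpenCV as below: one software-decoded channel standing in for the paper's multi-channel system, which additionally probes the CPU to choose between hardware and software decoding. The URL, the 25 fps assumption, and the one-frame-per-second sampling are placeholders.

```python
# Minimal sketch: decapsulate/decode an RTSP stream, convert color space,
# and extract frames for a downstream Deep Learning model.
import cv2

cap = cv2.VideoCapture("rtsp://camera.local/stream1", cv2.CAP_FFMPEG)  # placeholder URL

frame_id = 0
while cap.isOpened():
    ok, frame_bgr = cap.read()              # decapsulation + decoding handled by FFMPEG
    if not ok:
        break
    if frame_id % 25 == 0:                  # extract ~1 frame/s assuming 25 fps
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)  # rgb would feed the DL model
        cv2.imwrite(f"frame_{frame_id:06d}.png", frame_bgr)  # keep a copy on disk
    frame_id += 1
cap.release()
```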
Citations: 2
Driver Fatigue Detection with Single EEG Channel Using Transfer Learning
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010483
W. Shalash
Decreasing the road accident rate and increasing road safety have been major concerns for a long time, as traffic accidents expose drivers, passengers, and property to danger. Driver fatigue and drowsiness are among the most critical factors affecting road safety, especially on highways. The EEG signal is one of the reliable physiological signals used to perceive the driver's fatigue state, but having to wear a multi-channel headset to acquire the EEG signal limits the adoption of EEG-based systems among drivers. The current work proposes a driver fatigue detection system using transfer learning that depends on only one EEG channel, to increase system usability. The system first acquires the signal and passes it through preprocessing filters, then converts it to a 2D spectrogram. Finally, the 2D spectrogram is classified with AlexNet using transfer learning as either a normal or a fatigue state. The current study compares the accuracy of seven EEG channels to select the most accurate channel to depend on for classification. The results show that channels FP1 and T3 are the most effective channels for indicating the driver fatigue state, achieving accuracies of 90% and 91%, respectively. Therefore, using only one of these channels with the modified AlexNet CNN model can yield an efficient driver fatigue detection system.
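An illustrative sketch of the signal-to-spectrogram-to-AlexNet path: one EEG channel is turned into a log-scaled 2D spectrogram and pushed through a pretrained AlexNet whose final layer is replaced for two classes. The sampling rate, signal length, and preprocessing details are assumptions; only the AlexNet-with-replaced-head idea comes from the abstract.

```python
# Sketch: single EEG channel -> 2D spectrogram -> fine-tuned AlexNet (2 classes).
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from scipy import signal

fs = 256                                   # assumed sampling rate (Hz), not from the paper
eeg = np.random.randn(fs * 60)             # placeholder: one minute of a channel (e.g. FP1)

# Signal -> 2D spectrogram, log-scaled for dynamic range.
f, t, Sxx = signal.spectrogram(eeg, fs=fs, nperseg=256)
img = torch.tensor(np.log1p(Sxx), dtype=torch.float32)
img = img.unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)      # 1x3xFxT pseudo-RGB batch
img = nn.functional.interpolate(img, size=(224, 224))    # AlexNet's expected input size

# Transfer learning: pretrained AlexNet with the last layer swapped for 2 classes.
net = models.alexnet(weights="IMAGENET1K_V1")            # torchvision >= 0.13 API
net.classifier[6] = nn.Linear(4096, 2)                   # normal vs. fatigue logits
logits = net(img)
```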
Citations: 17
Big Data driven U-Net based Electrical Capacitance Image Reconstruction Algorithm
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010423
Xinmeng Yang, Chaojie Zhao, Bing Chen, Maomao Zhang, Yi Li
An efficient electrical capacitance image reconstruction method that combines a fully connected neural network with the U-Net structure is put forward for the first time in the electrical capacitance tomography (ECT) area in this paper. The target of ECT image reconstruction can be considered an image segmentation problem, which is exactly what the U-Net structure is designed for. In this paper, the Convolutional Neural Network (CNN) based U-Net structure is used to improve the quality of images reconstructed by ECT. Firstly, about 60,000 data samples with different patterns are generated by the co-simulation of COMSOL Multiphysics and MATLAB. Then a fully connected neural network (FC) is used to pre-process these samples to obtain initial reconstructions, which are not accurate enough. Finally, the U-Net structure is used to further process those pre-trained images and outputs reconstructed images with both high speed and high quality. The robustness, generalization, and practicability of the U-Net structure are demonstrated. As stated in Section 2, the U-Net structure matches ECT image reconstruction problems well due to its autoencoder structure. The preliminary results show that the image reconstruction results obtained by the U-Net network are much better than those of the fully connected neural network algorithm, the traditional Linear Back Projection (LBP) algorithm, and the Landweber iteration method.
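The two-stage idea can be sketched as follows: a fully connected network maps the capacitance vector to a coarse image, and a small U-Net-style encoder-decoder with a skip connection refines it. The sizes (66 measurements, a 64x64 grid, channel widths, one down/up level) are assumptions standing in for the paper's actual architecture.

```python
# Hedged sketch of the FC-then-U-Net reconstruction pipeline (PyTorch).
import torch
import torch.nn as nn

class FCInit(nn.Module):
    """Stage 1: fully connected mapping from capacitance vector to a coarse image."""
    def __init__(self, n_meas=66, side=64):
        super().__init__()
        self.side = side
        self.net = nn.Sequential(nn.Linear(n_meas, 1024), nn.ReLU(),
                                 nn.Linear(1024, side * side))
    def forward(self, c):
        return self.net(c).view(-1, 1, self.side, self.side)

class MiniUNet(nn.Module):
    """Stage 2: one-level encoder-decoder with a skip connection, U-Net style."""
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                  nn.MaxPool2d(2))
        self.up = nn.Sequential(nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU())
        self.out = nn.Conv2d(17, 1, 3, padding=1)   # 16 upsampled + 1 skip channel
    def forward(self, x):
        u = self.up(self.down(x))
        return torch.sigmoid(self.out(torch.cat([u, x], dim=1)))

coarse = FCInit()(torch.randn(8, 66))    # batch of 8 capacitance vectors
refined = MiniUNet()(coarse)             # refined 64x64 permittivity images
```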
Citations: 5
Anomaly Detection Combining Discriminative and Generative Models
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010139
Kyota Higa, Hideaki Sato, Soma Shiraishi, Katsumi Kikuchi, K. Iwamoto
This paper proposes a method to accurately detect anomalies in an image by combining features extracted by discriminative and generative models. Automatic anomaly detection is a key factor in reducing the operating costs of visual inspection in a wide range of domains. The proposed method consists of three sub-networks. The first sub-network is a convolutional neural network serving as a discriminative model that extracts features to distinguish between anomalous and normal. The second sub-network is a variational autoencoder serving as a generative model that extracts features representing normal data. The third sub-network is a neural network that discriminates between anomalous and normal on the basis of the features from the discriminative and generative models. Experiments were conducted using pseudo-anomalous images generated by superimposing anomalies manually extracted from real images. The results show that the proposed method improves the area under the curve by 0.08-0.33 points compared with a conventional method. With such high accuracy, automatic visual inspection systems can be implemented to reduce operating costs.
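A minimal sketch of the three-sub-network combination: features from a discriminative CNN and from a VAE encoder are concatenated and scored by a small classifier head. The backbones below are trivial stand-ins; the paper's exact architectures and training losses are not given in the abstract.

```python
# Sketch: discriminative + generative feature fusion for anomaly scoring.
import torch
import torch.nn as nn

cnn_features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())   # discriminative branch
vae_encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d(1), nn.Flatten())    # generative (VAE) branch
head = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))    # anomaly vs. normal

x = torch.randn(4, 3, 128, 128)                      # batch of inspection images
score = head(torch.cat([cnn_features(x), vae_encoder(x)], dim=1))
```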
Citations: 1
Classification of different vehicles in traffic using RGB and Depth images: A Fast RCNN Approach
Pub Date: 2019-12-01 | DOI: 10.1109/IST48021.2019.9010357
Mohan Kashyap Pargi, B. Setiawan, Y. Kazama
The Fast RCNN framework generally utilizes region proposals generated from RGB images for object classification and detection. This paper describes vehicle classification employing the Fast RCNN framework, utilizing the information provided by the combination of depth images and RGB images in the form of region proposals for object detection and classification. We use this underlying system architecture to perform evaluations on the Indian and Thailand vehicle traffic datasets. Overall, for the Indian dataset we achieve a mAP of 72.91% using RGB region proposals and a mAP of 73.77% using RGB combined with depth proposals; for the Thailand dataset, a mAP of 80.61% on RGB region proposals and a mAP of 81.25% on RGB combined with depth region proposals. Our results show that the mAP performance of RGB combined with depth region proposals is slightly better than that of region proposals generated using RGB images only. Furthermore, we provide insights into the per-vehicle AP (Average Precision) performance on the Thailand dataset and into how effective region proposal generation is crucial for object detection using the Fast RCNN framework.
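The proposal-fusion idea can be sketched as below using OpenCV's selective search (from opencv-contrib-python); note this is a stand-in, since the abstract does not name the proposal generator. Proposals from the RGB image and from the depth map are simply concatenated before being fed to the Fast RCNN head; the image paths are placeholders.

```python
# Hedged sketch: merge region proposals from RGB and depth images.
import cv2
import numpy as np

def proposals(img):
    ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
    ss.setBaseImage(img)
    ss.switchToSelectiveSearchFast()
    return ss.process()                      # array of (x, y, w, h) boxes

rgb = cv2.imread("traffic_rgb.png")                          # placeholder paths
depth = cv2.imread("traffic_depth.png", cv2.IMREAD_GRAYSCALE)
depth3 = cv2.cvtColor(depth, cv2.COLOR_GRAY2BGR)             # selective search needs 3 channels

fused = np.vstack([proposals(rgb), proposals(depth3)])       # RGB + depth region proposals
print(f"{len(fused)} fused proposals for the Fast RCNN head")
```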
Citations: 3