
Latest publications from the International Symposium on Image Computing and Digital Medicine

Multitask Learning for Pathomorphology Recognition of Squamous Intraepithelial Lesion in Thinprep Cytologic Test
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286013
Li Liu, Yuanhua Wang, Dongdong Wu, Yongping Zhai, L. Tan, Jingjing Xiao
This paper presents a multitask learning network for pathomorphology recognition of squamous intraepithelial lesions in the Thinprep Cytologic Test. Detecting pathological cells is a challenging task due to large variations in cell appearance and subtle, hard-to-distinguish changes in pathological cells. In addition, the high resolution of scanned cell images places a further demand on the efficiency of the detection algorithm. We therefore propose a multi-task learning network that aims to keep a good balance between performance and computational efficiency. First, we transfer knowledge from a pre-trained VGG16 network to extract low-level features, which alleviates the problem caused by limited training data. Then, potential regions of interest are generated by our proposed task-oriented anchor network. Finally, a fully convolutional network is applied to accurately estimate the positions of the cells and classify their corresponding labels. To demonstrate the effectiveness of the proposed method, we constructed a dataset that was cross-verified by two pathologists. In the tests, we compare our method with state-of-the-art detection algorithms, i.e. YOLO [1] and Faster R-CNN [2], both re-trained on our dataset. The results show that our method achieves the best detection accuracy with high computational efficiency, taking only half the time of Faster R-CNN.
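As a rough illustration of the transfer step described above, the sketch below (not the authors' code) freezes the early convolutional blocks of an ImageNet-pretrained VGG16 so they can serve as a shared low-level feature extractor; the cut-off point, patch size and variable names are assumptions made for the example.

```python
# Minimal sketch: reuse the early blocks of a pretrained VGG16 as a frozen
# low-level backbone, as described for mitigating the small-training-data issue.
import torch
import torchvision.models as models

vgg16 = models.vgg16(pretrained=True)
# Keep only the first convolutional blocks (through the second max-pool).
backbone = torch.nn.Sequential(*list(vgg16.features.children())[:10])
for p in backbone.parameters():
    p.requires_grad = False          # transferred weights stay fixed

dummy_patch = torch.randn(1, 3, 512, 512)   # hypothetical cytology patch
features = backbone(dummy_patch)            # shared features for the downstream tasks
print(features.shape)                       # torch.Size([1, 128, 128, 128])
```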
Citations: 3
A Region-Bias Fitting Model based Level Set for Segmenting Images with Intensity Inhomogeneity
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286015
Hai Min, Wei Jia, Yang Zhao
Intensity inhomogeneity is a common phenomenon in real-world images and inevitably causes many difficulties for accurate image segmentation. This paper proposes a novel region-based model, named the Region-Bias Fitting (RBF) model, for segmenting images with intensity inhomogeneity by introducing a desirable constraint term based on region bias. Specifically, we first propose a constraint term that includes both the intensity bias and distance information to constrain the local intensity variance of the image. Then, the constraint term is used to construct the local bias constraint and to determine the contribution of each local region so that the image intensity is fitted accurately. Finally, we use the level set method to construct the final energy functional. By using the novel constraint information, the proposed RBF model can accurately delineate the object boundary, relying on the local statistical intensity bias and local intensity fitting to improve the segmentation results. To validate the effectiveness of the proposed method, we conduct thorough experiments on synthetic and real images. The experimental results show that the proposed RBF model clearly outperforms the compared models.
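For readers unfamiliar with local region fitting, the sketch below shows a generic local intensity-fitting data term of the kind such level-set models build on: each pixel is compared against the mean of its own window, so a slowly varying bias is absorbed locally. It is illustrative only; the window size and the exact form of the paper's region-bias constraint are not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_fitting_energy(image, phi, window=15):
    """Generic local region-fitting data term for a level set.
    image : 2-D array; phi : level-set function (inside where phi > 0)."""
    inside = (phi > 0).astype(float)
    outside = 1.0 - inside
    eps = 1e-8
    # Local means of the two regions within each sliding window.
    f1 = uniform_filter(image * inside, window) / (uniform_filter(inside, window) + eps)
    f2 = uniform_filter(image * outside, window) / (uniform_filter(outside, window) + eps)
    # Pixel-wise squared fitting errors against the local region means.
    e1 = (image - f1) ** 2 * inside
    e2 = (image - f2) ** 2 * outside
    return (e1 + e2).sum()
```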
Citations: 1
Contour-based Historical Building Image Matching
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286003
Gang Wu, Zhaohe Wang, Jialin Li, Z. Yu, Baiyou Qiao
With the rapid development of cities, buildings and their surrounding scenes at the same location have undergone huge temporal and spatial changes. At present, people generally lack the technical means to access knowledge about the protection of urban architecture, which leads to a lack of publicity and education on the subject; this is why architectural heritage is gradually forgotten by the public. Comparing images of historical buildings from different periods is therefore an effective means of enhancing public awareness and the protection of urban history. In this paper, based on the typical characteristics of urban building images, a contour-based historical building image matching algorithm is proposed. We improve the edge detection algorithm with a new operator and adopt a local threshold automatic adjustment strategy. Before matching, short lines that can be merged are aggregated to highlight image features and improve the matching rate. By effectively extracting and matching building contours, the algorithm can accurately match images from different historical periods despite their differences. The experiments show that, compared with the baseline algorithm, our proposed algorithm is more sensitive to gradient changes in multiple directions and extracts detailed edges better.
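The sketch below illustrates the general idea of edge detection with a locally adjusted threshold: the gradient magnitude at each pixel is compared against a mean-plus-k-standard-deviations estimate from its own neighbourhood instead of one global value. The operator and the adjustment rule are assumptions for illustration, not the authors' exact design.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def local_threshold_edges(image, window=31, k=1.5):
    image = image.astype(float)
    gx = sobel(image, axis=1)
    gy = sobel(image, axis=0)
    magnitude = np.hypot(gx, gy)
    # Local threshold = local mean + k * local standard deviation.
    mean = uniform_filter(magnitude, window)
    sq_mean = uniform_filter(magnitude ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return magnitude > (mean + k * std)      # boolean edge map
```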
Citations: 1
Non-Rigid Point Set Registration via Gaussians Mixture Model with Local Constraints
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286011
Kai Yang, Xianhui Liu, Yufei Chen, Haotian Zhang, W. Zhao
In the point set registration problem, the local features of a point set are as important as its global features. In this paper, a non-rigid point set registration method based on a probability model with local constraints is proposed. First, a Gaussian mixture model (GMM) is used to determine the global relationship between the two point sets. Second, local constraints provided by the k nearest neighbor points help to estimate the transformation better. Third, the transformation between the two point sets is computed in a reproducing kernel Hilbert space (RKHS). Finally, the expectation-maximization (EM) algorithm is used for maximum likelihood estimation of the parameters. Comparative experiments on synthesized data show that our algorithm is more robust to distortions such as deformation, noise, and outliers. Our method is also applied to retinal image registration and obtains very good results.
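The sketch below shows the E-step of a plain GMM-based registration, i.e. the soft correspondences that EM methods of this kind estimate, assuming isotropic Gaussians and a uniform outlier component; the paper's k-nearest-neighbour local constraint and the RKHS transformation solve are omitted.

```python
import numpy as np

def e_step(moving, target, sigma2, w=0.1):
    """Posterior responsibilities P(moving_m | target_n) for a GMM whose
    components are centred on the moving points, with outlier weight w."""
    M, D = moving.shape
    N = target.shape[0]
    diff2 = ((target[:, None, :] - moving[None, :, :]) ** 2).sum(-1)   # N x M
    gauss = np.exp(-diff2 / (2.0 * sigma2))
    denom = gauss.sum(axis=1, keepdims=True) \
        + w / (1.0 - w) * M * (2.0 * np.pi * sigma2) ** (D / 2.0) / N
    return gauss / denom                                               # N x M
```

In a full EM registration loop, these responsibilities would drive the M-step that re-estimates the non-rigid transformation and the variance sigma2.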
Citations: 0
Classification of Lung Tissue with Cystic Fibrosis Lung Disease via Deep Convolutional Neural Networks
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286020
Xi Jiang, Hualei Shen
Quantitative classification of disease regions in lung tissue obtained from Computed Tomography (CT) scans is one of the key steps in evaluating the lesion severity of Cystic Fibrosis Lung Disease (CFLD). In this paper, we propose a deep Convolutional Neural Network (CNN) based framework for automatic classification of lung tissue with CFLD. The core of the framework is the integration of deep CNNs into the classification workflow. To train and validate the deep CNNs, we build separate datasets for inspiration CT scans and expiration CT scans. We employ transfer learning techniques to fine-tune the parameters of the deep CNNs. Specifically, we train ResNet-18 and ResNet-34 and validate their performance on the built datasets. Experimental results in terms of average precision and the receiver operating characteristic curve demonstrate the effectiveness of deep CNNs for classification of lung tissue with CFLD.
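A minimal sketch of the described transfer-learning setup, assuming a two-class problem and full fine-tuning (the abstract does not state which layers, if any, are frozen): an ImageNet-pretrained ResNet-18 whose final layer is replaced before fine-tuning on the CT patches.

```python
import torch
import torch.nn as nn
import torchvision.models as models

num_classes = 2                                  # assumed: diseased vs. healthy tissue
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)   # new classification head

# Fine-tune all parameters with a small learning rate; freezing the early
# layers instead would be an equally plausible reading of the abstract.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```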
Citations: 3
Adaptive Sampling for GPU-based 3-D Volume Rendering
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286002
Chun-han Zhang, Hao Yin, Shanghua Xiao
3-D interactive volume rendering can be rather costly with conventional ray casting based on simple sampling and texture mapping. Owing to hardware resource limitations, volume rendering algorithms are considerably time-consuming, so an adaptive sampling technique is proposed to tackle the problem of excessive computational cost. In this paper, as an optimization of parallelized ray-casting volume rendering, we propose an adaptive sampling method that reduces the number of sampling points through a non-linear sampling function. The method is effective at trading off performance against rendering quality. Our experimental results demonstrate that the proposed adaptive sampling method achieves high computational efficiency and produces high-quality images as measured by the MSE and SSIM metrics.
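As one possible reading of "reducing the number of sampling points through a non-linear sampling function", the sketch below spaces ray samples non-uniformly along each ray, concentrating them near the entry point; the power-law form and its exponent are assumptions for illustration, not parameters reported in the paper.

```python
import numpy as np

def adaptive_sample_positions(near, far, n_samples, exponent=2.0):
    """Non-linear (power-law) sample spacing along a single cast ray."""
    t = np.linspace(0.0, 1.0, n_samples) ** exponent   # dense near the ray entry
    return near + (far - near) * t

positions = adaptive_sample_positions(0.0, 1.0, 64)
print(positions[:5])   # first few samples cluster near the near plane
```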
Citations: 0
Adaptive Image De-noising Method Based on Spatial Autocorrelation
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286023
Ronghui Lu, Tzong-Jer Chen
An adaptive image de-noising method based on spatial autocorrelation is proposed to effectively remove image noise while preserving structural information. A residual image is obtained by subtracting an average-filtered version from the original image; this high-pass residual is a combination of boundaries and noise. The autocorrelation of each pixel is calculated on the residual image, and the image is then adaptively filtered based on the autocorrelation values. The results show that the adaptive filtering quality on the Lena image is significantly better than that of global image filtering. The method was also applied to a simulated Hoffman phantom PET image for validation, with the same results. In summary, the spatial autocorrelation is calculated on the high-pass residual image and adaptive de-noising is then performed. The proposed method will be further developed and applied to image de-noising and image quality improvement.
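The sketch below follows the described pipeline under stated assumptions: the residual of an average filter is taken as the high-pass image, its local spatial autocorrelation is approximated by a windowed correlation with a one-pixel shift, and the final blending rule between the original and smoothed images is illustrative rather than the paper's.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_denoise(image, window=7):
    image = image.astype(float)
    smoothed = uniform_filter(image, window)
    residual = image - smoothed                    # high-pass: edges + noise
    shifted = np.roll(residual, 1, axis=0)
    eps = 1e-8
    # Windowed normalized correlation between the residual and its shift.
    num = uniform_filter(residual * shifted, window)
    den = np.sqrt(uniform_filter(residual ** 2, window)
                  * uniform_filter(shifted ** 2, window)) + eps
    autocorr = np.clip(num / den, 0.0, 1.0)        # ~1 on structure, ~0 on noise
    # Keep the original where the residual is structured, smooth elsewhere.
    return autocorr * image + (1.0 - autocorr) * smoothed
```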
Citations: 3
Degree Evaluation of Facial Nerve Paralysis by Combining LBP and Gabor Features
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286028
Fei Xie, Yuchen Ma, Zeting Pan, Xinmin Guo, Jun Liu, G. Gao
Context: Facial paralysis severely affects both the mental and physical health of patients. Most existing studies evaluate the degree of facial paralysis based on subjective judgments, even though the definition of facial paralysis itself is ambiguous; this results in low evaluation accuracy and even misdiagnosis. Objective: We propose a method for assessing the degree of facial paralysis that considers both static facial asymmetry and dynamic transformation factors. Method: The method compares the differences between corresponding local areas on the two sides of the face and thereby analyzes the asymmetry of the abnormal face effectively. Quantitative assessment of facial asymmetry involves three steps: locating local facial areas, extracting asymmetric features, and quantifying the asymmetry of the bilateral surfaces. We combine static and dynamic quantification to build facial palsy grading models that assess the extent of facial palsy. Results: We report an empirical study on 320 pictures of 40 patients. Although the accuracy of the experimental tests does not reach the ideal level, it exceeds 80%. Conclusion: Using our facial paralysis database of 40 patients, the experiments show that our method achieves encouraging effectiveness.
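A hedged sketch of the kind of per-region feature vector the title implies, an LBP histogram concatenated with Gabor response statistics; the filter parameters are assumptions, and landmark detection, region cropping, and the grading model are omitted.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor

def lbp_gabor_features(region, lbp_points=8, lbp_radius=1):
    """Feature vector for one facial region (2-D grayscale patch)."""
    # Uniform LBP histogram (lbp_points + 2 possible codes).
    lbp = local_binary_pattern(region, lbp_points, lbp_radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                           range=(0, lbp_points + 2), density=True)
    # Mean/std of Gabor responses at four orientations.
    gabor_stats = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        real, _ = gabor(region, frequency=0.2, theta=theta)
        gabor_stats += [real.mean(), real.std()]
    return np.concatenate([hist, gabor_stats])
```

Comparing such vectors between the left and right halves of the face would give one way to quantify the bilateral asymmetry the abstract describes.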
Citations: 1
Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286005
Dejun Shi, Yaling Pan, Chunlei Liu, Yao Wang, D. Cui, Yong Lu
Automatic localization and segmentation of vertebral bodies in CT volumes has many clinical uses, such as shape analysis. Variation in vertebra appearance, unknown fields of view, and pathologies pose several challenges for these tasks. Most previous studies targeted the whole vertebra, and their algorithms, though highly accurate, placed high demands on hardware and took longer than is feasible in daily clinical practice. We developed a two-step algorithm that localizes and segments just the vertebral bodies by exploiting the intensity pattern along the front spinal region, together with GPU-accelerated convolutional neural networks. First, we designed a 2D U-net variant to extract the front spinal region, from which the vertebra centroids were localized using the M-method and a 3D region of interest was generated for each vertebra. Second, we developed a 3D U-net with an inception module using dilated convolutions to segment the vertebral bodies within the 3D ROIs. We trained the two U-nets on 61 annotated CT volumes. Tested on three unseen CTs, our method achieved an identification rate of 92%, a detection error of 0.74 mm, and a Dice coefficient of 0.8 for the 3D segmentation, using less than 10 seconds per case.
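The exact block used in the paper is not specified, so the following is only a plausible sketch of a 3D "inception module using dilated convolution": parallel 3x3x3 convolutions with different dilation rates whose outputs are concatenated along the channel axis; channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedInception3D(nn.Module):
    """Parallel dilated 3-D convolutions, concatenated channel-wise."""
    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)          # padding=d keeps the spatial size
        ])

    def forward(self, x):
        return torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)

block = DilatedInception3D(in_ch=16, branch_ch=8)
out = block(torch.randn(1, 16, 32, 64, 64))   # -> (1, 24, 32, 64, 64)
```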
Citations: 8
Automatic Diagnosing of Infant Hip Dislocation Based on Neural Network
Pub Date : 2018-10-13 DOI: 10.1145/3285996.3286021
Xiang Yu, Dongyun Lin, Weiyao Lan, Bingan Zhong, Ping Lv
In this paper, we propose an automatic diagnosis method based on a neural network to detect infant hip joint dislocation from ultrasound images. The proposed method consists of two procedures: pre-processing of the infant hip joint ultrasound images and diagnosis via the neural network. Pre-processing focuses on extracting regions of interest from the ultrasound images; the extracted result is then fed to the trained neural network. Finally, the output of the neural network assigns the infant hip to one of two categories, dislocation or non-dislocation. Experimental results show that our method reaches an overall accuracy of 97%, a specificity of 100%, and a sensitivity of 86%, which demonstrates that it is capable of clinical detection of infant hip dislocation.
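The abstract does not describe the network architecture, so the sketch below is only a generic convolutional classifier of the sort that could map a pre-processed ultrasound region of interest to the two classes (dislocation vs. non-dislocation); the input size and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),                       # dislocation / non-dislocation
)

logits = classifier(torch.randn(1, 1, 64, 64))   # assumed 64x64 grayscale ROI
print(logits.shape)                              # torch.Size([1, 2])
```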
Citations: 0