Multitask Learning for Pathomorphology Recognition of Squamous Intraepithelial Lesion in Thinprep Cytologic Test
Li Liu, Yuanhua Wang, Dongdong Wu, Yongping Zhai, L. Tan, Jingjing Xiao
This paper presents a multitask learning network for pathomorphology recognition of squamous intraepithelial lesions in the Thinprep Cytologic Test. Detecting pathological cells is a challenging task due to large variations in cell appearance and the subtle, hard-to-distinguish changes in pathological cells. In addition, the high resolution of scanned cell images demands an efficient detection algorithm. We therefore propose a multitask learning network that aims to keep a good balance between performance and computational efficiency. First, we transfer knowledge from a pre-trained VGG16 network to extract low-level features, which alleviates the problem caused by limited training data. Then, potential regions of interest are generated by our proposed task-oriented anchor network. Finally, a fully convolutional network is applied to accurately estimate the positions of the cells and classify their corresponding labels. To demonstrate the effectiveness of the proposed method, we constructed a dataset that was cross-verified by two pathologists. In the tests, we compare our method to state-of-the-art detection algorithms, i.e., YOLO [1] and Faster R-CNN [2], both re-trained on our dataset. The results show that our method achieves the best detection accuracy with high computational efficiency, taking only half the time of Faster R-CNN.
{"title":"Multitask Learning for Pathomorphology Recognition of Squamous Intraepithelial Lesion in Thinprep Cytologic Test","authors":"Li Liu, Yuanhua Wang, Dongdong Wu, Yongping Zhai, L. Tan, Jingjing Xiao","doi":"10.1145/3285996.3286013","DOIUrl":"https://doi.org/10.1145/3285996.3286013","url":null,"abstract":"This paper presents a multitask learning network for pathomorphology recognition of squamous intraepithelial lesion in Thinprep Cytologic Test. Detecting pathological cells is a quite challenging task due to large variations in cell appearance and indistinguishable changes in pathological cells. In addition, the high resolution of scanned cell images poses a further demand for efficient detection algorithm. Therefore, we propose a multi-task learning network aims at keeping a good balance between performance and computational efficiency. First we transfer knowledge from pre-trained VGG16 network to extract low level features, which alleviate the problem caused by small training data. Then, the potential regions of interest are generated by our proposed task oriented anchor network. Finally, a fully convolutional network is applied to accurately estimate the positions of the cells and classify its corresponding labels. To demonstrate the effectiveness of the proposed method, we conducted a dataset which is cross verified by two pathologists. In the test, we compare our method to the state-of-the-art detection algorithms, i.e. YOLO [1], and Faster-rcnn [2], which were both re-trained using our dataset. The results show that our method achieves the best detection accuracy with high computational efficiency, which only takes half time compared to Faster-rcnn.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"89 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115504589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Region-Bias Fitting Model based Level Set for Segmenting Images with Intensity Inhomogeneity
Hai Min, Wei Jia, Yang Zhao
Intensity inhomogeneity is a common phenomenon in real-world images and inevitably causes difficulties for accurate image segmentation. This paper proposes a novel region-based model, named the Region-Bias Fitting (RBF) model, for segmenting images with intensity inhomogeneity by introducing a constraint term based on region bias. Specifically, we first propose a constraint term that includes both the intensity bias and distance information to constrain the local intensity variance of the image. Then, the constraint term is utilized to construct the local bias constraint and determine the contribution of each local region so that the image intensity is fitted accurately. Finally, we use the level set method to construct the final energy functional. By using this novel constraint information, the proposed RBF model can accurately delineate object boundaries, relying on the local statistical intensity bias and local intensity fitting to improve segmentation results. To validate the effectiveness of the proposed method, we conduct thorough experiments on synthetic and real images. Experimental results show that the proposed RBF model clearly outperforms the compared models.
{"title":"A Region-Bias Fitting Model based Level Set for Segmenting Images with Intensity Inhomogeneity","authors":"Hai Min, Wei Jia, Yang Zhao","doi":"10.1145/3285996.3286015","DOIUrl":"https://doi.org/10.1145/3285996.3286015","url":null,"abstract":"Intensity inhomogeneity is a common phenomenon in real world images, and inevitably leads to many difficulties for accurate image segmentation. This paper proposes a novel region-based model, named Region-Bias Fitting (RBF) model, for segmenting images with intensity inhomogeneity by introducing desirable constraint term based on region bias. Specially, we firstly propose a constraint term which includes both the intensity bias and distance information to constrain the local intensity variance of image. Then, the constraint term is utilized to construct the local bias constraint and determine the contribution of each local region so that the image intensity is fitted accurately. Finally, we use the level set method to construct the final energy functional. By using the novel constraint information, the proposed RBF model can accurately delineate the object boundary, which relies on the local statistical intensity bias and local intensity fitting to improve the segmentation results. In order to validate the effectiveness of the proposed method, we conduct thorough experiments on synthetic and real images. Experimental results show that the proposed RBF model clearly outperforms other models in comparison.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130413124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Contour-based Historical Building Image Matching
Gang Wu, Zhaohe Wang, Jialin Li, Z. Yu, Baiyou Qiao
With rapid urban development, buildings and their surrounding scenes at the same location have undergone large temporal and spatial changes. At present, the public generally lacks the technical means to learn about the protection of urban architecture, which leads to a lack of publicity and education on the subject and is a reason architectural heritage is gradually forgotten. Comparing images of historical buildings from different periods is therefore an effective means of raising public awareness and protection of urban history. In this paper, based on the typical characteristics of urban building images, a contour-based historical building image matching algorithm is proposed. We improve an edge detection algorithm with a new operator and use an automatic local-threshold adjustment strategy. Before matching, we aggregate short line segments to highlight image features and improve the matching rate. The algorithm can accurately match images from different historical periods, despite their differences, by effectively extracting and matching building contours. Experiments show that, compared with the baseline algorithms, the proposed algorithm is more sensitive to gradient changes in multiple directions and extracts detailed edges better.
{"title":"Contour-based Historical Building Image Matching","authors":"Gang Wu, Zhaohe Wang, Jialin Li, Z. Yu, Baiyou Qiao","doi":"10.1145/3285996.3286003","DOIUrl":"https://doi.org/10.1145/3285996.3286003","url":null,"abstract":"With the rapid development of the city, huge temporal and spatial changes have taken place in buildings and surrounding scenes at the same location. At present, people generally lack the technical means to understand the knowledge related to the protection of urban architecture, which leads to the lack of publicity and education of the relevant contents. This is the reason why the architectural heritage is gradually forgotten by the public. So it is an effective means to enhance public awareness and protection of urban history through the comparison of images of historical buildings in different periods. In this paper, based on the typical characteristics of urban buildings images, a contour-based historical building image matching algorithm is proposed. We improved edge detection algorithm with a new operator, meanwhile, used a local threshold automatic adjustment strategy. Before matching, we aggregated short lines which can be aggregated to highlight image features and improve the matching rate. The algorithm can accurately match the images of different historical periods with some differences by effectively extracting and matching the building contours. The experiments show that, compared with the comparison algorithm, our proposed algorithm is more sensitive to gradient changes in multiple directions, and has better effects in detail edge extraction.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128879645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Non-Rigid Point Set Registration via Gaussians Mixture Model with Local Constraints
Kai Yang, Xianhui Liu, Yufei Chen, Haotian Zhang, W. Zhao
In point set registration, local features of a point set are as important as global features. In this paper, we propose a non-rigid point set registration method based on a probability model with local constraints. First, a Gaussian mixture model (GMM) is used to determine the global relationship between the two point sets. Second, local constraints provided by the k nearest neighbor points help estimate the transformation better. Third, the transformation between the two point sets is computed in a reproducing kernel Hilbert space (RKHS). Finally, the expectation maximization (EM) algorithm is used for maximum likelihood estimation of the parameters. Comparative experiments on synthesized data show that our algorithm is more robust to distortions such as deformation, noise, and outliers. Our method is also applied to retinal image registration and obtains very good results.
{"title":"Non-Rigid Point Set Registration via Gaussians Mixture Model with Local Constraints","authors":"Kai Yang, Xianhui Liu, Yufei Chen, Haotian Zhang, W. Zhao","doi":"10.1145/3285996.3286011","DOIUrl":"https://doi.org/10.1145/3285996.3286011","url":null,"abstract":"The local feature of point set is as important as the global feature in the point set registration problem. In this paper, a non-rigid point set registration method based on probability model with local constraints was proposed. Firstly, Gaussian mixture model (GMM) is used to determine the global relationship between two point sets. Secondly, local constraints provided by k nearest neighbor points helps to estimate the transformation better. Thirdly, the transformation of two point sets is calculated in reproducing kernel Hilbert space (RKHS). Finally, expectation maximization (EM) algorithm is used for maximum likelihood estimation of parameters in our method. Comparative experiments on synthesized data show that our algorithm is more robust to distortion, such as deformation, noise and outlier. Our method is also applied to the retinal image registration and obtained very good results.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"681 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132953958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Classification of Lung Tissue with Cystic Fibrosis Lung Disease via Deep Convolutional Neural Networks
Xi Jiang, Hualei Shen
Quantitative classification of diseased regions in lung tissue from Computed Tomography (CT) scans is one of the key steps in evaluating the lesion severity of Cystic Fibrosis Lung Disease (CFLD). In this paper, we propose a deep Convolutional Neural Network (CNN) based framework for automatic classification of lung tissue with CFLD. The core of the framework is the integration of deep CNNs into the classification workflow. To train and validate the performance of the deep CNNs, we build datasets for inspiration CT scans and expiration CT scans, respectively. We employ transfer learning techniques to fine-tune the parameters of the deep CNNs. Specifically, we train ResNet-18 and ResNet-34 and validate their performance on the built datasets. Experimental results in terms of average precision and the receiver operating characteristic curve demonstrate the effectiveness of deep CNNs for classification of lung tissue with CFLD.
{"title":"Classification of Lung Tissue with Cystic Fibrosis Lung Disease via Deep Convolutional Neural Networks","authors":"Xi Jiang, Hualei Shen","doi":"10.1145/3285996.3286020","DOIUrl":"https://doi.org/10.1145/3285996.3286020","url":null,"abstract":"Quantitative classification of disease regions contained in lung tissues obtained from Computed Tomography (CT) scans is one of the key steps to evaluate lesion degrees of Cystic Fibrosis Lung Disease (CFLD). In this paper, we propose a deep Convolutional Neural Network-based (CNN) framework for automatic classification of lung tissues with CFLD. Core of the framework is the integration of deep CNNs into the classification workflow. To train and validate performance of deep CNNs, we build datasets for inspiration CT scans and expiration CT scans, respectively. We employ transfer learning techniques to fine tune parameters of deep CNNs. Specifically, we train Resnet-18 and Resnet-34 and validate the performance on the built datasets. Experimental results in terms of average precision and receiver operating characteristic curve demonstrate effectiveness of deep CNNs for classification of lung tissue with CFLD.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133741444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Sampling for GPU-based 3-D Volume Rendering
Chun-han Zhang, Hao Yin, Shanghua Xiao
3-D interactive volume rendering can be rather complicated with conventional ray casting using simple sampling and texture mapping. Owing to hardware resource limitations, volume rendering algorithms are considerably time-consuming, so an adaptive sampling technique is proposed to tackle the excessive computational cost. In this paper, as an optimization of parallelized ray-casting volume rendering, we propose an adaptive sampling method that reduces the number of sampling points through a non-linear sampling function. The method is effective while making a trade-off between performance and rendering quality. Our experimental results demonstrate that the proposed adaptive sampling method achieves high computational efficiency and produces high-quality images as measured by the MSE and SSIM metrics.
{"title":"Adaptive Sampling for GPU-based 3-D Volume Rendering","authors":"Chun-han Zhang, Hao Yin, Shanghua Xiao","doi":"10.1145/3285996.3286002","DOIUrl":"https://doi.org/10.1145/3285996.3286002","url":null,"abstract":"3-D interactive volume rendering can be rather complicated through conventional ray casting with simple sampling and texture mapping. Owing to the limitation of hardware resources, volume rendering algorithms are considerably time-consuming. Therefore, adaptive sampling technique is proposed to tackle the problem of excessive computational cost. In this paper, considering an optimization in parallelized ray-casting algorithms of volume rendering, we propose an adaptive sampling method, which mainly reduces the number of sampling points through non-linear sampling function. This new method can be rather effective while making a trade-off between performance and rendering quality. Our experimental results demonstrate that the proposed adaptive sampling method can result in high efficiency in computation, and produce high-quality image based on the MSE and SSIM metrics.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117145023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Image De-noising Method Based on Spatial Autocorrelation
Ronghui Lu, Tzong-Jer Chen
An adaptive image de-noising method based on spatial autocorrelation is proposed to effectively remove image noise while preserving structural information. A residual image is obtained by average filtering and subtracting the result from the original image; this high-pass residual should be a combination of boundaries and noise. The autocorrelation of each pixel is calculated on the residual image, and the image is then adaptively filtered based on the autocorrelation values. The results show that adaptive filtering of the Lena image is significantly better than global filtering. The method was also validated on a simulated Hoffman phantom PET image, with the same results. The proposed method will be further developed and applied to image de-noising and image quality improvement.
{"title":"Adaptive Image De-noising Method Based on Spatial Autocorrelation","authors":"Ronghui Lu, Tzong-Jer Chen","doi":"10.1145/3285996.3286023","DOIUrl":"https://doi.org/10.1145/3285996.3286023","url":null,"abstract":"An adaptive image de-noising method based on spatial autocorrelation is proposed to effectively remove image noise and preserve structural information. A residual image is obtained using average filtering and then subtracted from the original image. The high-pass residual image should be a combination of boundary and noise. The autocorrelation of each pixel is calculated on the residual image, and then the image is adaptively filtered based on the autocorrelation values. The results show that Lena adaptive filtering quality is significantly better than global image filtering. This method was also applied to a simulated Huffman phantom PET image for validation and the same results were obtained. The spatial autocorrelation is calculated on the high-pass residual image and then adaptive de-noising is performed. The proposed method will be further developed and applied to image de-noising and image quality improvement.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124765979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Degree Evaluation of Facial Nerve Paralysis by Combining LBP and Gabor Features
Fei Xie, Yuchen Ma, Zeting Pan, Xinmin Guo, Jun Liu, G. Gao
Context: Facial paralysis severely affects both the mental and physical health of patients. Most existing research evaluates the degree of facial paralysis by subjective judgment, even though the definition of facial paralysis is ambiguous; this results in low evaluation accuracy and even misdiagnosis. Objective: We propose a method for assessing the degree of facial paralysis that considers both static facial asymmetry and dynamic transformation factors. Method: The method compares differences between corresponding local areas on the two sides of the face, thereby effectively analyzing the asymmetry of an abnormal face. Quantitative assessment of facial asymmetry involves three steps: local facial area location, extraction of asymmetric features, and quantification of asymmetrical bilateral surfaces. We combine static and dynamic quantification to build facial palsy grading models that assess the extent of facial palsy. Results: We report an empirical study on 320 pictures of 40 patients. Although the test accuracy does not reach the ideal level, it exceeds 80%. Conclusion: On our facial paralysis database of 40 patients, the experiments show that the method is encouragingly effective.
{"title":"Degree Evaluation of Facial Nerve Paralysis by Combining LBP and Gabor Features","authors":"Fei Xie, Yuchen Ma, Zeting Pan, Xinmin Guo, Jun Liu, G. Gao","doi":"10.1145/3285996.3286028","DOIUrl":"https://doi.org/10.1145/3285996.3286028","url":null,"abstract":"Context: Facial paralysis affects both mental and physical health of patients severely. The most of existing researches are based on subjective judgments in evaluating the degree of facial paralysis, regardless that the definition of facial paralysis is ambiguous. This will result in low evaluation accuracy and even misdiagnosis. Objective: We propose a method of assessing the degree of facial paralysis considering static facial asymmetry and dynamic transformation factors. Method: This method compares the differences of the corresponding local areas on both sides of the face, thereby analyzes the asymmetry of the abnormal face effectively. Quantitative assessment of facial asymmetry concerns the following three steps: local facial area location, extraction of asymmetric features, and quantification of asymmetrical bilateral surfaces. We use a combination of static and dynamic quantification to generate facial palsy grading models to assess the extent of facial palsy. Results: We then report an empirical study on 320 pictures of 40 patients. Even the accuracy of the experimental tests do not achieve the ideal effect, it reaches more than 80%.Conclusion: Using our facial paralysis database of 40 patients, the experiment shows that our method gains encouraging effectiveness.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"256 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123959752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning
Dejun Shi, Yaling Pan, Chunlei Liu, Yao Wang, D. Cui, Yong Lu
Automatic localization and segmentation of vertebral bodies in CT volumes has many clinical uses, such as shape analysis. Variations in vertebra appearance, unknown fields of view, and pathologies pose several challenges for these tasks. Most previous studies targeted the whole vertebra, and their algorithms, though highly accurate, made high demands on hardware and took longer than is feasible in daily clinical practice. We developed a two-step algorithm that localizes and segments just the vertebral bodies by exploiting the intensity pattern along the front spinal region, with GPU acceleration using convolutional neural networks. First, we designed a 2D U-net variant to extract the front spinal region, from which vertebral centroids were localized using the M-method and a 3D region of interest (ROI) was generated for each vertebra. Second, we developed a 3D U-net with an inception module using dilated convolutions to segment the vertebral bodies within the 3D ROIs. We trained the two U-nets on 61 annotated CT volumes. Tested on three unseen CTs, our method achieved an identification rate of 92%, a detection error of 0.74 mm, and a Dice coefficient of 0.8 for 3D segmentation, using less than 10 seconds per case.
{"title":"Automatic Localization and Segmentation of Vertebral Bodies in 3D CT Volumes with Deep Learning","authors":"Dejun Shi, Yaling Pan, Chunlei Liu, Yao Wang, D. Cui, Yong Lu","doi":"10.1145/3285996.3286005","DOIUrl":"https://doi.org/10.1145/3285996.3286005","url":null,"abstract":"Automatic localization and segmentation of vertebral bodies in CT volumes bears many clinical utilities, such as shape analysis. Variation in the vertebra appearance, unknown field-of-views, and pathologies impose several challenges for these tasks. Most previous studies targeted the whole vertebra and their algorithms, though were of high accuracy, made high demand on hardware and took longer than feasible in daily clinical practice. We developed a two-step algorithm to localize and segment just vertebral bodies by taking the advantage of the intensity pattern along the front spinal region, as well as GPU accelerations using convolutional neural networks. First, we designed a 2D U-net variants to extract front spinal region, based on which the centroids of vertebra were localized using M-method and 3D region of interests were generated for each vertebra. Second, we developed a 3D U-net with inception module using dilated convolution to segment vertebral bodies in the 3D ROIs. We trained our two U-nets on 61 annotated CT volumes. Tested on three unseen CTs, our methods achieved an identification rate of 92% and detection error 0.74 mm and Dice coefficient of 0.8 for the 3D segmentation using less than 10 seconds per case.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"40 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126796897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Diagnosing of Infant Hip Dislocation Based on Neural Network
Xiang Yu, Dongyun Lin, Weiyao Lan, Bingan Zhong, Ping Lv
In this paper, we propose an automatic diagnosis method based on a neural network to detect infant hip joint dislocation from ultrasonic images. The proposed method consists of two procedures: pre-processing of the infant hip joint ultrasonic images and diagnosis via the neural network. Pre-processing focuses on extracting regions of interest from the ultrasound images. The extracted result is then fed to the trained neural network, whose output divides the infant hip into two categories: dislocation or non-dislocation. Experimental results show that our method reaches an overall accuracy of 97%, a specificity of 100%, and a sensitivity of 86%, which shows it is suitable for clinical detection of infant hip dislocation.
{"title":"Automatic Diagnosing of Infant Hip Dislocation Based on Neural Network","authors":"Xiang Yu, Dongyun Lin, Weiyao Lan, Bingan Zhong, Ping Lv","doi":"10.1145/3285996.3286021","DOIUrl":"https://doi.org/10.1145/3285996.3286021","url":null,"abstract":"In this paper, we propose an automatic diagnosismethod based on neural network to detect the infant hip joint dislocation from its ultrasonic images. The proposed method consists of two procedures including pre-processing of the infant hip joint ultrasonic images and diagnosing via neural network. Pre-processing focuses on extracting regions of interest from the ultrasound images. Then, the extracted result is fed to the trained neural network. Finally, the output of the neural network divides the infant hip into two categories, that is, dislocation or non-dislocation. Experimental results show that our method reaches an accuracy of 97% in total, 100% in specificity and 86% in sensitivitywhich proves that it is capable of clinical detection of infant hip dislocation.","PeriodicalId":287756,"journal":{"name":"International Symposium on Image Computing and Digital Medicine","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125977666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}