Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950700
A. Larroza, M. P. López-Lereu, J. Monmeneu, V. Bodí, D. Moratal
Detection of infarcted myocardium in the left ventricle is achieved with delayed enhancement magnetic resonance imaging (DE-MRI). However, manual segmentation is tedious and prone to variability. We studied three texture analysis methods (run-length matrix, co-occurrence matrix, and autoregressive model) in combination with histogram features to characterize the infarcted myocardium. We evaluated 10 patients with chronic infarction to select the most discriminative features and to train a support vector machine (SVM) classifier. The classifier model was then used to segment five human hearts from the STACOM DE-MRI challenge at MICCAI 2012. The Dice coefficient was used to compare the segmentation results with the ground truth available in the STACOM dataset. Segmentation using texture features provided good results with an overall Dice coefficient of 0.71 ± 0.12 (mean ± standard deviation).
{"title":"Texture analysis for infarcted myocardium detection on delayed enhancement MRI","authors":"A. Larroza, M. P. López-Lereu, J. Monmeneu, V. Bodí, D. Moratal","doi":"10.1109/ISBI.2017.7950700","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950700","url":null,"abstract":"Detection of infarcted myocardium in the left ventricle is achieved with delayed enhancement magnetic resonance imaging (DE-MRI). However, manual segmentation is tedious and prone to variability. We studied three texture analysis methods (run-length matrix, co-occurrence matrix, and autoregressive model) in combination with histogram features to characterize the infarcted myocardium. We evaluated 10 patients with chronic infarction to select the most discriminative features and to train a support vector machine (SVM) classifier. The classifier model was then used to segment five human hearts from the STACOM DE-MRI challenge at MICCAI 2012. The Dice coefficient was used to compare the segmentation results with the ground truth available in the STACOM dataset. Segmentation using texture features provided good results with an overall Dice coefficient of 0.71 ± 0.12 (mean ± standard deviation).","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"42 1","pages":"1066-1069"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85462791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950601
S. V. D. Voort, R. Gahrmann, M. Bent, A. Vincent, W. Niessen, M. Smits, S. Klein
1p/19q co-deletion is an important prognostic factor in low-grade gliomas. However, determining the 1p/19q status currently requires a biopsy. To overcome this, we investigate radiogenomic classification using support vector machines to non-invasively predict the 1p/19q status from multimodal MRI data. Two approaches to predicting this status were compared: a direct approach, which predicts the 1p/19q co-deletion status itself, and an indirect approach, which predicts the mutation status of 1p and 19q individually and combines these predictions into a 1p/19q co-deletion prediction. The indirect approach based on both the T1-weighted and T2-weighted images delivered the best result, with 95% confidence intervals of [0.44; 0.89] for sensitivity and [0.70; 1.00] for specificity.
{"title":"Radiogenomic classification of the 1p/19q status in presumed low-grade gliomas","authors":"S. V. D. Voort, R. Gahrmann, M. Bent, A. Vincent, W. Niessen, M. Smits, S. Klein","doi":"10.1109/ISBI.2017.7950601","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950601","url":null,"abstract":"1p/19q co-deletion is an important prognostic factor in low grade gliomas. However, determination of the 1p/19q status currently requires a biopsy. To overcome this, we investigate a radiogenomic classification using support vector machines to non-invasively predict the 1p/19q status from multimodal MRI data. Different approaches of predicting this status were compared: a direct approach which predicts the 1p/19q co-deletion status and an indirect approach which predicts the mutation status of 1p and 19q individually and combines these predictions to predict the 1p/19q co-deletion status. Using the indirect approach based on both the T1-weighted and T2-weighted images delivered the best result and resulted in a 95% confidence interval for the sensitivity and specificity of [0.44; 0.89] and [0.70; 1.00] respectively.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"159 1","pages":"638-641"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80614396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950604
F. Yellin, B. Haeffele, R. Vidal
We propose a convolutional sparse dictionary learning and coding approach for detecting and counting instances of a repeated object in a holographic lens-free image. The proposed approach exploits the fact that an image containing a single object instance can be approximated as the convolution of a (small) object template with a spike at the location of the object instance. Therefore, an image containing multiple non-overlapping instances of an object can be approximated as the sum of convolutions of templates with spikes. Given one or more images, one can learn a dictionary of templates using a convolutional extension of the K-SVD algorithm for sparse dictionary learning. Given a set of templates, one can efficiently detect object instances in a new image using a convolutional extension of the matching pursuit algorithm for sparse coding. Experiments on red blood cell (RBC) and white blood cell (WBC) detection and counting demonstrate that the proposed method produces promising results without requiring additional post-processing.
{"title":"Blood cell detection and counting in holographic lens-free imaging by convolutional sparse dictionary learning and coding","authors":"F. Yellin, B. Haeffele, R. Vidal","doi":"10.1109/ISBI.2017.7950604","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950604","url":null,"abstract":"We propose a convolutional sparse dictionary learning and coding approach for detecting and counting instances of a repeated object in a holographic lens-free image. The proposed approach exploits the fact that an image containing a single object instance can be approximated as the convolution of a (small) object template with a spike at the location of the object instance. Therefore, an image containing multiple non-overlapping instances of an object can be approximated as the sum of convolutions of templates with spikes. Given one or more images, one can learn a dictionary of templates using a convolutional extension of the K-SVD algorithm for sparse dictionary learning. Given a set of templates, one can efficiently detect object instances in a new image using a convolutional extension of the matching pursuit algorithm for sparse coding. Experiments on red blood cell (RBC) and white blood cell (WBC) detection and counting demonstrate that the proposed method produces promising results without requiring additional post-processing.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"20 1","pages":"650-653"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81901264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950460
S. Gazagnes, Emmanuel Soubies, L. Blanc-Féraud
Single-molecule localization microscopy has greatly improved spatial resolution, achieving performance beyond the diffraction limit by sequentially activating and imaging small subsets of molecules. Here, we present an algorithm designed for high-density molecule localization, which is of major importance for improving the temporal resolution of such microscopy techniques. We formulate the localization problem as a sparse approximation problem, which is then relaxed using the recently proposed CEL0 penalty, allowing optimization with recent nonsmooth nonconvex algorithms. Finally, the performance of the proposed method is compared with that of one of the best current methods for high-density molecule localization on simulated and real data.
{"title":"High density molecule localization for super-resolution microscopy using CEL0 based sparse approximation","authors":"S. Gazagnes, Emmanuel Soubies, L. Blanc-Féraud","doi":"10.1109/ISBI.2017.7950460","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950460","url":null,"abstract":"Single molecule localization microscopy has made great improvements in spatial resolution achieving performance beyond the diffraction limit by sequentially activating and imaging small subsets of molecules. Here, we present an algorithm designed for high-density molecule localization which is of a major importance in order to improve the temporal resolution of such microscopy techniques. We formulate the localization problem as a sparse approximation problem which is then relaxed using the recently proposed CEL0 penalty, allowing an optimization through recent nonsmooth nonconvex algorithms. Finally, performances of the proposed method are compared with one of the best current method for high-density molecules localization on simulated and real data.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"15 1","pages":"28-31"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74371570","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950597
Sheng Wang, Ashwin Raju, Junzhou Huang
Automatic recognition of surgical workflow is an unresolved problem in the computer-assisted interventions community. Among the features used for surgical workflow recognition, one important feature is the presence of surgical tools. Extracting this feature leads to the surgical tool presence detection problem: detecting which tools are in use at each moment of a surgery. This paper proposes a deep-learning-based multi-label classification method for surgical tool presence detection in laparoscopic videos. The proposed method combines two state-of-the-art deep neural networks and uses ensemble learning to solve tool presence detection as a multi-label classification problem. The performance of the proposed method was evaluated in the surgical tool presence detection challenge held at the Modeling and Monitoring of Computer Assisted Interventions workshop. The proposed method outperforms the other entries and won first place in the challenge.
{"title":"Deep learning based multi-label classification for surgical tool presence detection in laparoscopic videos","authors":"Sheng Wang, Ashwin Raju, Junzhou Huang","doi":"10.1109/ISBI.2017.7950597","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950597","url":null,"abstract":"Automatic recognition of surgical workflow is an unresolved problem among the community of computer-assisted interventions. Among all the features used for surgical workflow recognition, one important feature is the presence of the surgical tools. Extracting this feature leads to the surgical tool presence detection problem to detect what tools are used at each time in surgery. This paper proposes a deep learning based multi-label classification method for surgical tool presence detection in laparoscopic videos. The proposed method combines two state-of-the-art deep neural networks and uses ensemble learning to solve the tool presence detection problem as a multi-label classification problem. The performance of the proposed method has been evaluated in the surgical tool presence detection challenge held by Modeling and Monitoring of Computer Assisted Interventions workshop. The proposed method shows superior performance compared to other methods and has won the first place of the challenge.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"12 1","pages":"620-623"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75260010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950513
J. Robic, B. Perret, A. Nkengne, M. Couprie, Hugues Talbot
Reflectance confocal microscopy (RCM) is a powerful tool for visualizing the skin layers at cellular resolution. The dermal-epidermal junction (DEJ) is a thin, complex 3D structure. It appears as a low-contrast structure in confocal en-face sections that is difficult to recognize visually, leading to uncertainty in the classification. In this article, we propose an automated method for segmenting the DEJ with reduced uncertainty. The proposed approach relies on a 3D conditional random field to model the biological properties of the skin and to impose regularization constraints. We improve the recovery of the epidermal and dermal labels while reducing the thickness of the uncertainty area, in a biologically coherent way, from 16.9 µm (ground truth) to 10.3 µm.
{"title":"Classification of the dermal-epidermal junction using in-vivo confocal microscopy","authors":"J. Robic, B. Perret, A. Nkengne, M. Couprie, Hugues Talbot","doi":"10.1109/ISBI.2017.7950513","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950513","url":null,"abstract":"Reflectance confocal microscopy (RCM) is a powerful tool to visualize the skin layers at cellular resolution. The dermal-epidermal junction (DEJ) is a thin complex 3D structure. It appears as a low-contrasted structure in confocal en-face sections, which is difficult to recognize visually, leading to uncertainty in the classification. In this article, we propose an automated method for segmenting the DEJ with reduced uncertainty. The proposed approach relies on a 3D Conditional Random Field to model the skin biological properties and impose regularization constraints. We improve the restitution of the epidermal and dermal labels while reducing the thickness of the uncertainty area in a coherent biological way from 16.9 µm (ground-truth) to 10.3 µm.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"10 1","pages":"252-255"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81844168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950729
G. Ralli, D. McGowan, M. Chappell, Ricky A. Sharma, G. Higgins, J. Fenwick
4D-PET reconstruction has the potential to significantly increase the signal-to-noise ratio in dynamic PET by fitting smooth temporal functions during the reconstruction. However, the optimal choice of temporal function remains an open question. A 4D-PET reconstruction algorithm using adaptive-knot cubic B-splines is proposed. Using realistic Monte Carlo simulated data from a digital patient phantom representing an [18-F]-FMISO-PET scan of a non-small cell lung cancer patient, this method was compared with a spectral-model-based 4D-PET reconstruction and the conventional MLEM and MAP algorithms. Within the entire patient region, the proposed algorithm produced the best bias-noise trade-off, while within the tumor region the spline-based and spectral-model-based reconstructions gave comparable results.
{"title":"4D-PET reconstruction of dynamic non-small cell lung cancer [18-F]-FMISO-PET data using adaptive-knot cubic B-splines","authors":"G. Ralli, D. McGowan, M. Chappell, Ricky A. Sharma, G. Higgins, J. Fenwick","doi":"10.1109/ISBI.2017.7950729","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950729","url":null,"abstract":"4D-PET reconstruction has the potential to significantly increase the signal-to-noise ratio in dynamic PET by fitting smooth temporal functions during the reconstruction. However, the optimal choice of temporal function remains an open question. A 4D-PET reconstruction algorithm using adaptive-knot cubic B-splines is proposed. Using realistic Monte-Carlo simulated data from a digital patient phantom representing an [18-F]-FMISO-PET scan of a non-small cell lung cancer patient, this method was compared to a spectral model based 4D-PET reconstruction and the conventional MLEM and MAP algorithms. Within the entire patient region the proposed algorithm produced the best bias-noise trade-off, while within the tumor region the spline- and spectral model-based reconstructions gave comparable results.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"62 1","pages":"1189-1192"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89953493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-18 | DOI: 10.1109/ISBI.2017.7950698
A. Sekuboyina, S. T. Devarakonda, C. Seelamantula
In wireless capsule endoscopy (WCE), a swallowable miniature optical endoscope is used to transmit color images of the gastrointestinal tract. However, the number of images transmitted is large, and reviewing the scan takes a significant amount of the medical expert's time. In this paper, we propose a technique to automate abnormality detection in WCE images. We split each image into several patches and extract features from each patch using a convolutional neural network (CNN), increasing their generality while overcoming the drawbacks of manually crafted features. We also exploit the importance of color information for the task. Experiments are performed to determine the optimal color-space components for feature extraction and classifier design. We obtained an area under the receiver-operating-characteristic (ROC) curve of approximately 0.8 on a dataset containing multiple abnormalities.
{"title":"A convolutional neural network approach for abnormality detection in Wireless Capsule Endoscopy","authors":"A. Sekuboyina, S. T. Devarakonda, C. Seelamantula","doi":"10.1109/ISBI.2017.7950698","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950698","url":null,"abstract":"In wireless capsule endoscopy (WCE), a swallowable miniature optical endoscope is used to transmit color images of the gastrointestinal tract. However, the number of images transmitted is large, taking a significant amount of the medical expert's time to review the scan. In this paper, we propose a technique to automate the abnormality detection in WCE images. We split the image into several patches and extract features pertaining to each block using a convolutional neural network (CNN) to increase their generality while overcoming the drawbacks of manually crafted features. We intend to exploit the importance of color information for the task. Experiments are performed to determine the optimal color space components for feature extraction and classifier design. We obtained an area under receiver-operating-characteristic (ROC) curve of approximately 0.8 on a dataset containing multiple abnormalities.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"1 1","pages":"1057-1060"},"PeriodicalIF":0.0,"publicationDate":"2017-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82933650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950509
Xiaochen Zhang, J. Wan
Streaking artifacts caused by metallic objects severely degrade the visual quality of CT images and can lead to misdiagnosis. Commonly used approaches for metal artifact reduction fall into interpolation and iterative methods. The former tend to lose image quality by introducing extra artifacts, while the latter are more computationally expensive. This paper proposes a new approach based on Euler's elastica inpainting, which preserves sharp edges and curvature when reconstructing the sinogram, resulting in better quality in the restored CT image. Quantitative and qualitative experiments on both simulated phantoms and clinical CT images demonstrate that our method suppresses metal artifacts significantly.
{"title":"Image restoration of medical images with streaking artifacts by Euler's elastica inpainting","authors":"Xiaochen Zhang, J. Wan","doi":"10.1109/ISBI.2017.7950509","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950509","url":null,"abstract":"Streaking artifacts caused by metallic objects severely affect the visual quality of CT images, resulting in medical misdiagnosis. Commonly used approaches for metal artifact reduction usually consist of interpolation and iterative methods. The former one tends to lose image quality by introducing extra artifacts, while the latter is more computational expensive. This paper proposes a new approach based on the Euler's elastica inpainting technique, which can preserve sharp edges and curvature when reconstructing the sinogram image, resulting in better quality in the restored CT image. Results of quantitative and qualitative experiments on both simulated phantoms and clinical CT images demonstrate that our method can suppress metal artifacts significantly.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"62 1","pages":"235-239"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74125247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2017-04-01 | DOI: 10.1109/ISBI.2017.7950505
D. Schmitter, M. Unser
We propose a new formulation of the active surface model in 3D. Instead of aligning a shape dictionary through the similarity transform, we consider more flexible affine transformations and introduce an alignment method that is unbiased in the sense that it implicitly constructs a common reference shape. Our formulation is expressed in the continuous domain, and we provide an algorithm that exactly implements the framework using spline-based parametric surfaces. We test our model on real 3D MRI data. A comparison with the classical active shape model shows that our method captures the shape variability in a dictionary more precisely.
{"title":"Closed-form alignment of active surface models using splines","authors":"D. Schmitter, M. Unser","doi":"10.1109/ISBI.2017.7950505","DOIUrl":"https://doi.org/10.1109/ISBI.2017.7950505","url":null,"abstract":"We propose a new formulation of the active surface model in 3D. Instead of aligning a shape dictionary through the similarity transform, we consider more flexible affine transformations and introduce an alignment method that is unbiased in the sense that it implicitly constructs a common reference shape. Our formulation is expressed in the continuous domain and we provide an algorithm to exactly implement the framework using spline-based parametric surfaces. We test our model on real 3D MRI data. A comparison with the classical active shape model shows that our method allows us to capture shape variability in a dictionary in a more precise way.","PeriodicalId":6547,"journal":{"name":"2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017)","volume":"42 1","pages":"219-222"},"PeriodicalIF":0.0,"publicationDate":"2017-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75724074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}