Using fractal interpolation over complex network modeling of deep texture representation
J. Florindo, O. Bruno
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784138
Convolutional neural networks have been a fundamental model in computer vision in recent years. Nevertheless, specifically in the analysis of texture images, using such a model as a feature extractor, rather than training it from scratch or extensively fine-tuning it, has proven more effective. In this scenario, such deep features can also benefit from further advanced analysis that provides a more meaningful representation than the direct use of feature maps. A successful example of such a procedure is the recent use of visibility graphs to analyze deep features in texture recognition. It has been found that models based on complex networks can quantify properties such as periodicity, randomness and chaoticity, all of which have proven useful in texture classification. Inspired by this context, here we propose an alternative complex-network-based modeling to leverage the effectiveness of deep texture features. More specifically, we employ recurrence matrices of the neural activations at the penultimate layer. Moreover, the importance of complexity attributes, such as chaoticity and fractality, also leads us to associate the complex networks with a fractal technique. More precisely, we complement the complex network representation with fractal interpolation applied over the degree distribution of the recurrence matrix. The final descriptors are employed for texture classification and the results are compared, in terms of accuracy, with classical and state-of-the-art approaches. The achieved results are competitive and pave the way for future analysis of how such complexity measures can be useful in deep learning-based texture recognition.
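The core construction can be sketched compactly: threshold pairwise distances between penultimate-layer activations to obtain a binary recurrence matrix, read it as the adjacency matrix of a complex network, and take its degree distribution as the descriptor input (the paper then applies fractal interpolation over that distribution, which is omitted here). The epsilon threshold and histogram binning below are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

def recurrence_network_degrees(activations, eps=0.1):
    """Build a binary recurrence matrix from a 1-D activation vector and
    return the degree sequence of the induced complex network.

    A pair (i, j) is recurrent when |a_i - a_j| < eps; the recurrence
    matrix is then read as a network adjacency matrix.
    """
    a = np.asarray(activations, dtype=float)
    dist = np.abs(a[:, None] - a[None, :])   # pairwise activation distances
    adjacency = (dist < eps).astype(int)
    np.fill_diagonal(adjacency, 0)           # no self-loops
    return adjacency.sum(axis=1)             # node degrees

# Degree distribution that would feed the fractal interpolation step
degrees = recurrence_network_degrees(np.random.rand(512))
hist, _ = np.histogram(degrees, bins=16, density=True)
```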
{"title":"Using fractal interpolation over complex network modeling of deep texture representation","authors":"J. Florindo, O. Bruno","doi":"10.1109/IPTA54936.2022.9784138","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784138","url":null,"abstract":"Convolutional neural networks have been a funda-mental model in computer vision in the last years. Nevertheless, specifically in the analysis of texture images, the use of that model as a feature extractor rather than trained from scratch or extensively fine tuned has demonstrated to be more effective. In this scenario, such deep features can also benefit from further advanced analysis that can provide more meaningful representation than the direct use of feature maps. A successful example of such procedure is the recent use of visibility graphs to analyze deep features in texture recognition. It has been found that models based on complex networks can quantify properties such as periodicity, randomness and chaoticity. All those features demonstrated usefulness in texture classification. Inspired by this context, here we propose an alternative modeling based on complex networks to leverage the effectiveness of deep texture features. More specifically, we employ recurrence matrices of the neural activation at the penultimate layer. Moreover, the importance of complexity attributes, such as chaoticity and fractality, also instigates us to associate the complex networks with a fractal technique. More precisely, we complement the complex network representation with the application of fractal interpolation over the degree distribution of the recurrence matrix. The final descriptors are employed for texture classification and the results are compared, in terms of accuracy, with classical and state-of-the-art approaches. The achieved results are competitive and pave the way for future analysis on how such complexity measures can be useful in deep learning-based texture recognition.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121882883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Impact of Pooling Methods on Image Quality Metrics
David Norman Díaz Estrada, Marius Pedersen
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784142
Image quality assessment using objective metrics is becoming more widespread, and an impressive number of image quality metrics have been proposed in the literature. An aspect that has received little attention compared to the design of these metrics is pooling: the quality values, usually computed for every pixel, are reduced to fewer numbers, often a single value that represents overall quality. In this paper we investigate the impact of different pooling techniques on the performance of image quality metrics. We have tested different pooling methods with the SSIM and S-CIELAB image quality metrics on the CID:IQ database, and found that the pooling technique has a significant impact on their performance.
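Since the abstract defines pooling as reducing a per-pixel quality map to a single value, a minimal sketch of several common pooling rules may help; the specific variants the authors compared are not listed in the abstract, so the choices below (mean, median, percentile, Minkowski) are assumptions.

```python
import numpy as np

def pool_quality_map(q_map, method="mean", p=10):
    """Collapse a per-pixel quality map (e.g., an SSIM map) to a scalar."""
    q = np.ravel(np.asarray(q_map, dtype=float))
    if method == "mean":
        return q.mean()
    if method == "median":
        return np.median(q)
    if method == "percentile":            # emphasizes the worst regions
        return np.percentile(q, p)
    if method == "minkowski":             # Minkowski pooling, exponent 2
        return float(np.sqrt(np.mean(q ** 2)))
    raise ValueError(f"unknown pooling method: {method}")

ssim_map = np.random.rand(64, 64)         # stand-in for a real SSIM map
scores = {m: pool_quality_map(ssim_map, m)
          for m in ("mean", "median", "percentile", "minkowski")}
```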
{"title":"Impact of Pooling Methods on Image Quality Metrics","authors":"David Norman Díaz Estrada, Marius Pedersen","doi":"10.1109/IPTA54936.2022.9784142","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784142","url":null,"abstract":"Image quality assessment using objective metrics is becoming more widespread, and an impressive number of image quality metrics have been proposed in the literature. An aspect that has received little attention compared to the design of these metrics is pooling. In pooling the quality values, usually from every pixel is reduced to fewer numbers, usually a single value, that represents overall quality. In this paper we investigate the impact of different pooling techniques on the performance of image quality metrics. We have tested different pooling methods with the SSIM and S-CIELAB image quality metrics in the CID:IQ database, and found that the pooling technique has a significant impact on their performance.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133952756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of automatically generated embedding guides for cell classification
Philipp Gräbel, Julian Thull, M. Crysandt, B. Klinkhammer, P. Boor, T. Brümmendorf, D. Merhof
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784119
Automated cell classification in human bone marrow microscopy images could lead to faster acquisition and, therefore, to a considerably larger number of cells for statistical cell count analysis. As a basis for the diagnosis of hematopoietic diseases such as leukemia, this would be a significant improvement to clinical workflows. The classification of such cells, however, is challenging, partially due to dependencies between different cell types. In 2021, guided representation learning was introduced as an approach to include this domain knowledge in training by providing “embedding guides” as an optimization target for individual cell types. In this work, we propose improvements to guided representation learning by automatically generating guides based on graph optimization algorithms. We incorporate information about visual similarity and the diagnostic impact of misclassifications. We show that this reduces critical false predictions and improves the overall classification F-score by up to 2.5 percentage points.
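As a rough illustration of what automatic guide generation could look like, the toy sketch below places one unit-norm guide vector per class so that class pairs with a high misclassification cost (visual similarity, diagnostic impact) end up far apart. The paper uses graph optimization algorithms; this gradient-ascent stand-in and the `cost` matrix encoding are assumptions.

```python
import numpy as np

def generate_guides(cost, dim=8, iters=500, lr=0.1, seed=0):
    """Toy guide generation: one unit vector per class, pushed so that
    pairs with a high confusion cost end up far apart on the sphere.

    cost[i, j] encodes how harmful confusing class i with class j is.
    """
    rng = np.random.default_rng(seed)
    n = cost.shape[0]
    g = rng.normal(size=(n, dim))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    for _ in range(iters):
        diff = g[:, None, :] - g[None, :, :]          # (n, n, dim)
        grad = (cost[:, :, None] * diff).sum(axis=1)  # push costly pairs apart
        g += lr * grad
        g /= np.linalg.norm(g, axis=1, keepdims=True) # stay on unit sphere
    return g
```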
{"title":"Analysis of automatically generated embedding guides for cell classification","authors":"Philipp Gräbel, Julian Thull, M. Crysandt, B. Klinkhammer, P. Boor, T. Brümmendorf, D. Merhof","doi":"10.1109/IPTA54936.2022.9784119","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784119","url":null,"abstract":"Automated cell classification in human bone marrow microscopy images could lead to faster acquisition and, therefore, to a considerably larger number of cells for the statistical cell count analysis. As basis for the diagnosis of hematopoietic dis-eases such as leukemia, this would be a significant improvement of clinical workflows. The classification of such cells, however, is challenging, partially due to dependencies between different cell types. In 2021, guided representation learning was introduced as an approach to include this domain knowledge into training by providing “embedding guides” as an optimization target for individual cell types. In this work, we propose improvements to guided repre-sentation learning by automatically generating guides based on graph optimization algorithms. We incorporate information about the visual similarity and the impact on diagnosis of mis-classifications. We show that this reduces critical false predictions and improves the overall classification F-score by up to 2.5 percentage points.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"236 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115827854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of GWO-SVM and Random Forest Classifiers in a LevelSet based approach for Bladder wall segmentation and characterisation using MR images
Rania Trigui, M. Adel, M. D. Bisceglie, J. Wojak, Jessica Pinol, Alice Faure, K. Chaumoitre
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784127
To characterize the state and functioning of the bladder, its wall must be accurately segmented in MR images. In this context, we propose a computer-aided diagnosis system based on segmentation and classification of the Bladder Wall (BW), as part of a study of spina bifida. The proposed system starts with BW extraction using an improved level-set-based algorithm. An optimized classification is then performed using a set of selected features. The obtained results prove the efficiency of the proposed system, which can significantly help radiologists by avoiding tedious manual segmentation and providing a precise indication of spina bifida severity.
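A minimal sketch of the two-stage pipeline, assuming scikit-image's morphological_chan_vese as a stand-in for the paper's improved level-set algorithm and an SVM whose hyperparameters would be tuned by the Grey Wolf Optimizer (GWO) per the title; the iteration count, feature set, and SVM settings are placeholders.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese
from sklearn.svm import SVC

def bladder_wall_features(image, mask):
    """Placeholder feature vector: intensity and shape statistics of the
    segmented region (the paper's selected features are not specified
    in the abstract)."""
    region = image[mask.astype(bool)]
    return [region.mean(), region.std(), float(mask.sum())]

def segment_and_classify(mr_slice, clf):
    """Segment the bladder wall with a morphological level set, then
    classify it from handcrafted features."""
    mask = morphological_chan_vese(mr_slice, 100)  # 100 iterations
    return mask, clf.predict([bladder_wall_features(mr_slice, mask)])[0]

# GWO would tune C and gamma; fixed values are shown as placeholders.
clf = SVC(C=10.0, gamma=0.01)
```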
{"title":"Comparison of GWO-SVM and Random Forest Classifiers in a LevelSet based approach for Bladder wall segmentation and characterisation using MR images","authors":"Rania Trigui, M. Adel, M. D. Bisceglie, J. Wojak, Jessica Pinol, Alice Faure, K. Chaumoitre","doi":"10.1109/IPTA54936.2022.9784127","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784127","url":null,"abstract":"In order to characterize the bladder state and functioning, it is necessary to succeed the segmentation of its wall in MR images. In this context, we propose a computer-aided diagnosis system based on segmentation and classification applied to the Bladder Wall (BW), as a part of spina bifida disease study. The proposed system starts with the BW extraction using an improved levelSet based algorithm. Then an optimized classification is proposed using some selected features. Obtained results proves the efficiency of the proposed system, which can be significantly helpful for radiologist avoiding the fastidious manual segmentation and providing a precise idea about the spina bifida severity","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123525734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ARIN: Adaptive Resampling and Instance Normalization for Robust Blind Inpainting of Dunhuang Cave Paintings
Alexander Schmidt, Prathmesh Madhu, A. Maier, V. Christlein, Ronak Kosti
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784144
Image enhancement algorithms are very useful for real-world computer vision tasks, where image resolution is often physically limited by the sensor size. While state-of-the-art deep neural networks show impressive results for image enhancement, they often struggle to enhance real-world images. In this work, we tackle a real-world setting: inpainting of images from the Dunhuang caves. The Dunhuang dataset consists of murals, half of which suffer from corrosion and aging. These murals feature a range of rich content, such as Buddha statues, bodhisattvas, sponsors, architecture, dance, music, and decorative patterns designed by different artists spanning ten centuries, which makes manual restoration challenging. We modify two existing methods (CAR, HINet) that are based upon state-of-the-art (SOTA) super-resolution and deblurring networks, and show that they can successfully inpaint and enhance these deteriorated cave paintings. We further show that a novel combination of CAR and HINet, resulting in our proposed inpainting network (ARIN), is very robust to external noise, especially Gaussian noise. To this end, we present a quantitative and qualitative comparison of our proposed approach with existing SOTA networks and the winners of the Dunhuang challenge. One of the proposed methods (HINet) sets a new state of the art and outperforms the first place of the Dunhuang Challenge, while our noise-robust combination ARIN is comparable to the first place. We also present and discuss qualitative results showing the impact of our inpainting method on Dunhuang cave images.
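The robustness claim suggests an evaluation of the form below: corrupt inputs with Gaussian noise of increasing strength and compare restoration quality, e.g. by PSNR. The noise levels and the PSNR-based protocol are assumptions illustrating the idea, not the paper's exact setup.

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, seed=0):
    """Corrupt a float image in [0, 1] with additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    return np.clip(img + rng.normal(scale=sigma, size=img.shape), 0.0, 1.0)

def psnr(ref, test):
    """Peak signal-to-noise ratio for [0, 1] images."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(1.0 / mse)

clean = np.random.rand(128, 128, 3)       # stand-in for a mural crop
for sigma in (0.01, 0.05, 0.1):
    noisy = add_gaussian_noise(clean, sigma)
    # an inpainting model would be applied to `noisy` here
    print(f"sigma={sigma}: input PSNR {psnr(clean, noisy):.1f} dB")
```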
{"title":"ARIN: Adaptive Resampling and Instance Normalization for Robust Blind Inpainting of Dunhuang Cave Paintings","authors":"Alexander Schmidt, Prathmesh Madhu, A. Maier, V. Christlein, Ronak Kosti","doi":"10.1109/IPTA54936.2022.9784144","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784144","url":null,"abstract":"Image enhancement algorithms are very useful for real world computer vision tasks where image resolution is often physically limited by the sensor size. While state-of-the-art deep neural networks show impressive results for image enhancement, they often struggle to enhance real-world images. In this work, we tackle a real-world setting: inpainting of images from Dunhuang caves. The Dunhuang dataset consists of murals, half of which suffer from corrosion and aging. These murals feature a range of rich content, such as Buddha statues, bodhisattvas, sponsors, architecture, dance, music, and decorative patterns designed by different artists spanning ten centuries, which makes manual restoration challenging. We modify two different existing methods (CAR, HINet) that are based upon state-of-the-art (SOTA) super resolution and deblurring networks. We show that those can successfully inpaint and enhance these deteriorated cave paintings. We further show that a novel combination of CAR and HINet, resulting in our proposed inpainting network (ARIN), is very robust to external noise, especially Gaussian noise. To this end, we present a quantitative and qualitative comparison of our proposed approach with existing SOTA networks and winners of the Dunhuang challenge. One of the proposed methods (HINet) represents the new state of the art and outperforms the 1st place of the Dunhuang Challenge, while our combination ARIN, which is robust to noise, is comparable to the 1st place. We also present and discuss qualitative results showing the impact of our method for inpainting on Dunhuang cave images.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115528706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AAEGAN Optimization by Purposeful Noise Injection for the Generation of Bright-Field Brain Organoid Images
C. B. Martin, Camille Simon Chane, C. Clouchoux, A. Histace
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784149
Brain organoids are three-dimensional tissues generated in vitro from pluripotent stem cells that replicate the early development of the human brain. Implementing, testing and comparing automated machine learning methods to follow their growth in microscopy images requires a large dataset with a trusted ground truth, which is not always available. Recently, optimized generative adversarial networks have proven able to generate similar object content, but not a background specific to the real acquisition modality. In this work, a small database of brain organoid bright-field images, characterized by a shot-noise background, is extended using the already validated AAEGAN architecture, with specific noise or a noise mixture injected into the generator. We hypothesize that this noise injection could help generate a homogeneous, realistic bright-field background. To validate the generated images, we compute metrics and apply dimensionality reduction to features of the original and generated images. Our results suggest that noise injection can modulate the generated image backgrounds to produce content closer to what is observed in microscopic reality. Validation of these images by biological experts could augment the original dataset and enable its analysis by deep learning-based solutions.
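The noise-injection mechanism can be sketched as a hook on intermediate generator features; "shot" noise matches the bright-field background the abstract describes. Function names, noise strengths, and the Poisson approximation below are assumptions, not the AAEGAN implementation.

```python
import torch

def inject_noise(features, kind="gaussian", strength=0.1):
    """Inject noise into intermediate generator features -- the
    mechanism hypothesized to yield a more homogeneous bright-field
    background in the generated organoid images."""
    if kind == "gaussian":
        return features + strength * torch.randn_like(features)
    if kind == "shot":
        # approximate shot (Poisson) noise on non-negative activations
        rates = torch.clamp(features, min=0.0) / strength
        return torch.poisson(rates) * strength
    if kind == "mixture":
        return inject_noise(inject_noise(features, "gaussian", strength),
                            "shot", strength)
    raise ValueError(f"unknown noise kind: {kind}")

feats = torch.rand(1, 64, 32, 32)   # toy generator feature map
noisy_feats = inject_noise(feats, kind="mixture")
```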
{"title":"AAEGAN Optimization by Purposeful Noise Injection for the Generation of Bright-Field Brain Organoid Images","authors":"C. B. Martin, Camille Simon Chane, C. Clouchoux, A. Histace","doi":"10.1109/IPTA54936.2022.9784149","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784149","url":null,"abstract":"Brain organoids are three-dimensional tissues gener-ated in vitro from pluripotent stem cells and replicating the early development of Human brain. To implement, test and compare methods to follow their growth on microscopic images, a large dataset not always available is required with a trusted ground truth when developing automated Machine Learning solutions. Recently, optimized Generative Adversarial Networks prove to generate only a similar object content but not a background specific to the real acquisition modality. In this work, a small database of brain organoid bright field images, characterized by a shot noise background, is extended using the already validated AAEGAN architecture, and specific noise or a mixture noise injected in the generator. We hypothesize this noise injection could help to generate an homogeneous and similar bright-field background. To validate or invalidate our generated images we use metric calculation, and a dimensional reduction on features on original and generated images. Our result suggest that noise injection can modulate the generated image backgrounds in order to produce a more similar content as produced in the microscopic reality. A validation of these images by biological experts could augment the original dataset and allow their analysis by Deep-based solutions.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114394383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One-Shot Object Detection in Heterogeneous Artwork Datasets
Prathmesh Madhu, Anna Meyer, Mathias Zinnen, Lara Mührenberg, Dirk Suckow, Torsten Bendschus, Corinna Reinhardt, Peter Bell, Ute Verstegen, Ronak Kosti, A. Maier, V. Christlein
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784141
Christian archeologists face many challenges in understanding visual narration through artwork images. This understanding is essential to access underlying semantic information. Therefore, narrative elements (objects) need to be labeled, compared, and contextualized by experts, which takes an enormous amount of time and effort. Our work aims to reduce labeling costs by using one-shot object detection to generate a labeled database from unannotated images. Novel object categories can be defined broadly and annotated using visual examples of narrative elements, without training exclusively for such objects. In this work, we propose two ways of using contextual information as data augmentation to improve detection performance. Furthermore, we introduce a multi-relation detector to our framework, which extracts global, local, and patch-based relations of the image. Additionally, we evaluate the use of contrastive learning. We use data from Christian archeology (CHA) and art history, IconArt-v2 (IA). Our context encoding approach improves the typical fine-tuning approach in terms of mean average precision (mAP) by about 3.5 % (4 %) at 0.25 intersection over union (IoU) for unseen categories, and 6 % (1.5 %) for seen categories in CHA (IA). To the best of our knowledge, our work is the first to explore few-shot object detection on heterogeneous artistic data by investigating evaluation methods and data augmentation strategies. We will release the code and models after acceptance of the work.
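To make the multi-relation idea concrete, the toy sketch below compares a one-shot query feature map with a target feature map through a global relation (pooled embeddings) and a patch relation (local correlation map). The actual detector also extracts a local relation and operates inside a full detection framework, so this is a simplified stand-in with assumed shapes.

```python
import torch
import torch.nn.functional as F

def relation_scores(query, target):
    """Toy multi-relation matching between a query (support) feature map
    and a target feature map, both shaped (C, H, W)."""
    q_vec = query.mean(dim=(1, 2))                   # global descriptor
    t_vec = target.mean(dim=(1, 2))
    global_rel = F.cosine_similarity(q_vec, t_vec, dim=0)

    corr = torch.einsum("c,chw->hw", q_vec, target)  # patch correlation map
    patch_rel = corr.flatten().softmax(dim=0).max()  # strongest local match
    return global_rel, patch_rel

q = torch.randn(256, 7, 7)      # support crop of a narrative element
t = torch.randn(256, 32, 32)    # target artwork feature map
g_rel, p_rel = relation_scores(q, t)
```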
{"title":"One-Shot Object Detection in Heterogeneous Artwork Datasets","authors":"Prathmesh Madhu, Anna Meyer, Mathias Zinnen, Lara Mührenberg, Dirk Suckow, Torsten Bendschus, Corinna Reinhardt, Peter Bell, Ute Verstegen, Ronak Kosti, A. Maier, V. Christlein","doi":"10.1109/IPTA54936.2022.9784141","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784141","url":null,"abstract":"Christian archeologists face many challenges in understanding visual narration through artwork images. This understanding is essential to access underlying semantic in-formation. Therefore, narrative elements (objects) need to be labeled, compared, and contextualized by experts, which takes an enormous amount of time and effort. Our work aims to reduce labeling costs by using one-shot object detection to generate a labeled database from unannotated images. Novel object categories can be defined broadly and annotated using visual examples of narrative elements without training exclusively for such objects. In this work, we propose two ways of using contextual information as data augmentation to improve the detection performance. Furthermore, we introduce a multi-relation detector to our framework, which extracts global, local, and patch-based relations of the image. Additionally, we evaluate the use of contrastive learning. We use data from Christian archeology (CHA) and art history - IconArt-v2 (IA). Our context encoding approach improves the typical fine-tuning approach in terms of mean average precision (mAP) by about 3.5 % (4 %) at 0.25 intersection over union (IoU) for UnSeen categories, and 6 % (1.5 %) for Seen categories in CHA (IA). To the best of our knowledge, our work is the first to explore few shot object detection on heterogeneous artistic data by investigating evaluation methods and data augmentation strategies. We will release the code and models after acceptance of the work.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"254 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133611218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Copyright
Pub Date: 2022-04-19 | DOI: 10.1109/ipta54936.2022.9784115
{"title":"Copyright","authors":"","doi":"10.1109/ipta54936.2022.9784115","DOIUrl":"https://doi.org/10.1109/ipta54936.2022.9784115","url":null,"abstract":"","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127772598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction of Secret Images Reconstructed from Noised Shared Images
Laura Bertojo, W. Puech
Pub Date: 2022-04-19 | DOI: 10.1109/IPTA54936.2022.9784125
Protecting sensitive images is nowadays a key issue in the information security domain. As such, numerous techniques have emerged to securely transmit or store such multimedia data, such as encryption, steganography and secret sharing. Most of today's secret image sharing methods rely on the polynomial-based scheme proposed by Shamir. However, some of the shared images distributed to the participants may be noised between their creation and their use to retrieve the secret image; noise can be added to a shared image during transmission, storage or JPEG compression, for example. To our knowledge, no analysis has yet been made of the impact of using a noised shared image in the reconstruction process of a secret image. In this paper, we propose a method to correct the errors during the reconstruction of a secret image using noised shared images.
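For background, a minimal Shamir (k, n) sharing of a single pixel value over the prime field GF(251) is sketched below (pixel values above 250 need special handling in image schemes). Reconstruction uses Lagrange interpolation at x = 0; with n > k shares, that redundancy is what makes correcting noised shares possible in principle, though the paper's actual correction method is not reproduced here.

```python
import random

P = 251  # prime modulus; pixel values above 250 need special handling

def make_shares(secret, k, n):
    """Shamir (k, n) sharing of one value: evaluate a random
    degree-(k-1) polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k
    correct shares; extra shares provide redundancy for correction."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(200, k=3, n=5)     # one pixel, 5 shares, threshold 3
assert reconstruct(shares[:3]) == 200   # any 3 clean shares suffice
```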
{"title":"Correction of Secret Images Reconstructed from Noised Shared Images","authors":"Laura Bertojo, W. Puech","doi":"10.1109/IPTA54936.2022.9784125","DOIUrl":"https://doi.org/10.1109/IPTA54936.2022.9784125","url":null,"abstract":"Protecting sensitive images is nowadays a key issue in information security domain. As such, numerous techniques have emerged to securely transmit or store such multimedia data, as encryption, steganography or secret sharing. Most of today's secret image sharing methods relie on the polynomial-based scheme proposed by Shamir. However, some of the shared images distributed to the participants may be noised between their creation and their use to retrieve the secret image. Noise can be added to a shared image during transmission, storage or JPEG compression for example. However, to our knowledge, to date no analysis has been made on the impact of using a noised shared image in the reconstruction process of a secret image. In this paper, we propose a method to correct the errors during the reconstruction of a secret image using noised shared images.","PeriodicalId":381729,"journal":{"name":"2022 Eleventh International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116746179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}