On the Failure of the Area Metric for Validation Exercises of Stochastic Simulations
L. Eça, K. Dowding, P. Roache. Journal of Verification, Validation and Uncertainty Quantification, December 16, 2022. doi:10.1115/1.4056492

This paper discusses the application of the Area Metric to the quantification of modeling errors, focusing on the effect of the shape of the two distributions on the result the metric produces. Two examples that assume negligible experimental and numerical errors are presented: in the first, the experimental and simulated quantities of interest are defined by normal distributions, which require a mean value and a standard deviation; the second is taken from the ASME V&V 10.1 Standard.

The first example shows that relatively small differences between the mean values are sufficient for the Area Metric to become insensitive to the standard deviations. Furthermore, the ASME V&V 10.1 example produces an Area Metric equal to the difference between the mean values of the experiments and the simulations. The error quantification is therefore reduced to a single number obtained from a simple difference of two mean values, which means the Area Metric fails to reflect the difference in the shape of the distributions representing variability.

The paper also presents an alternative version of the Area Metric that does not filter out the effect of the shape of the distributions; it uses a reference simulation with the same mean value as the experiments, so that the quantification of the modeling error has contributions both from the difference in mean values and from the shape of the distributions.
On the First Order Optimization Methods in Deep Image Prior
Pasquale Cascarano, Andrea Sebastiani, Giorgia Franchini, F. Porta. Journal of Verification, Validation and Uncertainty Quantification, December 13, 2022. doi:10.1115/1.4056470

Deep learning methods achieve state-of-the-art performance in many image restoration tasks. Their effectiveness is mostly related to the size of the dataset used for training. Deep Image Prior (DIP) is an energy-function framework that eliminates the dependency on a training set by considering the structure of a neural network as a handcrafted prior offering high impedance to noise and low impedance to signal. In this paper, we analyze and compare the use of different optimization schemes inside the DIP framework for the denoising task.
Verifiable Improvements of Finite Element Stresses at Three-Dimensional Stress Concentrations
Jeffrey R. Beisheim, G. Sinclair. Journal of Verification, Validation and Uncertainty Quantification, December 6, 2022. doi:10.1115/1.4056395

While current computational capability has led to finite element analysis becoming the predominant means of assessing three-dimensional stress concentrations, there are nonetheless some three-dimensional configurations where the desired level of accuracy of stresses is not realized on the finest mesh used. Here we offer some simple means of improving the accuracy of finite element stresses for such configurations, with only modest increases in computational effort. These improved stresses are obtained by applying an adaptation of Richardson extrapolation to original mesh results, and also to mesh results with a reduced mesh refinement factor. Verification of the improvements is undertaken using the convergence checks and error estimates reported earlier. The approach is applied to nine three-dimensional test problems. Finite element analysis of these test problems leads to eleven stresses on the finest meshes used that could benefit from being improved. The extrapolation procedure in conjunction with the reduced refinement factor improved all eleven stresses, and error estimates confirmed the improvements in every case.
Confidence Intervals for Richardson Extrapolation in Solid Mechanics
P. Krysl. Journal of Verification, Validation and Uncertainty Quantification, September 22, 2022. doi:10.1115/1.4055728

A simple procedure is introduced for estimating the uncertainty of estimates of the true solutions to problems of deflection, stress concentration, and force resultants in solid and structural mechanics. Richardson extrapolation is carried out on a dataset of samples from a sequence of four grids. Simple median-based statistical analysis is used to establish 95% confidence intervals. The procedure leads to simple calculations that deliver reasonably tight estimates of the true solution and confidence intervals.
Two Calculation Verification Metrics Used in the Medical Device Industry: Revisiting the Limitations of Fractional Change
Ismail Guler, K. Aycock, N. Rebelo. Journal of Verification, Validation and Uncertainty Quantification, September 5, 2022. doi:10.1115/1.4055506

Quantifying the fractional change in a predicted quantity of interest with successive mesh refinement is an attractive and widely used but limited approach to assessing numerical error and uncertainty in physics-based computational modeling. Herein, we introduce the concept of a scalar multiplier αGCI to clarify the connection between fractional change and a more rigorous and accepted estimate of numerical uncertainty, the grid convergence index (GCI). Specifically, we generate lookup tables for αGCI as a function of observed order of accuracy and mesh refinement factor. We then illustrate the limitations of relying on fractional change alone as an acceptance criterion for mesh refinement using a case study involving the radial compression of a Nitinol stent. Results illustrate that numerical uncertainty is often many times larger than the observed fractional change in a mesh pair, especially in the presence of small mesh refinement factors or low orders of accuracy. We strongly caution against relying on fractional change alone as an acceptance criterion for mesh refinement studies, particularly in any high-risk applications requiring absolute prediction of quantities of interest. When computational resources make the systematic refinement required for calculating GCI impractical, submodeling approaches as demonstrated herein can be used to rigorously quantify discretization error at comparatively minimal computational cost. To facilitate future quantitative mesh refinement studies, the αGCI lookup tables herein provide a useful tool for guiding the selection of mesh refinement factor and element order.
Sensitivity Analysis and Parametric Uncertainty Quantification of a Modular Multilevel Converter
Niloofar Rashidi, R. Burgos, C. Roy, D. Boroyevich. Journal of Verification, Validation and Uncertainty Quantification, August 3, 2022. doi:10.1115/1.4055139

This paper presents numerical approaches for sensitivity analysis and their application to the modeling of a modular multilevel converter (MMC). A review of state-of-the-art techniques in sensitivity analysis is provided, with a special focus on numerical approaches, followed by the sensitivity analysis of an MMC. To further reduce the computational cost per model evaluation in parametric uncertainty quantification (P-UQ) of the MMC, this paper also proposes a simplified model with a minimum number of power modules for P-UQ analysis that does not introduce any further uncertainties into the modeling and simulation.
{"title":"Comment On “Comparison of the V&V10.1 and V&V20 Modeling Error Quantification Procedures for the V&V10.1 Example”","authors":"P. Roache","doi":"10.1115/1.4055105","DOIUrl":"https://doi.org/10.1115/1.4055105","url":null,"abstract":"\u0000 Not Applicable","PeriodicalId":52254,"journal":{"name":"Journal of Verification, Validation and Uncertainty Quantification","volume":" ","pages":""},"PeriodicalIF":0.6,"publicationDate":"2022-07-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43843849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analytical Sensitivity Analysis of a Spent Nuclear Fuel Cask
T. Remedes, S. Ramsey, J. Baciak. Journal of Verification, Validation and Uncertainty Quantification, July 15, 2022. doi:10.1115/1.4055013

Nuclear science and engineering is a field increasingly dominated by computational studies, a consequence of ever more powerful computational tools. As a result, the analytical studies that once pioneered nuclear engineering are increasingly viewed as secondary or unnecessary. However, analytical solutions to reduced-fidelity models can provide important information about the underlying physics of a problem and aid in guiding computational studies.

Similarly, there is increased interest in sensitivity analysis studies, which commonly use computational tools. Providing a complementary sensitivity study of relevant analytical models can nonetheless lead to a deeper analysis of a problem. This work provides the analytical sensitivity analysis of the 1D cylindrical monoenergetic neutron diffusion equation using the Forward Sensitivity Analysis Procedure developed by D. Cacuci. These results are then applied to a reduced-fidelity model of a spent nuclear fuel cask, demonstrating how computational analysis might be improved with a complementary analytic sensitivity analysis.
A Methodology for the Efficient Quantification of Parameter and Model Uncertainty
R. Feldmann, C. M. Gehb, M. Schäffner, T. Melz. Journal of Verification, Validation and Uncertainty Quantification, May 18, 2022. doi:10.1115/1.4054575

Complex structural systems often entail computationally intensive models that require efficient methods for statistical model calibration because of the high number of model evaluations involved. In this paper, we present a Bayesian inference-based methodology for efficient statistical model calibration that combines the computational speed of a low-fidelity model with the accuracy of the computationally intensive high-fidelity model. The proposed two-stage method incorporates the adaptive Metropolis algorithm and a Gaussian process (GP)-based adaptive surrogate model as the low-fidelity model. In order to account for model uncertainty, we incorporate a GP-based discrepancy function into the model calibration. By calibrating the hyperparameters of the discrepancy function alongside the model parameters, we prevent the results of the model calibration from being biased. The methodology is illustrated by the statistical model calibration of a damping parameter in the modular active spring-damper system, a structural system developed within the collaborative research center SFB 805 at the Technical University of Darmstadt. The reduction of parameter and model uncertainty achieved by applying our methodology is quantified and illustrated by assessing the predictive capability of the mathematical model of the modular active spring-damper system.
Verification of MOOSE/Bison's Heat Conduction Solver Using Combined Spatiotemporal Convergence Analysis
A. Toptan, N. Porter, J. Hales, Wen Jiang, B. Spencer, S. Novascone. Journal of Verification, Validation and Uncertainty Quantification, March 29, 2022. doi:10.1115/1.4054216

Bison is a computational physics code that uses the finite element method to model the thermo-mechanical response of nuclear fuel. Since Bison is used to inform high-consequence decisions, it is important that its computational results are reliable and predictive. One important step in assessing the reliability and predictive capability of a simulation tool is the verification process, which quantifies numerical errors in a discrete solution relative to the exact solution of the mathematical model. One step in the verification process, called code verification, ensures that the implemented numerical algorithm is a faithful representation of the underlying mathematical model, including partial differential or integral equations, initial and boundary conditions, and auxiliary relationships. In this paper, the code verification process is applied to spatiotemporal heat conduction problems in Bison. Simultaneous refinement of the discretization in space and time is employed to reveal any potential mistakes in the numerical algorithms for the interactions between the spatial and temporal components of the solution. For each verification problem, the correct spatial and temporal order of accuracy is demonstrated for both first- and second-order accurate finite elements and a variety of time integration schemes. These results provide strong evidence that the Bison numerical algorithm for solving spatiotemporal problems reliably represents the underlying mathematical model in MOOSE. The selected test problems can also be used in other simulation tools that numerically solve conduction or diffusion problems.