Combined DES/SD simulation model of breast cancer screening for older women: An overview
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721406
J. Tejada, K. Diehl, J. Ivy, James R. Wilson, R. King, Matthew J. Ballan, M. Kay, B. Yankaskas
We develop a simulation modeling framework for evaluating the effectiveness of breast cancer screening policies for US women aged 65 and older. We introduce a two-phase simulation approach to modeling the main components of the breast cancer screening process. The first phase is a natural-history model of the incidence and progression of untreated breast cancer in individuals randomly sampled from the designated population. Combining discrete event simulation (DES) and system dynamics (SD) submodels, the second phase is a screening-and-treatment model that uses information about the genesis of breast cancer in the sampled individuals, as generated by the natural-history model, to estimate the benefits of different policies for screening the designated population and treating the affected women. Based on extensive simulation-based comparisons of alternative screening policies, we conclude that annual screening from age 65 to age 80 is the best policy for minimizing breast cancer deaths or maximizing quality-adjusted life-years saved.
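To make the two-phase structure concrete, here is a minimal sketch: phase one samples a synthetic natural history (onset age, preclinical sojourn time) per woman, and phase two replays a periodic screening policy against it. Every distribution, parameter, and function name is an invented placeholder, not the authors' calibrated model.

```python
import random

def natural_history(n, seed=1):
    """Phase 1 (sketch): sample an onset age and a preclinical sojourn time
    for each woman. Distributions are placeholders, not calibrated models."""
    rng = random.Random(seed)
    women = []
    for _ in range(n):
        onset = rng.uniform(40, 95)          # age at (untreated) disease onset
        sojourn = rng.expovariate(1 / 4.0)   # years from onset to clinical detection
        women.append((onset, sojourn))
    return women

def screen_policy(women, start=65, stop=80, interval=1.0):
    """Phase 2 (sketch): fraction of women whose cancer is caught by a screen
    while still preclinical, under the given policy."""
    caught = 0
    for onset, sojourn in women:
        age = start
        while age <= stop:
            if onset <= age < onset + sojourn:   # screen falls in the detectable window
                caught += 1
                break
            age += interval
    return caught / len(women)

print(screen_policy(natural_history(50_000)))
```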
{"title":"Combined DES/SD simulaton model of breast cancer screening for older women: An overview","authors":"J. Tejada, K. Diehl, J. Ivy, James R. Wilson, R. King, Matthew J. Ballan, M. Kay, B. Yankaskas","doi":"10.1109/WSC.2013.6721406","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721406","url":null,"abstract":"We develop a simulation modeling framework for evaluating the effectiveness of breast cancer screening policies for US women of age 65+. We introduce a two-phase simulation approach to modeling the main components in the breast cancer screening process. The first phase is a natural-history model of the incidence and progression of untreated breast cancer in randomly sampled individuals from the designated population. Combining discrete event simulation (DES) and system dynamics (SD) submodels, the second phase is a screening-and-treatment model that uses information about the genesis of breast cancer in the sampled individuals as generated by the natural-history model to estimate the benefits of different policies for screening the designated population and treating the affected women. Based on extensive simulation-based comparisons of alternative screening policies, we concluded that annual screening from age 65 to age 80 is the best policy for minimizing breast cancer deaths or for maximizing quality-adjusted life-years saved.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124247553","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A balanced sequential design strategy for global surrogate modeling
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721594
Prashant Singh, D. Deschrijver, T. Dhaene
The sequential design methodology for global surrogate modeling of complex systems consists of iteratively training the model on a growing set of samples. Sample selection is a critical step in the process and influences the final quality of the model. It is desirable to use as few samples as possible while building an accurate model, using insight gained in previous iterations. A robust sampling scheme is considered that employs Monte Carlo Voronoi tessellations for exploration and linear gradients for exploitation, and different schemes for balancing the trade-off between the two are investigated. Experimental results on benchmark examples indicate that some schemes can yield a substantially smaller model error, especially when the system under consideration exhibits highly non-linear behavior.
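The exploration step can be illustrated with a small sketch: estimate each existing sample's Voronoi cell volume by Monte Carlo, then place the next sample deep inside the largest cell. This is a generic rendering of the idea under stated assumptions, not the authors' code.

```python
import numpy as np

def voronoi_explore(X, n_mc=5000, seed=0):
    """Pick a new point in the largest (approximate) Voronoi cell of X.
    Cell volumes are estimated by Monte Carlo over the unit cube."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    P = rng.random((n_mc, d))                                   # candidate points
    owner = np.argmin(((P[:, None, :] - X[None]) ** 2).sum(-1), axis=1)
    largest = np.bincount(owner, minlength=len(X)).argmax()     # biggest cell
    cell = P[owner == largest]
    # farthest candidate from the cell's sample = deepest inside the cell
    far = ((cell - X[largest]) ** 2).sum(-1).argmax()
    return cell[far]

X = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.9]])
print(voronoi_explore(X))
```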
{"title":"A balanced sequential design strategy for global surrogate modeling","authors":"Prashant Singh, D. Deschrijver, T. Dhaene","doi":"10.1109/WSC.2013.6721594","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721594","url":null,"abstract":"The sequential design methodology for global surrogate modeling of complex systems consists of iteratively training the model on a growing set of samples. Sample selection is a critical step in the process and influences the final quality of the model. It is desirable to use as few samples as possible while building an accurate model using insight gained in previous iterations. A robust sampling scheme is considered that employs Monte Carlo Voronoi tessellations for exploration, linear gradients for exploitation and different schemes are investigated to balance their trade-off. The experimental results on benchmark examples indicate that some schemes can result in a substantially smaller model error especially when the system under consideration has a highly non-linear behavior.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123496050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Weapon tradeoff analysis using dynamic programming for a dynamic weapon target assignment problem within a simulation
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721653
D. Ahner, Carl Parson
We consider the sequential allocation of differing weapons to a collection of adversarial targets, with the goal of surviving to destroy a critical target within a combat simulation. The platform carrying the weapons proceeds through a set of sequential stages and at each stage potentially engages targets with its available weapons. The decision space at each stage is shaped by previous decisions and by the probability of platform destruction. Simulation and dynamic programming are then used within a larger dynamic programming framework to determine allocation strategies and develop value functions for these mission sets, to be used in future, larger, and more complex simulations. A simple dynamic programming example of the problem is considered and used to generate a functional approximation for a more complex system. The developed methodology provides a tractable approach to the complex sequential allocation of resources in a risky environment.
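A toy version of the stage-wise dynamic program conveys the structure: the state is (stage, weapons remaining), and the value is the probability of surviving all stages with a weapon left for the critical target. The numbers are illustrative, not from the paper.

```python
from functools import lru_cache

# Illustrative stage-wise weapon-allocation DP (invented parameters).
STAGES = 3                 # engagement stages before the critical target
P_KILL = 0.6               # P(one weapon destroys the stage's target)
P_DIE_IF_ALIVE = 0.3       # P(platform destroyed by a surviving target)

@lru_cache(maxsize=None)
def value(stage, weapons):
    """Max P(surviving all stages with >= 1 weapon for the critical target)."""
    if stage == STAGES:
        return 1.0 if weapons >= 1 else 0.0
    best = 0.0
    for k in range(weapons + 1):                 # weapons fired at this stage
        p_target_dead = 1 - (1 - P_KILL) ** k
        p_survive = p_target_dead + (1 - p_target_dead) * (1 - P_DIE_IF_ALIVE)
        best = max(best, p_survive * value(stage + 1, weapons - k))
    return best

print(value(0, 4))   # value of starting with 4 weapons
```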
{"title":"Weapon tradeoff analysis using dynamic programming for a dynamic weapon target assignment problem within a simulation","authors":"D. Ahner, Carl Parson","doi":"10.1109/WSC.2013.6721653","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721653","url":null,"abstract":"We consider the sequential allocation of differing weapons to a collection of adversarial targets with the goal of surviving to destroy a critical target within a combat simulation. The platform which carries the weapons proceeds through a set of sequential stages and at each stage potentially engages targets with available weapons. The decision space at each stage is affected by previous decisions and the probability of platform destruction. Simulation and dynamic programming are then used within a larger dynamic programming framework to determine allocation strategies and develop value functions for these mission sets to be used in future, larger and more complex simulations. A simple dynamic programming example of the problem is considered and used to generate a functional approximation for a more complex system. The developed methodology provides a tractable approach to addressing complex sequential allocation of resources within a risky environment.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"327 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114197145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating the effect of demand aggregation on the performance of an (R, Q) inventory control policy
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721696
M. Rossetti, Mohammad A. Shbool, Vijith Varghese, E. Pohl
This paper investigates the effect of demand aggregation on the performance measures of an inventory system controlled by an (r, Q) policy. Demand usage data are available at different time scales (e.g., daily, weekly, or monthly), and forecasting is based on these time scales. Using the forecasts, appropriate lead-time demand models are constructed and used in optimization procedures. The question under investigation is what effect the forecasting time bucket has on whether the inventory control model meets its planned performance. A simulation model is used to compare performance under different demand aggregation levels. The simulation model of the optimized (r, Q) inventory system is run over the planning horizon, and operational supply chain performance measures such as ready rate and expected backorders are collected. The effect of aggregating demand and planning accordingly is then analyzed based on the simulated supply chain's operational performance.
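For concreteness, a minimal (r, Q) simulation might look like the following sketch, which tracks inventory position, places orders of size Q whenever the position drops to r or below, and reports the ready rate. All parameters are invented for illustration.

```python
import numpy as np

def simulate_rQ(r, Q, days=2000, lam=4.0, lead=7, seed=1):
    """Minimal (r, Q) simulation sketch: Poisson daily demand, full
    backordering. Returns the ready rate (fraction of days ending with
    no outstanding backorders). Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    inv_pos = on_hand = r + Q       # position = on-hand + on-order - backorders
    backorders = 0
    arrivals = {}                   # day -> quantity arriving that day
    ready_days = 0
    for day in range(days):
        on_hand += arrivals.pop(day, 0)                 # receive replenishments
        served = min(on_hand, backorders)               # clear old backorders
        on_hand -= served; backorders -= served
        d = rng.poisson(lam)
        take = min(on_hand, d)                          # fill today's demand
        on_hand -= take; backorders += d - take
        inv_pos -= d
        while inv_pos <= r:                             # reorder in lots of Q
            arrivals[day + lead] = arrivals.get(day + lead, 0) + Q
            inv_pos += Q
        ready_days += (backorders == 0)
    return ready_days / days

print(simulate_rQ(r=30, Q=50))
```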
{"title":"Investigating the effect of demand aggregation on the performance of an (R, Q) inventory control policy","authors":"M. Rossetti, Mohammad A. Shbool, Vijith Varghese, E. Pohl","doi":"10.1109/WSC.2013.6721696","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721696","url":null,"abstract":"This paper investigates the effect of demand aggregation on the performance measures of an inventory system controlled by a (r, Q) policy. Demand usage data is available at different time scales, i.e., daily, weekly, monthly etc., and forecasting is based on these time scales. Using forecasts, appropriate lead time demand models are constructed and used in optimization procedures. The question being investigated is what effect the forecasting time bucket has on whether or not the inventory control model meets planned performance. A simulation model is used to compare performance under different demand aggregation levels. The simulation model of the optimized (r, Q) inventory system is run for the planning horizon and the supply chain operational performance measures like ready rate, expected back order etc., are collected. Subsequently, the effect of aggregating the demand and planning accordingly is analyzed based on the simulated supply chain's operational performance.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"552 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116516780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A case study examining the impact of factor screening for Neural Network metamodels
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721444
S. Rosen, S. Guharay
Metamodeling of large-scale simulations with many input parameters can be very challenging. Neural Networks have shown great promise in fitting these large-scale simulations even without factor screening. However, factor screening is an effective method for logically reducing the dimensionality of an input space and thus making metamodel calibration more feasible. Applying factor screening before calibrating Neural Network metamodels, or any metamodel, can have both positive and negative effects. The critical assumption under investigation concerns the prevalence of two-way interactions that involve a variable without a significant main effect of its own. In a simulation with a large parameter space, the prevalence of two-way interactions and their contribution to the total variability in the model output is far from transparent. Two important questions therefore arise regarding factor screening and Neural Network metamodels: (a) is this a process worth doing with today's more powerful processors, which provide a larger library of runs for metamodeling; and (b) does erroneously screening out these buried interaction terms critically impact the level of metamodel fidelity that one can achieve? We examine these questions through a case study on a large-scale simulation that projects regional homelessness levels per county of interest based on a large array of budget decisions and resource allocations spanning hundreds of input parameters.
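The risk of screening out "buried" interaction variables is easy to reproduce in a toy example: a main-effects-only screen (sketched below; not the paper's method) discards two factors that matter only through their interaction.

```python
import numpy as np

def screen_main_effects(f, d, n=200, threshold=0.05, seed=0):
    """Crude main-effects screen (a sketch): fit a linear model on random
    samples and keep factors whose share of total |coefficient| mass exceeds
    a threshold. It misses factors active only through interactions --
    exactly the risk discussed above."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d))
    y = np.array([f(x) for x in X])
    A = np.column_stack([np.ones(n), X])
    coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    score = np.abs(coef) / (np.abs(coef).sum() + 1e-12)
    return np.where(score > threshold)[0]

# x0 and x1 matter only through their interaction: the screen keeps only x2.
f = lambda x: 5 * x[2] + 4 * (x[0] - 0.5) * (x[1] - 0.5)
print(screen_main_effects(f, d=6))
```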
{"title":"A case study examining the impact of factor screening for Neural Network metamodels","authors":"S. Rosen, S. Guharay","doi":"10.1109/WSC.2013.6721444","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721444","url":null,"abstract":"Metamodeling of large-scale simulations consisting of a large number of input parameters can be very challenging. Neural Networks have shown great promise in fitting these large-scale simulations even without performing factor screening. However, factor screening is an effective method for logically reducing the dimensionality of an input space and thus enabling more feasible metamodel calibration. Applying factor screening methods before calibrating Neural Network metamodels or any metamodel can have both positive and negative effects. The critical assumption for factor screening under investigation involves the prevalence of two-way interactions that contain a variable without a significant main effect by itself. In a simulation with a large parameter space, the prevalence of two-way interactions and their contribution to the total variability in the model output is far from transparent. Important questions therefore arise regarding factor screening and Neural Network metamodels: (a) is this a process worth doing with today's more powerful computing processors, which provide a larger library of runs to do metamodeling; and (b), does erroneously screening these buried interaction terms critically impact the level of metamodel fidelity that one can achieve. In this paper we examine these questions through the construction of a case study on a large-scale simulation. This study projects regional homelessness levels per county of interest based on a large array of budget decisions and resource allocations that expand out to hundreds of input parameters.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124448092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust selection of the best
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721478
Weiwei Fan, L. Hong, Xiaowei Zhang
Classical ranking-and-selection (R&S) procedures cannot be applied directly to select the best decision in the presence of distributional ambiguity. In this paper we propose a robust selection-of-the-best (RSB) formulation that compares decisions based on their worst-case performances over a finite set of possible distributions and selects the decision with the best worst-case performance. To solve RSB problems, we design two-layer R&S procedures, either two-stage or fully sequential, under the indifference-zone formulation. Each procedure identifies the worst-case distribution in the first stage and the best decision in the second. We prove the statistical validity of these procedures and test their performance numerically.
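The two-layer idea can be sketched generically: an inner layer estimates each decision's worst-case mean over the ambiguity set, and an outer layer selects the best worst case. The snippet below is a naive illustration with made-up means, not the paper's statistically validated procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate(decision, dist, n=400):
    """Hypothetical simulation output whose mean depends on the decision
    and on which distribution in the ambiguity set drives the inputs."""
    means = [[1.0, 0.4], [0.8, 0.7], [0.6, 0.5]]   # rows: decisions; cols: distributions
    return rng.normal(means[decision][dist], 1.0, size=n)

def robust_select(n_decisions=3, n_dists=2):
    worst = []
    for i in range(n_decisions):
        # layer 1: worst-case sample mean over the ambiguity set
        worst.append(min(simulate(i, j).mean() for j in range(n_dists)))
    # layer 2: best worst-case decision (larger is better here)
    return int(np.argmax(worst))

print(robust_select())   # decision 1 has the best worst-case mean (0.7)
```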
{"title":"Robust selection of the best","authors":"Weiwei Fan, L. Hong, Xiaowei Zhang","doi":"10.1109/WSC.2013.6721478","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721478","url":null,"abstract":"Classical ranking-and-selection (R&S) procedures cannot be applied directly to select the best decision in the presence of distributional ambiguity. In this paper we propose a robust selection-of-the-best (RSB) formulation which compares decisions based on their worst-case performances over a finite set of possible distributions and selects the decision with the best worst-case performance. To solve the RSB problems, we design two-layer R&S procedures, either two-stage or fully sequential, under the indifference-zone formulation. The procedure identifies the worst-case distribution in the first stage and the best decision in the second. We prove the statistical validity of these procedures and test their performances numerically.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124079400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Model-driven systems engineering for netcentric system of systems with DEVS unified process
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721503
S. Mittal, J. L. Risco-Martín
Model-Based Systems Engineering (MBSE) employs model-based technologies and established systems engineering practices. Model-Driven Engineering (MDE) provides various concepts for automating model-based practices through metamodeling. We describe the DEVS Unified Process (DUNIP), which aims to bring together MBSE and MDE as Model-Driven Systems Engineering (MDSE), and apply it in a netcentric environment. We review the history of the various model-based and model-driven flavors and position MDSE/DUNIP as one of the derived methodologies. We describe the essential elements of DUNIP that facilitate integration with architecture solutions such as Service Oriented Architecture (SOA) and Event Driven Architecture (EDA), the System Entity Structure (SES) ontology, and frameworks like the Department of Defense Architecture Framework (DoDAF 2.0). We discuss systems requirement specification, verification and validation, metamodeling, Domain Specific Languages (DSLs), and model transformation technologies as they apply in DUNIP, and we summarize the features and contributions of DUNIP in netcentric system of systems engineering.
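As a reminder of the formalism underlying DUNIP, here is a minimal hand-rolled atomic DEVS model (time advance, external and internal transitions, output function). It illustrates classic DEVS semantics only; it is not DUNIP's API.

```python
# Minimal atomic DEVS model: a processor busy for a fixed service time.
INF = float("inf")

class Processor:
    def __init__(self, service_time=2.0):
        self.phase, self.sigma = "idle", INF
        self.service_time, self.job = service_time, None

    def time_advance(self):            # ta(s): time until next internal event
        return self.sigma

    def ext_transition(self, e, job):  # delta_ext: a job arrives after elapsed e
        if self.phase == "idle":
            self.phase, self.sigma, self.job = "busy", self.service_time, job

    def int_transition(self):          # delta_int: service completes
        self.phase, self.sigma = "idle", INF

    def output(self):                  # lambda: emit the finished job
        return self.job

p = Processor()
p.ext_transition(0.0, "job-1")
t = p.time_advance()                   # 2.0 time units until the internal event
print(t, p.output())                   # output fires just before delta_int
p.int_transition()                     # processor returns to idle
```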
{"title":"Model-driven systems engineering for netcentric system of systems with DEVS unified process","authors":"S. Mittal, J. L. Risco-Martín","doi":"10.1109/WSC.2013.6721503","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721503","url":null,"abstract":"Model-Based Systems Engineering (MBSE) employs model-based technologies and established systems engineering practices. Model-Driven Engineering (MDE) provides various concepts to automate model based practices using metamodeling. We describe the DEVS Unified Process (DUNIP) that aims to bring together MBSE and MDE as Model-driven Systems Engineering (MDSE) and apply it in a netcentric environment. We historically look at various model-based and model-driven flavors and suggest MDSE/DUNIP as one of the derived methodologies. We describe essential elements in DUNIP that facilitate integration with architecture solutions like Service Oriented Architecture (SOA), Event Driven Architectures (EDA), Systems Entity Structures (SES) ontology, and frameworks like Department of Defense Architecture Framework (DoDAF 2.0). We discuss systems requirement specifications, verification and validation, metamodeling, Domain Specific Languages (DSLs), and model transformation technologies as applicable in DUNIP. In this article, we discuss the features and contributions of DUNIP in netcentric system of systems engineering.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"15 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126258525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical objects on navigation channel simulation models
Pub Date: 2013-12-08
DOI: 10.5555/2675983.2675827
Daniel de Oliveira Mota, N. N. Pereira, R. Botter, A. C. Medina
This paper presents the results of a simulation using physical objects. This concept integrates the physical dimensions of an entity, such as length, width, and weight, with the usual process-flow paradigm recurrent in discrete event simulation models. Based on a naval logistics system, we applied this technique to the access channel of the largest port in Latin America. The system is driven by vessel movement constrained by the access channel's dimensions: vessel length and width dictate whether it is safe to have one or two ships in the channel simultaneously. The proposed methodology delivered an accurate validation of the model, with approximately 0.45% deviation from real data. Additionally, the model supported the design of new terminal operations for Santos, delivering KPIs such as channel utilization, queue time, berth utilization, and throughput capability.
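The kind of physical-object rule described here can be sketched simply: whether two vessels may occupy the channel simultaneously depends on their beams relative to the channel width. The numbers below are illustrative, not the Port of Santos rules.

```python
# Illustrative two-way-traffic rule based on vessel beam vs. channel width.
CHANNEL_WIDTH_M = 220
SEPARATION_FACTOR = 1.5      # required clearance multiplier between hulls

def can_pass_simultaneously(beam_a_m, beam_b_m):
    """Two-way traffic is allowed only if both beams, scaled by the required
    separation factor, fit inside the channel width."""
    return (beam_a_m + beam_b_m) * SEPARATION_FACTOR <= CHANNEL_WIDTH_M

print(can_pass_simultaneously(32, 40))   # True: two mid-size vessels fit
print(can_pass_simultaneously(70, 80))   # False: channel reverts to one-way
```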
{"title":"Physical objects on navigation channal simulation models","authors":"Daniel de Oliveira Mota, N. N. Pereira, R. Botter, A. C. Medina","doi":"10.5555/2675983.2675827","DOIUrl":"https://doi.org/10.5555/2675983.2675827","url":null,"abstract":"This paper presents the results of a simulation using physical objects. This concept integrates the physical dimensions of an entity such as length, width, and weight, with the usual process flow paradigm, recurrent in the discrete event simulation models. Based on a naval logistics system, we applied this technique in an access channel of the largest port of Latin America. This system is composed by vessel movement constrained by the access channel dimensions. Vessel length and width dictates whether it is safe or not to have one or two ships simultaneously. The success delivered by the methodology proposed was an accurate validation of the model, approximately 0.45% of deviation, when compared to real data. Additionally, the model supported the design of new terminals operations for Santos, delivering KPIs such as: canal utilization, queue time, berth utilization, and throughput capability.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128188134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building metamodels for quantile-based measures using sectioning
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721447
X. Chen, K. Kim
Simulation metamodeling has been used as an effective tool for predicting the mean performance of complex systems, reducing the computational burden of costly and time-consuming simulation runs. One of the successful metamodeling techniques developed is the recently proposed stochastic kriging. However, standard stochastic kriging is confined to the case where the sample averages and sample variances of the simulation outputs at design points are the main building blocks for creating a metamodel. In this paper, we show that if each simulation output further consists of i.i.d. observations, the original framework can be extended into a more general one. This generalization enables us to use estimation methods, including sectioning, to obtain point and interval estimates when constructing stochastic kriging metamodels for performance measures such as quantiles and tail conditional expectations. We demonstrate the superior performance of stochastic kriging metamodels under the generalized framework through several examples.
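Sectioning itself is straightforward to sketch: split the i.i.d. outputs at a design point into b sections, estimate the quantile within each, and form a t-based confidence interval from the section estimates. The following is a generic sketch (assuming a SciPy dependency), not the paper's estimator.

```python
import numpy as np
from scipy.stats import t

def sectioning_quantile(y, p=0.95, b=10, alpha=0.05):
    """Point and interval estimate of the p-quantile by sectioning: split the
    i.i.d. outputs into b sections, estimate the quantile in each, and build
    a t-based confidence interval from the section estimates."""
    sections = np.array_split(np.asarray(y), b)
    est = np.array([np.quantile(s, p) for s in sections])
    point = np.quantile(y, p)              # estimate from the full sample
    half = t.ppf(1 - alpha / 2, b - 1) * est.std(ddof=1) / np.sqrt(b)
    return point, (est.mean() - half, est.mean() + half)

rng = np.random.default_rng(3)
print(sectioning_quantile(rng.exponential(1.0, size=5000)))
```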
{"title":"Building metamodels for quantile-based measures using sectioning","authors":"X. Chen, K. Kim","doi":"10.1109/WSC.2013.6721447","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721447","url":null,"abstract":"Simulation metamodeling has been used as an effective tool in predicting the mean performance of complex systems, reducing the computational burden of costly and time-consuming simulation runs. One of the successful metamodeling techniques developed is the recently proposed stochastic kriging. However, standard stochastic kriging is confined to the case where the sample averages and sample variances of the simulation outputs at design points are the main building blocks for creating a metamodel. In this paper, we show that if each simulation output is further comprised of i.i.d. observations, then it is possible to extend the original framework into a more general one. Such a generalization enables us to utilize estimation methods including sectioning for obtaining point and interval estimates in constructing stochastic kriging metamodels for performance measures such as quantiles and tail conditional expectations. We demonstrate the superior performance of stochastic kriging metamodels under the generalized framework through some examples.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121466326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An agent-based simulation framework to analyze the prevalence of child obesity
Pub Date: 2013-12-08
DOI: 10.1109/WSC.2013.6721608
A. Ramirez-Nafarrate, J. Gutierrez-Garcia
Child obesity is a public health problem of concern in several countries around the world. Its long-term effects include a higher prevalence of chronic diseases such as diabetes and heart-related illnesses. This paper presents an agent-based simulation framework for analyzing the evolution of obesity in school-age children. In particular, we evaluate the impact of physical activity on the prevalence of child obesity using an agent-based simulation model. Simulation results suggest that the fraction of overweight and obese children at the end of elementary school can be reduced through moderate-intensity physical activity.
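A toy agent-based rendering of the experiment: each child's BMI drifts yearly, moderate activity offsets part of the drift, and prevalence is compared across activity levels. All parameters are invented, not the paper's calibrated inputs.

```python
import random

class Child:
    def __init__(self, rng, active):
        self.bmi = rng.gauss(17.0, 2.0)   # BMI entering elementary school
        self.active = active              # does moderate-intensity activity?

    def step(self, rng):
        drift = rng.gauss(0.45, 0.2)      # yearly BMI drift
        if self.active:
            drift -= 0.25                 # activity offsets part of the drift
        self.bmi += drift

def overweight_share(active_share, years=6, n=2000, seed=9):
    """Fraction of children above a crude overweight cutoff after
    `years` school years, given a share of active children."""
    rng = random.Random(seed)
    kids = [Child(rng, rng.random() < active_share) for _ in range(n)]
    for _ in range(years):
        for c in kids:
            c.step(rng)
    return sum(c.bmi >= 21.0 for c in kids) / n

print(overweight_share(0.2), overweight_share(0.8))  # more activity, lower share
```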
{"title":"An agent-based simulation framework to analyze the prevalence of child obesity","authors":"A. Ramirez-Nafarrate, J. Gutierrez-Garcia","doi":"10.1109/WSC.2013.6721608","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721608","url":null,"abstract":"Child obesity is a public health problem that is of concern of several countries around the world. Long-term effects of child obesity include prevalence of chronic diseases, such as diabetes and heart-related illnesses. This paper presents an agent-based simulation framework to analyze the evolution of obesity in school-age children. In particular, in this paper we evaluate the impact of physical activity on the prevalence of child obesity using an agent-based simulation model. Simulation results suggest that the fraction of overweight and obese children at the end of elementary school can be reduced by doing physical activity with moderate intensity.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"471 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130040810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}