Using simulation to evaluate call forecasting algorithms for inbound call center
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721502
Guilherme Steinmann, Paulo José de Freitas Filho
The call center industry has expanded greatly over recent years and is constantly striving to increase business efficiency and customer service effectiveness. Incoming call volume forecasting algorithms are used in inbound call centers to predict the demand for services and, as a result, to plan resource allocation. However, a number of phenomena can affect incoming call volumes, so classical forecasting algorithms may produce less than satisfactory results. When evaluating the performance of a forecasting algorithm, acquiring the data needed for research is not always straightforward. This article shows how simulation can be used to generate data for evaluating incoming call forecasting algorithms.
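The paper's own data-generation model is not reproduced here; the following is a minimal sketch, under assumed rates, an assumed weekly seasonality, and injected "special day" disturbances, of how a simulated arrival process can supply ground-truth call volumes for scoring a forecasting algorithm. All names and numbers are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_daily_calls(n_days=200, base_rate=1200.0,
                         weekly_amplitude=0.25, special_day_prob=0.05,
                         special_day_factor=1.8):
    """Generate daily incoming call volumes (assumed model, for illustration).

    Each day's volume is Poisson with a mean that follows a weekly seasonal
    pattern; occasional 'special days' inflate the mean to mimic phenomena
    that degrade classical forecasts.
    """
    days = np.arange(n_days)
    seasonal = 1.0 + weekly_amplitude * np.sin(2 * np.pi * days / 7.0)
    special = np.where(rng.random(n_days) < special_day_prob,
                       special_day_factor, 1.0)
    mean_volume = base_rate * seasonal * special
    return rng.poisson(mean_volume)

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast-quality metric."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs(actual - forecast) / actual)) * 100.0

calls = simulate_daily_calls()
naive_forecast = calls[:-7]          # "same day last week" baseline
print("MAPE of weekly-naive forecast: %.1f%%" % mape(calls[7:], naive_forecast))
```

Any candidate forecasting algorithm can be scored the same way, since the simulated series provides as much evaluation data as needed.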
{"title":"Using simulation to evaluate call forecasting algorithms for inbound call center","authors":"Guilherme Steinmann, Paulo José de Freitas Filho","doi":"10.1109/WSC.2013.6721502","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721502","url":null,"abstract":"The call center industry has expanded greatly over recent years and it is constantly striving to increase business efficiency and customer service effectiveness. Incoming call volume forecasting algorithms are used in inbound call centers to predict the demand for services and, as a result, to plan resource allocation. However, a number of phenomena can have an impact on incoming call volumes, meaning that classical forecasting algorithms will produce less than satisfactory results. When evaluating the performance of a forecasting algorithm, acquiring the data needed for research is not always straightforward. This article shows how simulation can be of use to generate data that can be used to evaluate incoming call forecasting algorithms.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126107127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Simulation of mixed discrete and continuous systems: An iron ore terminal example
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721505
Vincent Béchard, Normand Cote
Modeling industrial systems involving discrete and continuous processes is a challenge for practitioners. A simulation approach to handle these situations is based on flow rate discretization (instead of mass discretization): the discrete simulation unfolds as a series of steady-state flow calculations, updated whenever a state variable changes or a random event occurs. The underlying mass balancing problem can be solved with the linear programming simplex algorithm. This paper presents a novel technique based on maximizing flow through a network whose nodes are black-box model units. This network-based method is less sensitive to problem size; the computational effort required to solve the mass balance grows as O(m+n) instead of O(mn) with linear programming. The approach was implemented in the Flexsim™ software and used to simulate an iron ore port terminal. Processes included in the model were mine-to-port train handling, port terminal equipment (processing rates, capacities, operating logic, failures), and ship loading.
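The paper's network formulation and solver are not reproduced here; as a rough sketch of the flow-rate idea, the example below poses a tiny, assumed terminal network as a maximum-flow problem with networkx, with edge capacities standing in for equipment rate limits, and recomputes the steady-state flows after a simulated failure event. Topology, rates, and node names are invented.

```python
import networkx as nx

# Assumed toy network: stockpile -> conveyors -> shiploader -> ship.
# Edge capacities (t/h) stand in for equipment rate limits; the real model's
# units, rates, and topology are not taken from the paper.
G = nx.DiGraph()
G.add_edge("stockpile", "conveyor_A", capacity=6000)
G.add_edge("stockpile", "conveyor_B", capacity=4000)
G.add_edge("conveyor_A", "shiploader", capacity=5000)
G.add_edge("conveyor_B", "shiploader", capacity=3500)
G.add_edge("shiploader", "ship", capacity=7000)

# One steady-state flow calculation: recomputed whenever a state variable
# changes or a random event (e.g., an equipment failure) occurs.
flow_value, flow_per_edge = nx.maximum_flow(G, "stockpile", "ship")
print("achievable throughput (t/h):", flow_value)
for u, targets in flow_per_edge.items():
    for v, f in targets.items():
        if f > 0:
            print(f"  {u} -> {v}: {f} t/h")

# Example of a random event: conveyor_A fails, its capacity drops to zero,
# and the steady-state flows are recomputed.
G["stockpile"]["conveyor_A"]["capacity"] = 0
print("throughput after conveyor_A failure:",
      nx.maximum_flow_value(G, "stockpile", "ship"), "t/h")
```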
{"title":"Simulation of mixed discrete and continuous systems: An iron ore terminal example","authors":"Vincent Béchard, Normand Cote","doi":"10.1109/WSC.2013.6721505","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721505","url":null,"abstract":"Modeling industrial systems involving discrete and continuous processes is a challenge for practitioners. A simulation approach to handle these situations is based on flow rate discretization (instead of mass discretization): the discrete simulation unfolds as a series of steady-state flows calculation updated when a state variable changes or a random event occurs. Underlying mass balancing problem can be solved with the linear programming simplex algorithm. This paper presents a novel technique based on maximizing flow through a network where nodes are black-box model units. This network-based method is less sensitive to problem size; the computation effort required to solve the mass balance is proportional to O(m+n) instead of O(mn) with linear programming. The approach was implemented in FlexsimTM software and used to simulate an iron ore port terminal. Processes included in the model were: mine-to-port trains handling, port terminal equipment (processing rate, capacity, operating logic, failures) and ship loading.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129737147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Low-storage online estimators for quantiles and densities
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721470
Soumyadip Ghosh, R. Pasupathy
The traditional estimator ξp,n for the p-quantile ξp of a random variable X, given n observations from the distribution of X, is obtained by inverting the empirical cumulative distribution function (cdf) constructed from those observations. The estimator ξp,n requires O(n) storage, and it is well known that its mean squared error (with respect to ξp) decays as O(n^-1). In this article, we present an alternative to ξp,n that seems to require dramatically less storage with negligible loss in convergence rate. The proposed estimator relies on an alternative cdf that is constructed by accumulating the observed random variates into variable-sized bins that progressively become finer around the quantile. The bin sizes are strategically adjusted to ensure that the increased bias due to binning does not adversely affect the resulting convergence rate. We present an "online" version of the proposed estimator, along with a discussion of results on its consistency, convergence rates, and storage requirements. We also discuss analogous ideas for density estimation. We limit ourselves to heuristic arguments in support of the theoretical assertions we make, reserving more detailed proofs for a forthcoming paper.
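The authors' adaptive, progressively refined bins are not reproduced here; the sketch below only illustrates the underlying idea of reading a quantile off an accumulated binned cdf, using a fixed grid on an assumed support, so storage is proportional to the number of bins rather than to n. The class name, grid size, and support are assumptions.

```python
import numpy as np

class BinnedQuantileEstimator:
    """Online p-quantile estimate from a fixed binned cdf (illustration only).

    The paper's estimator refines bins adaptively around the quantile; this
    sketch keeps a fixed grid on an assumed support [lo, hi] just to show how
    a quantile can be read off an accumulated histogram with O(bins) storage
    instead of O(n).
    """

    def __init__(self, p, lo, hi, n_bins=256):
        self.p = p
        self.edges = np.linspace(lo, hi, n_bins + 1)
        self.counts = np.zeros(n_bins, dtype=np.int64)
        self.n = 0

    def update(self, x):
        # Clip to the assumed support and increment the matching bin.
        i = int(np.clip(np.searchsorted(self.edges, x, side="right") - 1,
                        0, len(self.counts) - 1))
        self.counts[i] += 1
        self.n += 1

    def quantile(self):
        # Invert the binned empirical cdf; return the right edge of the bin
        # where the cumulative proportion first reaches p.
        cum = np.cumsum(self.counts) / self.n
        j = int(np.searchsorted(cum, self.p))
        return self.edges[j + 1]

rng = np.random.default_rng(0)
est = BinnedQuantileEstimator(p=0.95, lo=-5.0, hi=5.0)
for x in rng.standard_normal(100_000):
    est.update(x)
print("binned estimate:", round(est.quantile(), 3),
      " exact N(0,1) 0.95-quantile ~ 1.645")
```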
{"title":"Low-storage online estimators for quantiles and densities","authors":"Soumyadip Ghosh, R. Pasupathy","doi":"10.1109/WSC.2013.6721470","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721470","url":null,"abstract":"The traditional estimator ξ<sub>p, n</sub> for the p-quantile ξ<sub>p</sub> of a random variable X, given n observations from the distribution of X, is obtained by inverting the empirical cumulative distribution function (cdf) constructed from the obtained observations. The estimator ξ<sub>p, n</sub> requires O(n) storage, and it is well known that the mean squared error of ξ<sub>p, n</sub> (with respect to <sub>p</sub>) decays as O(n<sup>-1</sup>). In this article, we present an alternative to ξ<sub>p, n</sub> that seems to require dramatically less storage with negligible loss in convergence rate. The proposed estimator, ξ<sub>p, n</sub>, relies on an alternative cdf that is constructed by accumulating the observed random variâtes into variable-sized bins that progressively become finer around the quantile. The size of the bins are strategically adjusted to ensure that the increased bias due to binning does not adversely affect the resulting convergence rate. We present an \"online\" version of the estimator ξ<sub>p, n</sub>, along with a discussion of results on its consistency, convergence rates, and storage requirements. We also discuss analogous ideas for density estimation. We limit ourselves to heuristic arguments in support of the theoretical assertions we make, reserving more detailed proofs to a forthcoming paper.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129510151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A subset selection procedure under input parameter uncertainty
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721442
C. G. Corlu, B. Biller
This paper considers a stochastic system simulation with unknown input distribution parameters and assumes the availability of a limited amount of historical data for parameter estimation. We investigate how to account for parameter uncertainty - the uncertainty that is due to the estimation of the input distribution parameters from historical data of finite length - in a subset selection procedure that identifies the stochastic system designs whose sample means are within a user-specified distance of the best mean performance measure. We show that even when the number of simulation replications is large enough for the stochastic uncertainty to be negligible, the amount of parameter uncertainty in the output data imposes a threshold on the user-specified distance for effective use of the subset selection procedure. We demonstrate the significance of this effect of parameter uncertainty for a multi-item inventory system simulation in the presence of short demand histories.
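The paper's procedure and its treatment of parameter uncertainty are not reproduced; the sketch below shows only the basic screening rule that such procedures build on - retain every design whose sample mean is within a user-specified distance delta of the best sample mean - applied to invented inventory-policy data. The design means, replication count, and delta are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def select_subset(sample_means, delta):
    """Keep every design whose sample mean is within delta of the best.

    This is only the basic screening rule; the paper's contribution (how
    parameter uncertainty constrains the usable delta) is not reproduced.
    """
    sample_means = np.asarray(sample_means, float)
    best = sample_means.min()                # assume smaller is better
    return np.flatnonzero(sample_means <= best + delta)

# Assumed example: 5 inventory-policy designs, simulated average cost per period.
true_means = np.array([100.0, 101.0, 104.0, 110.0, 120.0])
n_reps = 50
sample_means = np.array([rng.normal(m, 8.0, n_reps).mean() for m in true_means])

print("sample means:", np.round(sample_means, 2))
print("designs kept with delta=3:", select_subset(sample_means, delta=3.0))
```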
{"title":"A subset selection procedure under input parameter uncertainty","authors":"C. G. Corlu, B. Biller","doi":"10.1109/WSC.2013.6721442","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721442","url":null,"abstract":"This paper considers a stochastic system simulation with unknown input distribution parameters and assumes the availability of a limited amount of historical data for parameter estimation. We investigate how to account for parameter uncertainty - the uncertainty that is due to the estimation of the input distribution parameters from historical data of finite length - in a subset selection procedure that identifies the stochastic system designs whose sample means are within a user-specified distance of the best mean performance measure. We show that even when the number of simulation replications is large enough for the stochastic uncertainty to be negligible, the amount of parameter uncertainty in output data imposes a threshold on the user-specified distance for an effective use of the subset selection procedure for simulation. We demonstrate the significance of this effect of parameter uncertainty for a multi-item inventory system simulation in the presence of short demand histories.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124607105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional simulation for efficient global optimization
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721487
J. Kleijnen, E. Mehdad
A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we propose “conditional simulation” (CS), which gives predictions that, at an old point, equal the observed value in all bootstrap samples. CS accounts for the randomness of the estimated GP parameters. We use the CS predictor variance in the “expected improvement” criterion of “efficient global optimization” (EGO). To quantify the resulting small-sample performance, we experiment with multi-modal test functions. Our main conclusion is that EGO with classic Kriging seems quite robust; EGO with CS tends to perform better only in expensive simulations with small samples.
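The CS-based variance estimate itself is not reproduced here; as a minimal sketch of where such a variance enters, the code below evaluates the standard expected-improvement criterion for minimization from a predictor mean and standard deviation supplied by the caller. The candidate-point values are placeholders, and the function is generic textbook EI, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Standard EI for minimization from a Kriging predictor.

    mu, sigma : predictor mean and standard deviation at candidate points
    f_best    : best (smallest) simulated output observed so far
    The paper replaces the plug-in sigma with a conditional-simulation
    estimate; here sigma is simply whatever the caller supplies.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # With zero predictor variance, EI reduces to the deterministic improvement.
    return np.where(sigma > 0, ei, np.maximum(f_best - mu, 0.0))

# Placeholder predictor output at three candidate points.
print(expected_improvement(mu=[0.2, 0.5, 1.0], sigma=[0.3, 0.05, 0.0], f_best=0.4))
```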
{"title":"Conditional simulation for efficient global optimization","authors":"J. Kleijnen, E. Mehdad","doi":"10.1109/WSC.2013.6721487","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721487","url":null,"abstract":"A classic Kriging or Gaussian process (GP) metamodel estimates the variance of its predictor by plugging-in the estimated GP (hyper)parameters; namely, the mean, variance, and covariances. The problem is that this predictor variance is biased. To solve this problem for deterministic simulations, we propose “conditional simulation” (CS), which gives predictions at an old point that in all bootstrap samples equal the observed value. CS accounts for the randomness of the estimated GP parameters. We use the CS predictor variance in the “expected improvement” criterion of “efficient global optimization” (EGO). To quantify the resulting small-sample performance, we experiment with multi-modal test functions. Our main conclusion is that EGO with classic Kriging seems quite robust; EGO with CS only tends to perform better in expensive simulation with small samples.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115820393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A magic number versus trickle down agent-based model of tax policy
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721526
Shih-Hsien Tseng, T. Allen
The purpose of this article is to explore the interaction of two opposing forces: the force of wealth accumulation in spurring job creation, and the force of satisfaction (the “magic number”) in causing job destruction. An agent-based model is proposed to explore the potentially competing effects of two hypothesized economic forces. The first is “trickle down” economics, in which job creation occurs when wealth accumulates. The second is the “magic number” effect, in which retirement occurs when wealth accumulates. Also considered is the so-called “substitution effect”, in which less is produced when the tax burden is considered to be too high. The “magic number” agent-based model proposed here is then explored using design of experiments. Three types of experiments were performed to explore (but not validate) the effects of assumed conditions on system gross domestic product (GDP) and tax revenue predicted after 50 years of operation.
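The paper's model structure, calibration, and experimental design are not given here in enough detail to reproduce; the toy loop below is a hedged illustration of the two competing mechanisms only: agents accumulate after-tax wages, fund a new job once wealthy enough (“trickle down”), and retire once they reach an assumed “magic number”. Every constant in it is invented.

```python
import random

random.seed(0)

# Toy agent-based sketch of the two opposing forces described above; wages,
# tax rate, and thresholds are invented, not the paper's calibration.
MAGIC_NUMBER = 1_000_000      # retirement ("job destruction") threshold
JOB_CREATION_COST = 200_000   # wealth an agent converts into a new job
TAX_RATE = 0.25
WAGE = 50_000

agents = [{"wealth": 100_000.0, "working": True} for _ in range(100)]
gdp_history, tax_history = [], []

for year in range(50):
    gdp = tax = 0.0
    for a in list(agents):                     # snapshot: new agents start next year
        if not a["working"]:
            continue
        gdp += WAGE
        tax += WAGE * TAX_RATE
        a["wealth"] += WAGE * (1 - TAX_RATE)
        if a["wealth"] >= MAGIC_NUMBER:
            # "Magic number" effect: retire once accumulated wealth suffices.
            a["working"] = False
        elif a["wealth"] >= 2 * JOB_CREATION_COST and random.random() < 0.1:
            # "Trickle down" effect: a sufficiently wealthy agent funds a new job.
            a["wealth"] -= JOB_CREATION_COST
            agents.append({"wealth": 0.0, "working": True})
    gdp_history.append(gdp)
    tax_history.append(tax)

print("year-50 GDP:", gdp_history[-1], " tax revenue:", tax_history[-1])
```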
{"title":"A magic number versus trickle down agent-based model of tax policy","authors":"Shih-Hsien Tseng, T. Allen","doi":"10.1109/WSC.2013.6721526","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721526","url":null,"abstract":"The purpose of this article is to explore the interaction of two opposing forces. The forces of wealth accumulation in sturring job creation and the force of satisfaction and the “magic number” in causing job destruction are explored.\" An agent based model is proposed to explore the potentially competing effects of two hypothesized economic forces. The first is “trickle down” economics in which job creation occurs when wealth accumulates. The second is the “magic number” effect in which retirement occurs when wealth accumulates. Also, considered is the so-called “substitution effect” in which less is produced when the tax burden is considered to be too high. The “magic number” agent-based model proposed here is then explored using design of experiments. Three types of experiments were performed to explore (but not validate) the effects of assumed conditions on system gross domestic product (GDP) and tax revenue predicted after 50 years of operations.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122282281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An entropy based sequential calibration approach for stochastic computer models
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721453
Yuan Jun, S. Ng
Computer models are widely used to simulate complex and costly real processes and systems. In the calibration process of a computer model, the calibration parameters are adjusted to fit the model closely to the real observed data. As these calibration parameters are unknown and are estimated from observed data, it is important to estimate them accurately and to account for the estimation uncertainty in the subsequent use of the model. In this paper, we study in detail an empirical Bayes approach for stochastic computer model calibration that accounts for various uncertainties, including the calibration parameter uncertainty, and propose an entropy-based criterion to improve the estimation of the calibration parameter. This criterion is also compared with the EIMSPE criterion.
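Neither the paper's empirical Bayes machinery nor its sequential entropy criterion is reproduced here; the sketch below only illustrates the quantity such a criterion works with, computing a discretized posterior for a single calibration parameter under an assumed Gaussian observation model and reporting its Shannon entropy as a measure of remaining parameter uncertainty. The toy model, prior, noise level, and data are all assumptions.

```python
import numpy as np

def posterior_entropy(theta_grid, observed, sim_model, noise_sd):
    """Discretized posterior over a calibration parameter and its entropy.

    A uniform prior over theta_grid and independent Gaussian observation
    noise are assumed; this only shows what 'uncertainty about the
    calibration parameter' looks like numerically, not the paper's criterion.
    """
    log_post = np.array([
        -0.5 * np.sum(((observed - sim_model(t)) / noise_sd) ** 2)
        for t in theta_grid
    ])
    log_post -= log_post.max()
    post = np.exp(log_post)
    post /= post.sum()
    entropy = -np.sum(post[post > 0] * np.log(post[post > 0]))
    return post, entropy

# Assumed toy setting: the "computer model" is linear in the calibration
# parameter theta, observed at three input points with noise.
x = np.array([1.0, 2.0, 3.0])
true_theta = 1.7
rng = np.random.default_rng(3)
observed = true_theta * x + rng.normal(0.0, 0.2, size=x.size)

theta_grid = np.linspace(0.0, 3.0, 301)
post, H = posterior_entropy(theta_grid, observed, lambda t: t * x, noise_sd=0.2)
print("posterior mode:", theta_grid[post.argmax()], " entropy (nats):", round(H, 3))
```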
{"title":"An entropy based sequential calibration approach for stochastic computer models","authors":"Yuan Jun, S. Ng","doi":"10.1109/WSC.2013.6721453","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721453","url":null,"abstract":"Computer models are widely used to simulate complex and costly real processes and systems. In the calibration process of the computer model, the calibration parameters are adjusted to fit the model closely to the real observed data. As these calibration parameters are unknown and are estimated based on observed data, it is important to estimate it accurately and account for the estimation uncertainty in the subsequent use of the model. In this paper, we study in detail an empirical Bayes approach for stochastic computer model calibration that accounts for various uncertainties including the calibration parameter uncertainty, and propose an entropy based criterion to improve on the estimation of the calibration parameter. This criterion is also compared with the EIMSPE criterion.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126562231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disease modeling within refugee camps: A multi-agent systems approach
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721551
A. Crooks, A. Hailegiorgis
The displacement of people in times of crisis represents a challenge for humanitarian assistance and disaster relief and for stakeholder agencies. Major challenges include providing adequate security and medical facilities to displaced people. Within this paper, we develop a spatially explicit multi-agent system model that explores the spread of cholera in the Dadaab refugee camps, Kenya. A common characteristic of these camps is poor sanitation and housing conditions, which contribute to frequent outbreaks of cholera. We model the spread of cholera by explicitly representing the interaction between humans (hosts) and their environment and the progression of the epidemic. The results from the model show that the spread of cholera grows radially from contaminated water sources and can have an impact on service provision. Agents' social behavior and movements contribute to the spread of cholera to other camps where water sources were relatively safe.
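The GIS-based multi-agent model itself is not reproduced here; the stripped-down stand-in below places agents around a single contaminated water point and lets the infection probability decay with distance from it, which is enough to show the radial growth pattern described above. Population size, distances, and transmission rates are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the camp model: 2000 agents scattered on a 1 km square,
# one contaminated water point at the centre.
n_agents, n_days = 2000, 30
pos = rng.uniform(0.0, 1000.0, size=(n_agents, 2))          # metres
water_point = np.array([500.0, 500.0])
infected = np.zeros(n_agents, dtype=bool)
infected[rng.integers(n_agents)] = True                      # index case

dist = np.linalg.norm(pos - water_point, axis=1)
for day in range(n_days):
    # Exposure risk at the shared water point grows with the number of
    # infected agents and decays with an agent's distance from the source.
    pressure = infected.sum() / n_agents
    p_infect = 1.0 - np.exp(-10.0 * pressure * np.exp(-dist / 150.0))
    infected |= rng.random(n_agents) < p_infect
    if day % 10 == 0:
        print(f"day {day:2d}: {infected.sum():4d} infected, "
              f"mean distance from source {dist[infected].mean():5.0f} m")
```

The growing mean distance of infected agents from the water point is the radial-spread signature the abstract describes.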
{"title":"Disease modeling within refugee camps: A multi-agent systems approach","authors":"A. Crooks, A. Hailegiorgis","doi":"10.1109/WSC.2013.6721551","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721551","url":null,"abstract":"The displacement of people in times of crisis represents a challenge for humanitarian assistance and disaster relief and stakeholder agencies. Major challenges include providing adequate security and medical facilities to displaced people. Within this paper, we develop a spatially explicit multi-agent system model that explores the spread of cholera in the Dadaab refugee camps, Kenya. A common characteristic of these camps is poor sanitation and housing conditions which contribute to frequent outbreaks of cholera. We model the spread of cholera by explicitly representing the interaction between humans (host) and their environment, and the spread of the epidemic. The results from the model show that the spread of cholera grows radially from contaminated water sources and can have an impact on service provision. Agents' social behavior and movements contribute to the spread of cholera to other camps where water sources were relatively safe.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126840034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analyzing noncombatant evacuation operations using discrete event simulation
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721646
Dallas Kuchel
Large-scale evacuations can be extremely complex, requiring tremendous coordination and logistical support. Noncombatant Evacuation Operations (NEOs) present the additional challenges of civil unrest and violence, which congest the transportation network and can require military assistance to execute the evacuation. NEOs contain many moving parts and simultaneous processes, including thousands of evacuees, vehicles, aircraft, and personnel tracking technology. Discrete event simulation is a technique well suited to handling the complex interactions between the entities and analyzing the behavior of the system. This paper describes the methodology used by the Center for Army Analysis (CAA) to analyze NEOs and presents a case study that illustrates how modeling can be used to evaluate various courses of action and support decision making. When preparing to execute a NEO, decision makers use simulation modeling and analysis to evaluate evacuation timelines, allocate resources and lift assets, select safe haven locations, and determine support requirements for evacuees.
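CAA's model is not reproduced here; the event-driven sketch below is a generic discrete-event fragment, under invented rates, showing the kind of course-of-action comparison the abstract describes: evacuees arrive at a processing center, queue for a limited number of screening lanes, and the time at which the last evacuee clears processing is compared across lane counts.

```python
import heapq, random

random.seed(4)

def evacuation_time(n_evacuees=5000, n_lanes=8, mean_interarrival=0.5,
                    mean_processing=3.0):
    """Event-driven sketch: evacuees arrive at a processing center, wait for
    one of n_lanes screening lanes, and are processed; returns the time (in
    minutes) when the last evacuee clears processing. Rates are invented.
    """
    arrivals, t = [], 0.0
    for _ in range(n_evacuees):
        t += random.expovariate(1.0 / mean_interarrival)
        arrivals.append(t)
    lane_free = [0.0] * n_lanes          # min-heap of times each lane frees up
    heapq.heapify(lane_free)
    last_done = 0.0
    for arr in arrivals:                 # first-come-first-served processing
        free_at = heapq.heappop(lane_free)
        start = max(arr, free_at)
        done = start + random.expovariate(1.0 / mean_processing)
        last_done = max(last_done, done)
        heapq.heappush(lane_free, done)
    return last_done

for lanes in (6, 8, 10):
    print(f"{lanes} lanes -> last evacuee processed at "
          f"{evacuation_time(n_lanes=lanes):7.0f} min")
```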
{"title":"Analyzing noncombatant evacuation operations using discrete event simulation","authors":"Dallas Kuchel","doi":"10.1109/WSC.2013.6721646","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721646","url":null,"abstract":"Large scale evacuations can be extremely complex, requiring tremendous coordination and logistical support. Noncombatant Evacuation Operations (NEOs) present additional challenges of civil unrest and violence that congests the transportation network and can require military assistance to execute the evacuation. NEOs contain many moving parts and simultaneous processes including thousands of evacuees, vehicles, aircraft, and personnel tracking technology. Discrete event simulation is a technique well suited to handle the complex interactions between the entities and to analyze the behavior of the system. This paper describes the methodology used to analyze NEO by the Center for Army Analysis (CAA) and presents a case study that illustrates how modeling can be used to evaluate various courses of action and support decision making. When preparing to execute a NEO, decision makers use simulation modeling and analysis to evaluate evacuation timelines, allocate resources and lift assets, select safe haven locations, and determine support requirements for evacuees.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126901509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Application of a generic simulation model to optimize production and workforce planning at an automotive supplier
Pub Date: 2013-12-08 | DOI: 10.1109/WSC.2013.6721640
K. Altendorfer, Thomas Felberbauer, D. Gruber, Alexander Hübl
This paper presents a comprehensive simulation project at an automotive supplier. The company produces serial and original-accessory car styling parts made from plastic for internal and external applications in passenger cars. For the foaming division, which is identified as the bottleneck, different personnel and qualification scenarios, set-up optimizations, and lot-sizing strategies are compared with the current situation. The key performance measures reported are inventory, tardiness, and service level. The changes in organizational costs due to the scenarios (e.g., employee training, additional employees) are not considered and are traded off against the logistical potential by the company itself. Results of the simulation study indicate that a combination of an additional fitter during the night shift, minor reductions of set-up times, and reduced lot sizes leads to an inventory reduction of ~10.6% and a service level improvement of ~8% compared to the current situation.
{"title":"Application of a generic simulation model to optimize production and workforce planning at an automotive supplier","authors":"K. Altendorfer, Thomas Felberbauer, D. Gruber, Alexander Hübl","doi":"10.1109/WSC.2013.6721640","DOIUrl":"https://doi.org/10.1109/WSC.2013.6721640","url":null,"abstract":"This paper presents a comprehensive simulation project in the area of an automotive supplier. The company produces car styling serial and original accessory parts made from plastic for internal and external applications in passenger cars. For the foaming division, which is identified as the bottleneck, different personnel and qualification scenarios, set-up optimizations and lot-sizing strategies are compared with the current situation. Key performance measures reported are inventory, tardiness and service level. The changes in organizational costs (e.g. employee training, additional employees, etc.), due to the scenarios, are not considered and are traded off with the logistical potential by the company itself. Results of the simulation study indicate that a combination of an additional fitter during night shift, minor reductions of set-up times and reduced lot-sizes leads to an inventory reduction of ~10.6% and a service level improvement of ~8% compared to the current situation.","PeriodicalId":223717,"journal":{"name":"2013 Winter Simulations Conference (WSC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127095263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}