Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822168
A. Skabar
Given data in a matrix X whose rows represent vectors and whose columns comprise a mix of discrete and continuous variables, the method presented in this paper can be used to generate random vectors whose elements display the same marginal distributions and correlations as the variables in X. The data is represented as a bipartite graph consisting of object nodes (representing vectors) and attribute-value nodes. A random walk is used to estimate the distribution of a target variable conditioned on the remaining variables, allowing a random value to be drawn for that variable; this leads to the use of Gibbs sampling to generate entire vectors. Unlike conventional methods, the proposed method requires neither the joint distribution nor the correlations to be specified, learned, or modeled explicitly in any way. Application to the Australian Credit dataset demonstrates the feasibility of the approach for generating random vectors from challenging real-world datasets.
Title: "Random vector generation from mixed-attribute datasets using random walk," 2016 Winter Simulation Conference (WSC)
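The mechanism the abstract describes (conditionals estimated by random walks on an object/attribute-value bipartite graph, composed into a Gibbs sampler) can be sketched for an all-discrete toy case. The dataset, function names, and walk counts below are illustrative assumptions, not the paper's implementation:

```python
import random
from collections import Counter

# Toy all-discrete dataset: rows are objects, columns are attributes.
DATA = [
    ("red", "small"), ("red", "small"), ("red", "large"),
    ("blue", "large"), ("blue", "large"), ("blue", "small"),
]

def conditional_via_walk(data, target_col, fixed, n_walks=2000, rng=None):
    """Estimate P(target | fixed attributes) by short walks on the bipartite
    graph: start at an attribute-value node, step to a random object node
    containing that value, then read off the object's target value."""
    rng = rng or random.Random(0)
    counts = Counter()
    start_nodes = list(fixed.items())
    for _ in range(n_walks):
        col, val = rng.choice(start_nodes)           # attribute-value node
        objs = [row for row in data if row[col] == val]
        obj = rng.choice(objs)                       # step to an object node
        counts[obj[target_col]] += 1
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def gibbs_sample(data, n_cols, sweeps=20, rng=None):
    """Generate one synthetic row by Gibbs sampling: resample each attribute
    in turn from its walk-estimated conditional given the others."""
    rng = rng or random.Random(1)
    row = list(rng.choice(data))                     # initialize from a real row
    for _ in range(sweeps):
        for col in range(n_cols):
            fixed = {c: row[c] for c in range(n_cols) if c != col}
            dist = conditional_via_walk(data, col, fixed, rng=rng)
            vals, probs = zip(*dist.items())
            row[col] = rng.choices(vals, weights=probs)[0]
    return tuple(row)
```

Note that the walk-based estimate never requires the joint distribution in closed form, which is the point the abstract makes; handling continuous columns (as the paper does) would need an additional discretization or kernel step not shown here.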
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822126
Wenyu Wang, H. Wan, Kuo-Hao Chang
STRONG is a response-surface-methodology-based algorithm that iteratively constructs a linear or quadratic fitness model to guide the search direction within a trust region. Despite its elegance and convergence guarantees, a bottleneck of the original STRONG on high-dimensional problems is its high cost per iteration. This paper proposes a new algorithm, RBC-STRONG, that extends STRONG with the randomized block coordinate descent optimization framework. We prove its convergence property, and our numerical experiments show that RBC-STRONG achieves better computational performance than existing methods.
Title: "Randomized block coordinate descendant STRONG for large-scale stochastic optimization"
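The block-coordinate idea the abstract builds on can be sketched in a few lines: each iteration updates only one randomly chosen block of coordinates, using a finite-difference partial gradient as a crude stand-in for STRONG's local response-surface model. This is a generic sketch under those assumptions, not the RBC-STRONG algorithm itself (which also manages a trust region and stochastic responses):

```python
import random

def rbcd_minimize(f, x0, n_blocks=2, step=0.1, iters=400, h=1e-5, rng=None):
    """Minimal randomized block coordinate descent: pick a random block,
    estimate its partial gradient by central differences, step on that
    block only. Trust-region adaptation and acceptance tests (which
    STRONG-style methods add) are deliberately omitted."""
    rng = rng or random.Random(0)
    x = list(x0)
    d = len(x)
    blocks = [list(range(i, d, n_blocks)) for i in range(n_blocks)]
    for _ in range(iters):
        block = rng.choice(blocks)
        for i in block:
            xp, xm = list(x), list(x)
            xp[i] += h
            xm[i] -= h
            g = (f(xp) - f(xm)) / (2 * h)   # finite-difference partial derivative
            x[i] -= step * g
    return x

# Usage: a separable quadratic with optimum at (1, 2, 3, 4).
sol = rbcd_minimize(lambda v: sum((v[i] - (i + 1)) ** 2 for i in range(4)),
                    [0.0] * 4)
```

The per-iteration cost scales with the block size rather than the full dimension, which is exactly the bottleneck the abstract says RBC-STRONG attacks.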
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822161
G. Curry, A. Banerjee, H. Moya, H. L. Jones
A discrete-event simulation language was implemented in MATLAB. The approach is similar to the process/command modeling paradigm utilized in GPSS and the languages that followed it. The language is implemented as a MATLAB script file (m-file) and can be embedded in a larger analysis package as a sub-function of an optimization/simulation system. The modeler builds the simulation from support functions provided by this system but must insert them at the proper locations in the MATLAB master function. Developing a correct model therefore requires understanding the internal simulation structure, organized around a switch/case statement, and knowing where the various parts of the simulation belong. To simplify this process, a model generator has been developed that parses a model text file and produces the required MATLAB master simulation function. The generator reduces the implementation-specific knowledge of the MATLAB simulation language the modeler must hold and makes correct model development easier.
Title: "A modeling language generator for a discrete event simulation language in MATLAB"
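The internal structure the abstract alludes to (a time-ordered event list dispatched through a switch/case) is the classic event-scheduling core of any discrete-event simulator. A minimal sketch in Python (chosen here purely for illustration; the paper's system is MATLAB), with a dispatch table playing the role of the switch/case:

```python
import heapq

class Simulator:
    """Minimal event-scheduling DES core: a heap-ordered future-event list
    plus a handler table standing in for the switch/case the abstract
    describes."""
    def __init__(self):
        self.clock = 0.0
        self.events = []     # heap of (time, seq, event_type, payload)
        self.handlers = {}   # event_type -> handler(sim, payload)
        self._seq = 0        # tie-breaker so payloads are never compared

    def schedule(self, delay, event_type, payload=None):
        heapq.heappush(self.events,
                       (self.clock + delay, self._seq, event_type, payload))
        self._seq += 1

    def run(self, until=float("inf")):
        while self.events and self.events[0][0] <= until:
            self.clock, _, etype, payload = heapq.heappop(self.events)
            self.handlers[etype](self, payload)

# Usage: three deterministic arrivals, two time units apart.
arrivals = []
def on_arrive(sim, i):
    arrivals.append((sim.clock, i))
    if i < 3:
        sim.schedule(2.0, "arrive", i + 1)

sim = Simulator()
sim.handlers["arrive"] = on_arrive
sim.schedule(0.0, "arrive", 1)
sim.run(until=10.0)
```

A model generator of the kind the paper describes would emit the handler bodies and their registrations from a higher-level model description, sparing the modeler from editing the core loop directly.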
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822104
Lucy E. Morgan, A. Titman, D. Worthington, B. Nelson
Input uncertainty (IU) arises when simulation models are driven by input distributions estimated from finite amounts of real-world data. Methods have been presented for quantifying IU when stationary input distributions are used. In this paper we extend this work and provide two methods for quantifying IU in simulation models driven by piecewise-constant non-stationary Poisson arrival processes. Numerical evaluations and illustrations of the methods indicate that they perform well.
Title: "Input uncertainty quantification for simulation models with piecewise-constant non-stationary Poisson arrival processes"
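For context, the input process the paper's methods target is easy to simulate: with a piecewise-constant rate, the count on each piece is Poisson with mean rate times length, and arrivals are uniform within the piece. The sketch below only generates the process; it does not perform the paper's IU quantification:

```python
import math
import random

def sample_pc_nhpp(rates, breakpoints, rng=None):
    """Sample arrival times from a non-stationary Poisson process whose
    rate is piecewise constant: rates[k] applies on
    [breakpoints[k], breakpoints[k+1])."""
    rng = rng or random.Random(0)
    times = []
    for k, lam in enumerate(rates):
        a, b = breakpoints[k], breakpoints[k + 1]
        mean = lam * (b - a)
        # Draw N ~ Poisson(mean) by CDF inversion (keeps the sketch
        # dependency-free).
        u, p, n = rng.random(), math.exp(-mean), 0
        cdf = p
        while u > cdf:
            n += 1
            p *= mean / n
            cdf += p
        # Given N arrivals on the piece, their times are i.i.d. uniform.
        times.extend(sorted(rng.uniform(a, b) for _ in range(n)))
    return times
```

Estimating `rates` from finite arrival data, and propagating the estimation error through the simulation output, is precisely the IU problem the paper addresses.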
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822296
Yen-Shao Chen, Cheng-Hung Wu, Shi-Chung Chang
Advances in communication and computing technologies have made decentralized control of automated material handling systems (AMHS) a promising way to alleviate blocking and congestion of production flows and raise productivity in large-scale automated factories. With the growing availability of edge computing and low-cost mobile communications, whether among vehicles (V2V) or between vehicles and machines (V2M), decentralized vehicle control can exploit frequent, low-latency exchanges of neighborhood information and local control computation to increase AMHS operating efficiency. In this study, a decentralized control algorithm, BALI (blocking avoidance by exploiting location information), exploits V2X exchanges of local information for transport-job matching, blocking inference, and job exchange in AMHS vehicle dispatching. Performance evaluation of BALI by discrete-event simulation shows that it can significantly reduce blocking and congestion in production flows compared with commonly used nearest-job-first rule-based heuristics.
Title: "Decentralized dispatching for blocking avoidance in automated material handling systems"
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822201
Soroosh Gholami, H. Sarjoughian
This paper proposes a multi-resolution co-design modeling approach in which the hardware and software parts of a system are loosely represented and composable. The approach is demonstrated for Networks-on-Chip (NoC), where the network software directs communication among switches, links, and interfaces. The complexity of such systems can be better tamed by modeling frameworks in which multi-resolution model abstractions along a system's hardware and software dimensions are separately specified. Such frameworks build on hierarchical, component-based modeling principles and methods. Hybrid model composition establishes relationships across models, while multi-resolution models can be better specified by separately accounting for multiple levels of hardware and software abstraction. For Network-on-Chip, the abstraction levels are interface, capacity, flit, and hardware, with resolutions defined in terms of object, temporal, process, and spatial aspects. The proposed modeling approach benefits from co-design and multi-resolution modeling in order to better manage the rich dynamics of the hardware and software parts of systems and their network-based interactions.
Title: "Multi-resolution co-design modeling: A Network-on-Chip model"
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822373
A. Guazzini, M. Duradoni, G. Gronchi
Social loafing and free riding are common phenomena that may hinder crowdsourcing. The purpose of this work is to identify the minimum conditions that promote cooperation and group problem solving while avoiding free riding and social loafing. We consider two scenarios (Recipe A, in which free riders have access to the benefits produced by groups, and Recipe B, in which the benefits produced by a group are shared only within that group) and investigate the relationship among the tendency to cooperate, group size, and task difficulty by means of numerical simulations. Results indicate that in the Recipe A world, collective intelligence and crowdsourcing are generally less efficient than in the Recipe B world; indeed, in the latter, cooperation appears to be the optimal strategy for the progress of the world. Given the social importance of crowdsourcing, we discuss some useful implications of our results for crowdsourcing projects.
Title: "The selfish vaccine Recipe: A simple mechanism for avoiding free-riding"
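The two sharing rules can be made concrete with a toy public-goods payoff function. Everything here (payoff form, parameter values, function name) is an illustrative assumption in the spirit of the abstract's Recipe A/B distinction, not the paper's actual model:

```python
import random

def payoffs(strategies, group_size, benefit=3.0, cost=1.0, recipe="B", rng=None):
    """Toy public-goods payoffs under the two recipes. Each cooperator
    (strategy 1) pays `cost` and contributes `benefit` to a pool; free
    riders (strategy 0) contribute nothing. Recipe A: the pool is shared
    by the whole population, free riders included. Recipe B: each group's
    pool is shared only among that group's members."""
    rng = rng or random.Random(0)
    n = len(strategies)
    idx = list(range(n))
    rng.shuffle(idx)                                  # random group assignment
    groups = [idx[i:i + group_size] for i in range(0, n, group_size)]
    pay = [0.0] * n
    if recipe == "A":
        pool = benefit * sum(strategies)              # one global pool
        for i in range(n):
            pay[i] = pool / n - (cost if strategies[i] else 0.0)
    else:
        for g in groups:
            pool = benefit * sum(strategies[i] for i in g)
            for i in g:
                pay[i] = pool / len(g) - (cost if strategies[i] else 0.0)
    return pay
```

Under Recipe A a free rider always collects the same share as a cooperator while paying no cost, which is the incentive structure that, per the abstract, makes the Recipe A world less efficient.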
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822156
Alessandro Pellegrini, F. Quaglia, Cristina Montañola-Sales, Josep Casanovas-García
Agent-based modeling and simulation is a versatile and promising methodology for capturing complex interactions among entities and their surrounding environment. A great advantage is its ability to model phenomena at a macro scale by exploiting simpler descriptions at a micro level. It has proven effective in many fields and is rapidly becoming a de facto standard in the study of population dynamics. In this article we study programmability and performance aspects of the latest-generation ROOT-Sim speculative PDES environment for multi-/many-core shared-memory architectures. ROOT-Sim transparently offers a programming model in which interactions can be based on both explicit message passing and in-place state accesses. We introduce programming guidelines for systematically exploiting these facilities in agent-based simulations, and we study the performance effects of an innovative load-sharing policy targeting these types of dependencies. An experimental assessment with synthetic and real-world applications validates our proposal.
Title: "Programming agent-based demographic models with cross-state and message-exchange dependencies: A study with speculative PDES and automatic load-sharing"
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822341
Joseph C. Hoecherl, M. Robbins, R. Hill, D. Ahner
We consider the problem of making accession and promotion decisions in the United States Air Force officer sustainment system. Accession decisions determine how many officers should be hired into the system at the lowest grade for each career specialty; promotion decisions determine how many officers should be promoted to the next highest grade. We formulate a Markov decision process model to examine this military workforce planning problem. The large size of the problem instance motivating this research makes classical exact dynamic programming methods inappropriate. As such, we develop and test approximate dynamic programming (ADP) algorithms to determine high-quality personnel policies relative to current practice. Our best ADP algorithm attains a statistically significant 2.8 percent improvement over the sustainment line policy currently employed by the USAF, which serves as the benchmark policy.
Title: "Approximate dynamic programming algorithms for United States Air Force officer sustainment"
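The Bellman recursion at the heart of the paper's MDP formulation can be made concrete on a deliberately tiny workforce toy. Exact value iteration, as below, is precisely the classical method the abstract says breaks down at realistic scale (motivating ADP); the state space, attrition probability, and cost terms here are invented for illustration only:

```python
def value_iteration(states, actions, trans, reward, gamma=0.95, tol=1e-6):
    """Exact value iteration with in-place (Gauss-Seidel) updates.
    trans(s, a) -> list of (prob, next_state); reward(s, a) -> float."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                reward(s, a) + gamma * sum(p * V[s2] for p, s2 in trans(s, a))
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy workforce MDP: state = officer count 0..3, action = number hired.
# Each period one officer leaves with probability 0.5 (attrition), and
# deviating from a staffing target of 2 is penalized, as is hiring.
STATES = list(range(4))
def actions(s):
    return range(4 - s)            # cannot hire beyond capacity
def trans(s, a):
    s1 = s + a                     # post-hiring headcount
    if s1 == 0:
        return [(1.0, 0)]
    return [(0.5, s1), (0.5, s1 - 1)]
def reward(s, a):
    return -abs((s + a) - 2) - 0.1 * a   # staffing deviation + hiring cost

V = value_iteration(STATES, actions, trans, reward)
```

In the paper's setting the state enumerates officers by grade and years of service across career specialties, so the `for s in states` sweep above is infeasible; ADP replaces it with sampled states and an approximate value function.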
Pub Date: 2016-12-11 | DOI: 10.1109/WSC.2016.7822148
J. Branke, Wen Zhang, Yang Tao
In this paper, we propose a myopic ranking and selection procedure for the multi-objective case. Whereas most publications on multi-objective problems aim at maximizing the probability of correctly selecting all Pareto-optimal solutions, we suggest a new performance measure: minimizing the difference in hypervolume between the observed means of the perceived Pareto front and the true Pareto front. We argue that this hypervolume difference is often more relevant for a decision maker. Empirical tests show that the proposed method performs well with respect to the stated hypervolume objective.
Title: "Multi-objective ranking and selection based on hypervolume"
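The hypervolume-difference measure the abstract proposes is easy to state for two minimization objectives: the hypervolume of a front is the area it dominates relative to a reference point, computed by sorting the front and summing rectangular slices. A minimal sketch (the function name and example fronts are illustrative, and non-dominated input is assumed):

```python
def hypervolume_2d(front, ref):
    """Area dominated by a 2-D Pareto front (minimization in both
    objectives) relative to reference point `ref`. Assumes `front`
    contains only mutually non-dominated points."""
    pts = sorted(front)            # ascending in objective 1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:             # f2 descends along a minimization front
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Usage: a perceived front missing one true Pareto point loses hypervolume;
# that shortfall is the kind of difference the proposed measure penalizes.
true_front = [(1, 3), (2, 2), (3, 1)]
perceived  = [(1, 3), (3, 1)]
hv_diff = hypervolume_2d(true_front, (4, 4)) - hypervolume_2d(perceived, (4, 4))
```

Unlike the probability of correct selection, this difference weights errors by how much Pareto-dominated volume they cost, which is the decision-maker relevance argument made in the abstract.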