This paper considers the computation of the stationary queue length distribution in single-server queues with level-dependent arrivals and disasters. We assume that service times follow a general distribution and therefore, we consider the stationary queue length distribution via an imbedded Markov chain. Because this imbedded Markov chain has infinitely many states, level dependence, and bidirectional jumps of levels, it is hard to compute the solution of the global balance equation exactly. We thus consider the augmented truncation approximation. In particular, we focus on the computation of the truncated state transition probability matrix of the imbedded Markov chain, assuming that the underlying continuous-time absorbing Markov chain during a service time is not uniformizable. Under some stability conditions, we develop a computational procedure for the truncated transition probability matrix, where the upper bound of errors owing to truncation can be set in advance. We also provide some numerical examples and demonstrate that our procedure works well.
{"title":"NUMERICAL IMPLEMENTATION OF THE AUGMENTED TRUNCATION APPROXIMATION TO SINGLE-SERVER QUEUES WITH LEVEL-DEPENDENT ARRIVALS AND DISASTERS","authors":"Masatoshi Kimura, T. Takine","doi":"10.15807/JORSJ.64.61","DOIUrl":"https://doi.org/10.15807/JORSJ.64.61","url":null,"abstract":"This paper considers the computation of the stationary queue length distribution in single-server queues with level-dependent arrivals and disasters. We assume that service times follow a general distribution and therefore, we consider the stationary queue length distribution via an imbedded Markov chain. Because this imbedded Markov chain has infinitely many states, level dependence, and bidirectional jumps of levels, it is hard to compute the solution of the global balance equation exactly. We thus consider the augmented truncation approximation. In particular, we focus on the computation of the truncated state transition probability matrix of the imbedded Markov chain, assuming that the underlying continuous-time absorbing Markov chain during a service time is not uniformizable. Under some stability conditions, we develop a computational procedure for the truncated transition probability matrix, where the upper bound of errors owing to truncation can be set in advance. We also provide some numerical examples and demonstrate that our procedure works well.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46544585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
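As a minimal illustration of the augmented truncation idea — not the paper's procedure, which handles non-uniformizable service-time dynamics and a priori error bounds — one can take the north-west corner of an infinite transition probability matrix and return the lost row mass to the last column ("last-column augmentation"). The sketch below uses a toy level process with illustrative transition probabilities:

```python
def augmented_truncation(p, n):
    """North-west n x n corner of an infinite stochastic matrix, with each row's
    lost probability mass added back to the last column (last-column
    augmentation), so the truncated matrix is again stochastic.
    p(i, j) returns the (i, j) entry of the infinite matrix."""
    P = [[p(i, j) for j in range(n)] for i in range(n)]
    for row in P:
        row[-1] += 1.0 - sum(row)
    return P

def stationary(P, iters=3000):
    """Stationary vector of a finite stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# toy level process on {0, 1, 2, ...}: up w.p. 0.3, down w.p. 0.6, stay w.p. 0.1
def p(i, j):
    if i == 0:
        return {0: 0.7, 1: 0.3}.get(j, 0.0)
    return {i - 1: 0.6, i: 0.1, i + 1: 0.3}.get(j, 0.0)

P = augmented_truncation(p, 30)
pi = stationary(P)
```

For this birth-death toy the cut balance `pi[k] * 0.3 = pi[k+1] * 0.6` gives a geometric stationary vector with ratio 0.5, which the truncated approximation reproduces away from the boundary.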
Norito Minamikawa, Akiyoshi Shioura (Tokyo Institute of Technology) The concept of M-convex function gives a unified framework for discrete optimization problems with nonlinear objective functions such as the minimum convex cost flow problem and the convex resource allocation problem. M-convex function minimization is one of the most fundamental problems concerning M-convex functions. It is known that a minimizer of an M-convex function can be found by a steepest descent algorithm in a finite number of iterations. Recently, the exact number of iterations required by a basic steepest descent algorithm was obtained. Furthermore, it was shown that the trajectory of the solutions generated by the basic steepest descent algorithm is a geodesic between the initial solution and the nearest minimizer. In this paper, we give a simpler and shorter proof of this claim by refining the minimizer cut property. We also consider the minimization of a jump M-convex function, which is a generalization of M-convex function, and analyze the number of iterations required by the basic steepest descent algorithm. In particular, we show that the trajectory of the solutions generated by the algorithm is a geodesic between the initial solution and the nearest minimizer.
{"title":"TIME BOUNDS OF BASIC STEEPEST DESCENT ALGORITHMS FOR M-CONVEX FUNCTION MINIMIZATION AND RELATED PROBLEMS","authors":"N. Minamikawa, A. Shioura","doi":"10.15807/JORSJ.64.45","DOIUrl":"https://doi.org/10.15807/JORSJ.64.45","url":null,"abstract":"Norito Minamikawa, Akiyoshi Shioura (Tokyo Institute of Technology) The concept of M-convex function gives a unified framework for discrete optimization problems with nonlinear objective functions such as the minimum convex cost flow problem and the convex resource allocation problem. M-convex function minimization is one of the most fundamental problems concerning M-convex functions. It is known that a minimizer of an M-convex function can be found by a steepest descent algorithm in a finite number of iterations. Recently, the exact number of iterations required by a basic steepest descent algorithm was obtained. Furthermore, it was shown that the trajectory of the solutions generated by the basic steepest descent algorithm is a geodesic between the initial solution and the nearest minimizer. In this paper, we give a simpler and shorter proof of this claim by refining the minimizer cut property. We also consider the minimization of a jump M-convex function, which is a generalization of M-convex function, and analyze the number of iterations required by the basic steepest descent algorithm. In particular, we show that the trajectory of the solutions generated by the algorithm is a geodesic between the initial solution and the nearest minimizer.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42878672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
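The basic steepest descent algorithm has a particularly simple form in the separable convex resource allocation problem, which the abstract above cites as a special case of M-convex minimization. The sketch below (illustrative instance; not the paper's general jump M-convex setting) moves one unit per step along the best improving exchange until no exchange improves, which by M-convexity is a global minimum:

```python
def steepest_descent_resource_allocation(fs, x):
    """Basic steepest descent on f(x) = sum_i fs[i](x[i]) over vectors with a
    fixed coordinate sum. Each step moves one unit from coordinate i to j,
    picking the pair with the largest decrease; stops when no move improves."""
    n = len(x)
    steps = 0
    while True:
        best, pair = 0, None
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                delta = (fs[i](x[i] - 1) - fs[i](x[i])
                         + fs[j](x[j] + 1) - fs[j](x[j]))
                if delta < best:
                    best, pair = delta, (i, j)
        if pair is None:
            return x, steps
        i, j = pair
        x[i] -= 1
        x[j] += 1
        steps += 1

# allocate 6 units to minimize sum of (x_i - t_i)^2 with targets t = (1, 2, 3)
fs = [lambda v, t=t: (v - t) ** 2 for t in (1, 2, 3)]
x, steps = steepest_descent_resource_allocation(fs, [6, 0, 0])
```

Starting from [6, 0, 0], five single-unit moves suffice to reach the minimizer [1, 2, 3], matching the intuition that the trajectory is a shortest path (geodesic) to the nearest minimizer.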
Masaya Hasebe, K. Nonobe, Wei Wu, N. Katoh, Takahito Tanabe, A. Ikegami
When dealing with real-world problems, optimization models generally include only important structures and omit latent considerations that cannot be practically specified in advance. Therefore, it can be useful for optimization approaches to provide a “solution space” or “many solutions” containing a solution that the decision-maker is likely to accept. The nurse scheduling problem is important for hospitals to maintain their quality of health care. Nowadays, given an instance, mathematical models can be applied to find optimal or near-optimal schedules within realistic computational times. However, even with the help of modern mathematical optimization systems, decision-makers must confirm the quality of obtained solutions and need to manually modify them into an acceptable form. Therefore, general optimization algorithms that provide insufficient information for effective modifications remain impractical for use in many hospitals in Japan. To improve this situation, we propose a pattern-based formulation that generates information helpful in most practical cases in hospitals and other care facilities in Japan. This approach involves generating many optimal solutions and analyzing their features. Computational results show that the proposed approach provides useful information within a reasonable computational time.
{"title":"GENERATING DECISION SUPPORT INFORMATION FOR NURSE SCHEDULING INCLUDING EFFECTIVE MODIFICATIONS OF SOLUTIONS","authors":"Masaya Hasebe, K. Nonobe, Wei Wu, N. Katoh, Takahito Tanabe, A. Ikegami","doi":"10.15807/JORSJ.64.109","DOIUrl":"https://doi.org/10.15807/JORSJ.64.109","url":null,"abstract":"When dealing with real-world problems, optimization models generally include only important structures and omit latent considerations that cannot be practically specified in advance. Therefore, it can be useful for optimization approaches to provide a “solution space” or “many solutions” containing a solution that the decision-maker is likely to accept. The nurse scheduling problem is important for hospitals to maintain their quality of health care. Nowadays, given an instance, mathematical models can be applied to find optimal or near-optimal schedules within realistic computational times. However, even with the help of modern mathematical optimization systems, decision-makers must confirm the quality of obtained solutions and need to manually modify them into an acceptable form. Therefore, general optimization algorithms that provide insufficient information for effective modifications remain impractical for use in many hospitals in Japan. To improve this situation, we propose a pattern-based formulation that generates information helpful in most practical cases in hospitals and other care facilities in Japan. This approach involves generating many optimal solutions and analyzing their features. Computational results show that the proposed approach provides useful information within a reasonable computational time.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49558997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The maximum budget allocation (MBA) problem is the problem of allocating items to agents so as to maximize the total payment from all agents, where the payment from an agent is the sum of prices of the items allocated to that agent, capped by the agent’s budget. In this study, we consider a generalization of the MBA problem in which each item has a capacity constraint, and present two approximation algorithms for it. The first is a polynomial bicriteria algorithm that is guaranteed to output an allocation producing at least 1 − r times the optimal feasible total payment, where r is the maximum ratio of price to budget, and to violate the capacity constraints on items by at most a factor of 2. The other is a pseudo-polynomial algorithm with approximation ratio 1/3 · (1− r/4) that always outputs a feasible allocation.
{"title":"APPROXIMATION ALGORITHMS FOR A GENERALIZATION OF THE MAXIMUM BUDGET ALLOCATION","authors":"Takuro Fukunaga","doi":"10.15807/JORSJ.64.31","DOIUrl":"https://doi.org/10.15807/JORSJ.64.31","url":null,"abstract":"The maximum budget allocation (MBA) problem is the problem of allocating items to agents so as to maximize the total payment from all agents, where the payment from an agent is the sum of prices of the items allocated to that agent, capped by the agent’s budget. In this study, we consider a generalization of the MBA problem in which each item has a capacity constraint, and present two approximation algorithms for it. The first is a polynomial bicriteria algorithm that is guaranteed to output an allocation producing at least 1 − r times the optimal feasible total payment, where r is the maximum ratio of price to budget, and to violate the capacity constraints on items by at most a factor of 2. The other is a pseudo-polynomial algorithm with approximation ratio 1/3 · (1− r/4) that always outputs a feasible allocation.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"64 1","pages":"31-44"},"PeriodicalIF":0.0,"publicationDate":"2021-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45120159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
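The capped payment objective described in the abstract above fits in a few lines of code. A minimal sketch with illustrative data (the function and instance names are not from the paper):

```python
def agent_payment(item_prices, allocated_items, budget):
    """Payment from one agent: the sum of prices of the items allocated to
    that agent, capped by the agent's budget."""
    return min(budget, sum(item_prices[i] for i in allocated_items))

def total_payment(item_prices, allocation, budgets):
    """Total payment over all agents; allocation maps agent -> item indices."""
    return sum(agent_payment(item_prices, items, budgets[a])
               for a, items in allocation.items())

prices = [4, 3, 5]
budgets = {"a1": 6, "a2": 10}
alloc = {"a1": [0, 1], "a2": [2]}
total = total_payment(prices, alloc, budgets)
# a1 pays min(6, 4 + 3) = 6; a2 pays min(10, 5) = 5; total = 11
```

The cap is what makes the problem hard: once an agent's budget binds, allocating further items to that agent adds nothing, so the objective is submodular rather than linear.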
The purpose of this study is to reveal how marketing affects customers’ purchase amount and number of visits in Japanese department stores. We model purchase amounts by using a hierarchical Bayes regression model and number of visits by using a hierarchical Bayes Poisson regression model. Furthermore, we estimate the latent factor behind price as the purchase amount per month with a Type-1 Tobit model and the structural heterogeneity of each customer with a model for variable selection. Direct mail and events are used as marketing measures. The analytical results reveal marketing measures that raise customers’ final purchase amounts.
{"title":"AN ANALYSIS OF MECHANISM FOR CUSTOMERS' PURCHASE AMOUNT AND NUMBER OF VISITS IN DEPARTMENT STORE","authors":"Hiroki Yamada, Tadahiko Sato","doi":"10.15807/JORSJ.64.12","DOIUrl":"https://doi.org/10.15807/JORSJ.64.12","url":null,"abstract":"The purpose of this study is to reveal how marketing affects customers’ purchase amount and number of visits in Japanese department stores. We model purchase amounts by using a hierarchical Bayes regression model and number of visits by using a hierarchical Bayes Poisson regression model. Furthermore, we estimate the latent factor behind price as the purchase amount per month with a Type-1 Tobit model and the structural heterogeneity of each customer with a model for variable selection. Direct mail and events are used as marketing measures. The analytical results reveal marketing measures that raise customers’ final purchase amounts.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42041296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The information diffusion game, which is a type of non-cooperative game, models the diffusion process of information in networks for several competitive firms that want to spread their information. Recently, the game on weighted graphs was introduced and pure Nash equilibria for the game were discussed. This paper gives a full characterization of the existence of pure Nash equilibria for the game on weighted cycles and paths according to the number of vertices, the number of players and weight classes.
{"title":"NASH EQUILIBRIA FOR INFORMATION DIFFUSION GAMES ON WEIGHTED CYCLES AND PATHS","authors":"Tianyang Li, Maiko Shigeno","doi":"10.15807/JORSJ.64.1","DOIUrl":"https://doi.org/10.15807/JORSJ.64.1","url":null,"abstract":"The information diffusion game, which is a type of non-cooperative game, models the diffusion process of information in networks for several competitive firms that want to spread their information. Recently, the game on weighted graphs was introduced and pure Nash equilibria for the game were discussed. This paper gives a full characterization of the existence of pure Nash equilibria for the game on weighted cycles and paths according to the number of vertices, the number of players and weight classes.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48913899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The asset allocation strategy is important to manage assets effectively. In recent years, the risk parity strategy has become attractive to academics and practitioners. The risk parity strategy determines the allocation for asset classes in order to equalize their contributions to overall portfolio risk. Roncalli and Weisang (2016) propose the use of "risk factors" instead of asset classes. This approach achieves portfolio diversification based on the decomposition of portfolio risk into risk factor contributions. The factor-based risk parity approach can diversify across the true sources of risk, whereas the asset-class-based approach may lead to solutions with hidden risk concentration. However, it has some shortcomings. In our paper, we propose a methodology for constructing a well-balanced portfolio by mixing the asset-class-based and factor-based risk parity approaches. We also propose a method of determining the weight of the two approaches using the diversification index. We can construct a portfolio dynamically controlled with a weight that is adjusted in response to the market environment. We examine the characteristics of the model through numerical tests with seven global financial indices and three factors. We find it gives a well-balanced portfolio between asset and factor diversifications. We also implement a backtest from 2005 to 2018, and the performances are measured on a USD basis. We find our method decreases the standard deviation of return and downside risk, and it has a higher Sharpe ratio than other portfolio strategies. These results show our new method has practical advantages.
{"title":"ASSET ALLOCATION WITH ASSET-CLASS-BASED AND FACTOR-BASED RISK PARITY APPROACHES","authors":"H. Kato, Norio Hibiki","doi":"10.15807/jorsj.63.93","DOIUrl":"https://doi.org/10.15807/jorsj.63.93","url":null,"abstract":"The asset allocation strategy is important to manage assets effectively. In recent years, the risk parity strategy has become attractive to academics and practitioners. The risk parity strategy determines the allocation for asset classes in order to equalize their contributions to overall portfolio risk. Roncalli and Weisang (2016) propose the use of \"risk factors\" instead of asset classes. This approach achieves portfolio diversification based on the decomposition of portfolio risk into risk factor contributions. The factor-based risk parity approach can diversify across the true sources of risk, whereas the asset-class-based approach may lead to solutions with hidden risk concentration. However, it has some shortcomings. In our paper, we propose a methodology for constructing a well-balanced portfolio by mixing the asset-class-based and factor-based risk parity approaches. We also propose a method of determining the weight of the two approaches using the diversification index. We can construct a portfolio dynamically controlled with a weight that is adjusted in response to the market environment. We examine the characteristics of the model through numerical tests with seven global financial indices and three factors. We find it gives a well-balanced portfolio between asset and factor diversifications. We also implement a backtest from 2005 to 2018, and the performances are measured on a USD basis. We find our method decreases the standard deviation of return and downside risk, and it has a higher Sharpe ratio than other portfolio strategies. These results show our new method has practical advantages.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47721162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
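The equal-risk-contribution condition underlying risk parity is easy to check numerically. A toy two-asset sketch (illustrative numbers, not the paper's seven-index universe), using the fact that for exactly two assets the equal-risk-contribution weights coincide with inverse-volatility weights regardless of correlation:

```python
import math

def risk_contributions(w, cov):
    """Risk contribution of asset i: RC_i = w_i * (cov @ w)_i / sigma,
    where sigma = sqrt(w' cov w); the RC_i sum to the portfolio volatility."""
    n = len(w)
    cw = [sum(cov[i][j] * w[j] for j in range(n)) for i in range(n)]
    sigma = math.sqrt(sum(w[i] * cw[i] for i in range(n)))
    return [w[i] * cw[i] / sigma for i in range(n)]

# two assets with volatilities 20% and 10%, correlation 0.3
cov = [[0.04, 0.006], [0.006, 0.01]]
raw = [1 / 0.2, 1 / 0.1]          # inverse-volatility weights
s = sum(raw)
w = [x / s for x in raw]          # normalized -> [1/3, 2/3]
rc = risk_contributions(w, cov)   # the two contributions are equal
```

With more than two assets (or with factor-based contributions), equalizing the RC_i requires solving a fixed-point or optimization problem rather than a closed form.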
This paper proposes a necessary optimality condition derived by a limit operation in projective space for optimization problems of polynomial functions with constraints given as polynomial equations. The proposed condition is more general than the Karush-Kuhn-Tucker (KKT) conditions in the sense that no constraint qualification is required, which means the condition can be viewed as a necessary optimality condition for every minimizer. First, a sequential optimality condition for every minimizer is introduced on the basis of the quadratic penalty function method. To perform a limit operation in the sequential optimality condition, we next introduce the concept of projective space, which can be regarded as a union of Euclidean space and its points at infinity. Through the projective space, the limit operation can be reduced to computing a point of the tangent cone at the origin. Mathematical tools from algebraic geometry were used to compute the set of equations satisfied by all points in the tangent cone, and thus by all minimizers. Examples are provided to clarify the methodology and to demonstrate cases where some local minimizers do not satisfy the KKT conditions.
{"title":"LIMIT OPERATION IN PROJECTIVE SPACE FOR CONSTRUCTING NECESSARY OPTIMALITY CONDITION OF POLYNOMIAL OPTIMIZATION PROBLEM","authors":"Tomoyuki Iori, T. Ohtsuka","doi":"10.15807/jorsj.63.114","DOIUrl":"https://doi.org/10.15807/jorsj.63.114","url":null,"abstract":"This paper proposes a necessary optimality condition derived by a limit operation in projective space for optimization problems of polynomial functions with constraints given as polynomial equations. The proposed condition is more general than the Karush-Kuhn-Tucker (KKT) conditions in the sense that no constraint qualification is required, which means the condition can be viewed as a necessary optimality condition for every minimizer. First, a sequential optimality condition for every minimizer is introduced on the basis of the quadratic penalty function method. To perform a limit operation in the sequential optimality condition, we next introduce the concept of projective space, which can be regarded as a union of Euclidean space and its points at infinity. Through the projective space, the limit operation can be reduced to computing a point of the tangent cone at the origin. Mathematical tools from algebraic geometry were used to compute the set of equations satisfied by all points in the tangent cone, and thus by all minimizers. Examples are provided to clarify the methodology and to demonstrate cases where some local minimizers do not satisfy the KKT conditions.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48414025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
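The quadratic penalty construction mentioned in the abstract can be written out in its standard textbook form (a sketch of the usual setup, with $f$ the polynomial objective and $g_i$ the polynomial equality constraints; this is background, not the paper's projective-space condition itself):

```latex
\[
  P_k(x) = f(x) + \frac{k}{2}\sum_{i=1}^{m} g_i(x)^2, \qquad
  \nabla P_k(x_k) = \nabla f(x_k) + k \sum_{i=1}^{m} g_i(x_k)\,\nabla g_i(x_k) = 0 .
\]
```

A sequential optimality condition then states that every local minimizer $x^*$ is a limit of such (approximate) stationary points $x_k$ as $k \to \infty$; when KKT multipliers exist, they arise as limits $\lambda_i = \lim_{k \to \infty} k\, g_i(x_k)$, and the paper's projective-space limit handles precisely the cases where these products diverge and the KKT conditions fail.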
Diverse accelerated first-order methods have recently received considerable attention for solving large-scale convex optimization problems. This short paper shows that an existing accelerated proximal gradient method for solving quasi-static incremental elastoplastic problems with the von Mises yield criterion can be naturally extended to the Tresca yield criterion.
{"title":"A NOTE ON ACCELERATED PROXIMAL GRADIENT METHOD FOR ELASTOPLASTIC ANALYSIS WITH TRESCA YIELD CRITERION","authors":"Wataru Shimizu, Y. Kanno","doi":"10.15807/jorsj.63.78","DOIUrl":"https://doi.org/10.15807/jorsj.63.78","url":null,"abstract":"Diverse accelerated first-order methods have recently received considerable attention for solving large-scale convex optimization problems. This short paper shows that an existing accelerated proximal gradient method for solving quasi-static incremental elastoplastic problems with the von Mises yield criterion can be naturally extended to the Tresca yield criterion.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47117422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
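As background to the abstract above, the generic accelerated proximal gradient (FISTA-style) iteration that such methods build on can be sketched. This is a minimal, generic version on a toy l1-regularized problem — not the paper's elastoplastic formulation; all names and the instance are illustrative:

```python
def accelerated_proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """Generic accelerated proximal gradient for min f(x) + g(x):
    a gradient step on the smooth part f, a proximal step on the nonsmooth
    part g, and a momentum extrapolation between iterates."""
    x_prev = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        x = prox_g([yi - step * gi for yi, gi in zip(y, grad_f(y))], step)
        t_next = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = [xi + ((t - 1.0) / t_next) * (xi - xpi)
             for xi, xpi in zip(x, x_prev)]
        x_prev, t = x, t_next
    return x_prev

# toy problem: min 0.5 * ||x - b||^2 + lam * ||x||_1 (prox = soft-thresholding)
b = [3.0, -0.5, 0.2]
lam = 1.0
grad_f = lambda x: [xi - bi for xi, bi in zip(x, b)]

def prox_g(v, step):
    return [max(abs(vi) - lam * step, 0.0) * (1.0 if vi >= 0 else -1.0)
            for vi in v]

x = accelerated_proximal_gradient(grad_f, prox_g, [0.0, 0.0, 0.0], step=1.0)
```

For this toy problem the exact minimizer is the soft-thresholded vector [2.0, 0.0, 0.0], which the iteration recovers. In elastoplastic analysis the proximal step is the projection onto the yield set, which is what changes between the von Mises and Tresca criteria.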
The clustered traveling salesman problem (CTSP) is a generalization of the traveling salesman problem (TSP) in which the set of cities is divided into clusters and the salesman must consecutively visit the cities of each cluster. It is well known that TSP is NP-hard, and hence CTSP is NP-hard as well. Guttmann-Beck et al. (2000) designed approximation algorithms for several variants of CTSP by decomposing it into subproblems including the traveling salesman path problem (TSPP). In this paper, we improve approximation ratios by applying a recent improved approximation algorithm for TSPP by Zenklusen (2019).
{"title":"IMPROVING APPROXIMATION RATIOS FOR THE CLUSTERED TRAVELING SALESMAN PROBLEM","authors":"Masamune Kawasaki, Kenjiro Takazawa","doi":"10.15807/jorsj.63.60","DOIUrl":"https://doi.org/10.15807/jorsj.63.60","url":null,"abstract":"The clustered traveling salesman problem (CTSP) is a generalization of the traveling salesman problem (TSP) in which the set of cities is divided into clusters and the salesman must consecutively visit the cities of each cluster. It is well known that TSP is NP-hard, and hence CTSP is NP-hard as well. Guttmann-Beck et al. (2000) designed approximation algorithms for several variants of CTSP by decomposing it into subproblems including the traveling salesman path problem (TSPP). In this paper, we improve approximation ratios by applying a recent improved approximation algorithm for TSPP by Zenklusen (2019).","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"63 1","pages":"60-70"},"PeriodicalIF":0.0,"publicationDate":"2020-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45210846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
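The defining constraint of CTSP — each cluster's cities must be visited consecutively — is simple to verify for a candidate tour. A minimal checker (illustrative names; it checks the linear order of the tour and ignores the cyclic wrap-around):

```python
def is_clustered_tour(tour, cluster_of):
    """Check that a tour visits the cities of each cluster consecutively,
    i.e. once the tour leaves a cluster it never returns to it."""
    seen = set()
    prev = None
    for city in tour:
        c = cluster_of[city]
        if c != prev:
            if c in seen:
                return False   # the tour re-enters a cluster it already left
            seen.add(c)
            prev = c
    return True

cluster_of = {0: "A", 1: "A", 2: "B", 3: "B", 4: "C"}
ok = is_clustered_tour([0, 1, 2, 3, 4], cluster_of)   # A, B, C in blocks
bad = is_clustered_tour([0, 2, 1, 3, 4], cluster_of)  # returns to A after B
```

The approximation algorithms discussed above exploit this block structure: within each cluster the subproblem is a traveling salesman path problem between the cluster's entry and exit cities.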