EFFICIENT OVERLAP DETECTION AND CONSTRUCTION ALGORITHMS FOR THE BITMAP SHAPE PACKING PROBLEM
Yannan Hu, Sho Fukatsu, H. Hashimoto, S. Imahori, M. Yagiura
Journal of the Operations Research Society of Japan, Vol. 61, pp. 132-150, 2018. DOI: 10.15807/JORSJ.61.132

The two-dimensional strip packing problem arises in a wide variety of industrial applications. In this paper, we focus on the bitmap shape packing problem, in which a set of arbitrarily shaped objects represented in bitmap format must be packed into a larger rectangular container without overlap. The complex geometry of bitmap shapes and the large amount of data to be processed make it difficult to check for overlaps. We propose an efficient method for checking for overlaps and design efficient implementations of two construction algorithms based on the bottom-left strategy. In this strategy, starting from an empty layout, items are packed into the container one by one, and each item is placed at the lowest position where it does not overlap the current layout. We consider two algorithms that adopt this strategy: the bottom-left algorithm and the best-fit algorithm. Computational results for a series of instances generated from well-known benchmark instances show that the proposed algorithms obtain good solutions in remarkably short times and are especially effective for large-scale instances.
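The bottom-left strategy described in this abstract lends itself to a compact illustration. The following is a minimal sketch, not the paper's implementation: the container and each item are encoded as per-row integer bitmasks, so the overlap test at a candidate position is one bitwise AND per row. All function and variable names here are illustrative.

```python
# Sketch of bottom-left placement for bitmap-encoded items. Rows are Python
# integers used as bitmasks, with the least significant bit as the leftmost
# column, so shifting left by x moves an item x cells to the right.

def overlaps(container_rows, item_rows, x, y):
    """True if the item, shifted to position (x, y), collides with occupied cells."""
    for dy, row in enumerate(item_rows):
        if container_rows[y + dy] & (row << x):
            return True
    return False

def place_bottom_left(container_rows, width, item_rows, item_width):
    """Scan positions bottom-to-top, left-to-right; commit and return the first feasible (x, y)."""
    height = len(container_rows)
    for y in range(height - len(item_rows) + 1):
        for x in range(width - item_width + 1):
            if not overlaps(container_rows, item_rows, x, y):
                for dy, row in enumerate(item_rows):
                    container_rows[y + dy] |= row << x   # mark cells as occupied
                return (x, y)
    return None  # no feasible position in the container

# usage: a 6-wide, 4-tall empty container; an L-shaped 2x2 item
container = [0, 0, 0, 0]
item = [0b11, 0b01]
print(place_bottom_left(container, 6, item, 2))  # -> (0, 0)
```

A second placement of the same item would then land at the next bottom-left feasible position to the right of the first.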
PRICE COMPETITION AND SOCIAL WELFARE COMPARISONS BETWEEN LARGE-SCALE AND SMALL-SCALE RETAILERS
H. Sandoh, Risa Suzuki
Journal of the Operations Research Society of Japan, Vol. 61, pp. 40-52, 2018. DOI: 10.15807/JORSJ.61.40

In some localities, a large-scale chain retailer competes against a small-scale local independent retailer that specializes in, for instance, vegetables, fruits, and flowers produced locally for local consumption. The former usually attracts consumers by emphasizing the width and depth of its product variety, whereas the latter seeks to overcome its limited product assortment by offering lower prices than the chain store, which is possible partly because of lower labor costs, among other reasons. This study employs the Hotelling unit interval to examine price competition in a duopoly featuring one large-scale chain retailer and one local retailer. To express the difference in their product assortments, we assume that the large-scale retailer, denoted by A, sells two types of product, G1 and G2, whereas the local retailer, denoted by B, sells only G1. Moreover, we assume that all consumers purchase G1 at A or B after comparing prices and buy G2 at A on an as-needed basis. We examine both Nash and Stackelberg equilibria to show that the local retailer can survive competition with the large-scale chain retailer even if all consumers purchase both G1 and G2. We also reveal that a monopolistic market structure, not a duopoly, can maximize social welfare if consumers always purchase both G1 and G2.
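The Hotelling-style G1 price game can be illustrated numerically. The sketch below is a toy version under standard textbook assumptions (firms at the endpoints of the unit interval, unit transport cost t, zero marginal cost), not the paper's two-good model: it iterates best responses on a discretized price grid and converges near the classical Nash prices pA = pB = t.

```python
# Toy Hotelling price competition: firm A at location 0, firm B at 1,
# consumers uniform on [0, 1] with linear transport cost t per unit distance.

def g1_demand_A(pA, pB, t=1.0):
    """Share of consumers buying from A: location of the indifferent consumer."""
    x = (pB - pA + t) / (2 * t)
    return min(max(x, 0.0), 1.0)

def best_response(p_other, t=1.0, grid=2001, pmax=3.0):
    """Profit-maximizing price on a discrete grid, given the rival's price."""
    prices = [pmax * k / (grid - 1) for k in range(grid)]
    return max(prices, key=lambda p: p * g1_demand_A(p, p_other, t))

pA = pB = 0.5
for _ in range(50):            # alternate best responses until (near) convergence
    pA = best_response(pB)
    pB = best_response(pA)
print(round(pA, 2), round(pB, 2))   # settles near the Nash prices t = 1
```

The analytic best response here is p = (p_other + t) / 2, whose fixed point is the familiar symmetric equilibrium price t.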
AN APPROXIMATE BARRIER OPTION MODEL FOR VALUING EXECUTIVE STOCK OPTIONS
Toshikazu Kimura
Journal of the Operations Research Society of Japan, Vol. 61, pp. 110-131, 2018. DOI: 10.15807/JORSJ.61.110

A continuous-time barrier option model is developed for valuing executive stock options (ESOs), in which early exercise takes place whenever the underlying stock price reaches a certain upper barrier after vesting. We analyze the ESO value and the ESO exercise time to obtain solutions in simple forms that are consistent with the principal features of early exercise, delayed vesting, and random exit. For the perpetual case, these solutions are given in explicit form and shown to be exact in the Black-Scholes-Merton formulation. Using an endogenous approximation for the barrier level, we numerically compare our approximation of the ESO value with a benchmark generated by a binomial-tree model and with a previously established quadratic approximation. Numerical comparisons for some particular cases show that our approximations always underestimate the benchmark results, with absolute relative percentage errors below 1% in all cases, whereas the quadratic approximations overestimate the benchmarks, with relative percentage errors below about 2%.
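The benchmark mentioned in the abstract, a binomial-tree ESO value with barrier-triggered exercise after vesting, can be sketched directly. This is an illustrative CRR-style tree under assumed parameters, not the paper's model or its endogenous barrier: exercise is simply forced the first time the price is at or above a fixed barrier B after the vesting date.

```python
import math

def eso_value(S0=100.0, K=100.0, B=150.0, r=0.05, sigma=0.3, T=5.0,
              vest=1.0, steps=500):
    """Binomial (CRR) value of an ESO exercised at the first barrier hit after vesting."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoff of a call
    vals = [max(S0 * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):       # backward induction
        t = i * dt
        new = []
        for j in range(i + 1):
            S = S0 * u**j * d**(i - j)
            cont = disc * (p * vals[j + 1] + (1 - p) * vals[j])
            # barrier-style exercise: payoff locked in once S >= B after vesting
            new.append(S - K if (t >= vest and S >= B) else cont)
        vals = new
    return vals[0]

print(round(eso_value(), 2))
```

Forcing exercise at a finite barrier caps the upside, so the resulting value sits below the corresponding unconstrained American call value.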
CHARACTERIZING DELAUNAY GRAPHS VIA FIXED POINT THEOREM: A SIMPLE PROOF
Tomomi Matsui, Yuichiro Miyamoto
Journal of the Operations Research Society of Japan, Vol. 61, pp. 151-162, 2018. DOI: 10.15807/JORSJ.61.151

This paper discusses the problem of determining whether a given plane graph is a Delaunay graph, i.e., whether it is topologically equivalent to a Delaunay triangulation. There exist theorems that characterize Delaunay graphs and yield polynomial-time algorithms for the problem that require only the solution of certain linear inequality systems. A polynomial-time algorithm proposed by Hodgson, Rivin, and Smith solves a linear inequality system given by Rivin, which is based on sophisticated arguments from hyperbolic geometry. Independently, Hiroshima, Miyamoto, and Sugihara gave another linear inequality system and a polynomial-time algorithm; although their discussion is based on elementary arguments in Euclidean geometry, their proofs are unfortunately long and intricate. In this paper, we give a simple proof of the theorem shown by Hiroshima et al. by employing a fixed point theorem.
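For context, the geometric primitive underlying Delaunay conditions is the in-circle predicate: for a counterclockwise triangle (a, b, c), point d lies strictly inside the circumcircle exactly when a 3x3 determinant of lifted, d-translated coordinates is positive. A direct sketch (illustrative background, unrelated to the paper's proof technique):

```python
# In-circle test: translate all points by -d, lift each point (x, y) to
# (x, y, x^2 + y^2), and take the 3x3 determinant. Positive means d is
# strictly inside the circumcircle of the ccw triangle (a, b, c).

def in_circle(a, b, c, d):
    rows = []
    for (px, py) in (a, b, c):
        dx, dy = px - d[0], py - d[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    return (ax * (by * cw - bw * cy)
            - ay * (bx * cw - bw * cx)
            + aw * (bx * cy - by * cx))

# usage: the centre of the unit square is inside the circumcircle of
# three of its corners; a far-away point is outside
print(in_circle((0, 0), (1, 0), (1, 1), (0.5, 0.5)) > 0)  # True
print(in_circle((0, 0), (1, 0), (1, 1), (5, 5)) < 0)      # True
```

In floating point this naive determinant can misclassify near-degenerate inputs; production code uses robust predicates instead.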
A MEMORYLESS SYMMETRIC RANK-ONE METHOD WITH SUFFICIENT DESCENT PROPERTY FOR UNCONSTRAINED OPTIMIZATION
Shummin Nakayama, Yasushi Narushima, H. Yabe
Journal of the Operations Research Society of Japan, Vol. 61, pp. 53-70, 2018. DOI: 10.15807/JORSJ.61.53

Quasi-Newton methods are widely used for solving unconstrained optimization problems. However, it is difficult to apply them directly to large-scale problems, because they require storing matrices. To overcome this difficulty, memoryless quasi-Newton methods were proposed; Shanno (1978) derived the memoryless BFGS method. Recently, several researchers have studied memoryless quasi-Newton methods based on the symmetric rank-one formula. However, existing memoryless symmetric rank-one methods do not necessarily satisfy the sufficient descent condition. In this paper, we focus on the symmetric rank-one formula based on the spectral-scaling secant condition and derive a memoryless quasi-Newton method from it. Moreover, we show that the method always satisfies the sufficient descent condition and converges globally for general objective functions. Finally, preliminary numerical results are presented.
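To make the memoryless idea concrete, here is a sketch of a generic memoryless SR1 search direction, not the paper's spectral-scaling variant: the inverse-Hessian approximation is the identity updated once with the SR1 formula from the latest step s = x_k - x_{k-1} and gradient difference y = g_k - g_{k-1}, so only two vectors are ever stored.

```python
# Memoryless SR1 direction: d = -H g with H = I + (s - y)(s - y)^T / ((s - y)^T y).
# When the SR1 denominator is too small, the update is skipped and the method
# falls back to steepest descent; the paper's variant adds spectral scaling
# and guarantees sufficient descent, which this plain version does not.

def memoryless_sr1_direction(g, s, y, eps=1e-8):
    u = [si - yi for si, yi in zip(s, y)]
    denom = sum(ui * yi for ui, yi in zip(u, y))
    if abs(denom) < eps:                 # unstable SR1 denominator:
        return [-gi for gi in g]         # fall back to -g
    coef = sum(ui * gi for ui, gi in zip(u, g)) / denom
    return [-(gi + coef * ui) for gi, ui in zip(g, u)]

# usage: when s = y the rank-one term vanishes and d reduces to -g
print(memoryless_sr1_direction([1.0, -2.0], [0.5, 0.5], [0.5, 0.5]))  # [-1.0, 2.0]
```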
A characterization of weighted popular matchings under matroid constraints
Naoyuki Kamiyama
Journal of the Operations Research Society of Japan, Vol. 61, pp. 2-17, 2018. DOI: 10.15807/JORSJ.61.2

The popular matching problem introduced by Abraham, Irving, Kavitha, and Mehlhorn is a bipartite matching problem with one-sided preference lists. In this paper, we first propose a matroid generalization of the weighted variant of popular matchings introduced by Mestre. We then give a characterization of weighted popular matchings in bipartite graphs with matroid constraints and one-sided preference lists containing no ties; this characterization is based on the one proved by Mestre. Lastly, using our characterization, we prove that whether a given matching is a weighted popular matching under matroid constraints can be decided in polynomial time.
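The popularity notion at the heart of this line of work reduces to a vote count between two matchings: M defeats M' if more applicants strictly prefer their partner in M (in the weighted variant, each applicant's vote carries a weight). A small unweighted comparator, with an assumed encoding (pref[a] is a's ranked list of posts; an unmatched applicant ranks below every listed post):

```python
# Count votes between two matchings under one-sided preference lists.
# M and M_prime map applicants to posts; missing applicants are unmatched.

def votes(pref, M, M_prime):
    """Return (#applicants strictly preferring M, #strictly preferring M_prime)."""
    def rank(a, post):
        return pref[a].index(post) if post in pref[a] else len(pref[a])
    for_M = for_Mp = 0
    for a in pref:
        rM, rMp = rank(a, M.get(a)), rank(a, M_prime.get(a))
        if rM < rMp:
            for_M += 1
        elif rMp < rM:
            for_Mp += 1
    return for_M, for_Mp

pref = {"a1": ["p1", "p2"], "a2": ["p1"], "a3": ["p2", "p1"]}
M  = {"a1": "p2", "a2": "p1"}            # a3 unmatched
Mp = {"a1": "p1", "a3": "p2"}            # a2 unmatched
print(votes(pref, M, Mp))                # -> (1, 2): Mp defeats M
```

A matching is popular when no other matching defeats it in this vote; the hard part, which the characterizations address, is certifying that without enumerating all rivals.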
DISTRIBUTION OF THE RATIO OF DISTANCES TO THE FIRST AND SECOND NEAREST FACILITIES
M. Miyagawa
Journal of the Operations Research Society of Japan, Vol. 60, pp. 429-438, 2017. DOI: 10.15807/JORSJ.60.429

This paper deals with the ratio of the distances to the first and second nearest facilities. The ratio represents the reliability of a facility location when the nearest facility is closed and customers are served by the second nearest facility. The distribution of the ratio is derived for grid and random patterns of facilities, with distance measured by both the Euclidean and rectilinear metrics. The distribution shows how the ratio is distributed over a study region and supplies building blocks for facility location models that account for the closing of facilities. The distribution of the ratio of road network distances is also calculated for an actual facility location.
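The ratio in question is easy to estimate empirically. The following Monte Carlo sketch (illustrative, not the paper's analytical derivation) estimates the mean ratio of first- to second-nearest Euclidean distances for facilities placed uniformly at random in the unit square; for an ideal Poisson pattern the squared ratio is uniform on [0, 1], so the mean ratio is about 2/3.

```python
import random

def mean_distance_ratio(n_facilities=100, n_customers=2000, seed=0):
    """Monte Carlo estimate of E[d1/d2] for uniform random facilities."""
    rng = random.Random(seed)
    fac = [(rng.random(), rng.random()) for _ in range(n_facilities)]
    total = 0.0
    for _ in range(n_customers):
        cx, cy = rng.random(), rng.random()
        # squared distances to all facilities; take the two smallest
        d2s = sorted((cx - fx) ** 2 + (cy - fy) ** 2 for fx, fy in fac)
        total += (d2s[0] / d2s[1]) ** 0.5   # d1 / d2
    return total / n_customers

print(round(mean_distance_ratio(), 2))      # close to 2/3, up to edge effects
```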
SMALL DEGENERATE SIMPLICES CAN BE BAD FOR SIMPLEX METHODS
S. Mizuno, Noriyoshi Sukegawa, A. Deza
Journal of the Operations Research Society of Japan, Vol. 60, pp. 419-428, 2017. DOI: 10.15807/JORSJ.60.419

We show that the simplex method with Dantzig's pivoting rule may require an exponential number of iterations on two highly degenerate instances. The feasible region of the first instance is a full-dimensional simplex, while that of the second is a single point. In addition, the entries of the constraint matrix, the right-hand-side vector, and the cost vector are {0, 1, 2}-valued. These instances, with few vertices and small input data length, illustrate the impact of degeneracy on simplex methods.
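Dantzig's rule itself is simple to state: among the nonbasic variables, the entering variable is the one with the most negative reduced cost. A sketch of just that selection step, with an assumed representation (a dict mapping variable indices to reduced costs):

```python
# Dantzig pivoting rule: pick the nonbasic variable with the most negative
# reduced cost; return None when all reduced costs are (numerically)
# nonnegative, i.e. the current basis is optimal. Ties go to the variable
# encountered first.

def dantzig_entering(reduced_costs, tol=1e-9):
    best, best_cost = None, -tol
    for j, c in reduced_costs.items():
        if c < best_cost:
            best, best_cost = j, c
    return best

print(dantzig_entering({1: 0.5, 2: -2.0, 3: -2.0, 4: 0.0}))  # -> 2
print(dantzig_entering({1: 0.1, 2: 0.0}))                    # -> None
```

On the degenerate instances described above, this greedy choice keeps selecting pivots that change the basis without improving the objective, which is what drives the exponential iteration count.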
AN ADAPTIVE COST-BASED SOFTWARE REJUVENATION SCHEME WITH NONPARAMETRIC PREDICTIVE INFERENCE APPROACH
K. Rinsaka, T. Dohi
Journal of the Operations Research Society of Japan, Vol. 60, pp. 461-478, 2017. DOI: 10.15807/JORSJ.60.461

This paper proposes an approach to estimating an optimal software rejuvenation schedule that minimizes the expected total software cost per unit time. Based on a nonparametric predictive inference (NPI) approach, we derive upper and lower bounds on the predictive expected software cost via the predictive survival function, and characterize an adaptive cost-based software rejuvenation policy from system failure time data with a right-censored observation. Simulation experiments show that our NPI-based approach is quite useful for predicting the optimal software rejuvenation time.
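For intuition, software rejuvenation models of this kind typically minimize an age-replacement-style cost rate. The sketch below is a rough empirical version, not the paper's NPI formulation: with an empirical survival function S estimated from failure data, preventive (rejuvenation) cost c_p, and failure cost c_f, the long-run cost rate of rejuvenating at age tau is C(tau) = (c_p S(tau) + c_f (1 - S(tau))) / integral_0^tau S(u) du.

```python
# Empirical cost-rate for periodic rejuvenation at age tau, estimated from a
# sample of observed failure times (no censoring handled in this toy version).

def cost_rate(tau, failure_times, c_p=1.0, c_f=10.0):
    n = len(failure_times)
    S = lambda t: sum(1 for x in failure_times if x > t) / n  # empirical survival
    # mean uptime per cycle: midpoint Riemann sum of S over [0, tau]
    m = 1000
    mean_uptime = sum(S(tau * (k + 0.5) / m) for k in range(m)) * tau / m
    return (c_p * S(tau) + c_f * (1.0 - S(tau))) / mean_uptime

data = [2.1, 3.5, 4.0, 5.2, 6.8, 7.7, 9.3, 11.0]   # illustrative failure times
best = min((cost_rate(t, data), t) for t in [1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
print(best[1])   # grid point with the lowest estimated cost rate
```

With these numbers, rejuvenating just before the earliest observed failure wins: it pays c_p often but never pays c_f.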
AN EFFICIENT LOCAL SEARCH FOR THE CONSTRAINED SYMMETRIC LATIN SQUARE CONSTRUCTION PROBLEM
Kazuya Haraguchi
Journal of the Operations Research Society of Japan, Vol. 60, pp. 439-460, 2017. DOI: 10.15807/JORSJ.60.439

A Latin square is a complete assignment of [n] = {1, ..., n} to an n x n grid such that each value in [n] appears exactly once in each row and each column. A symmetric Latin square (SLS) is a Latin square that is symmetric in the matrix sense. In what we call the constrained SLS construction (CSLSC) problem, we are given a subset F of [n] x [n] x [n] and are asked to construct an SLS such that, whenever (i, j, k) ∈ F, the symbol k is not assigned to the cell (i, j). This paper makes two contributions to this problem. The first is an efficient local search algorithm for the maximization version of the problem, which asks us to fill as many cells with symbols as possible under the constraints given by F. In our local search, the neighborhood is defined by p-swap, i.e., dropping exactly p symbols and then assigning any number of symbols to empty cells. For p ∈ {1, 2}, our neighborhood search algorithm finds an improved solution or concludes that no such solution exists in O(n) time. The second contribution is a demonstration of the algorithm's practical value: for randomly generated instances, our iterated local search algorithm frequently constructs a larger partial SLS than state-of-the-art solvers such as IBM ILOG CPLEX, LocalSolver, and WCSP.
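A helper like the following (assumed names and encoding, not the paper's data structures) captures the feasibility conditions in the partial-SLS setting: symmetry, row uniqueness (column uniqueness then follows from symmetry), and a forbidden set F of (row, col, symbol) triples, with 0 denoting an empty cell.

```python
# Validity check for a partially filled symmetric Latin square over [n],
# stored as an n x n list of lists with 0 for empty cells.

def is_valid_partial_sls(grid, forbidden=frozenset()):
    n = len(grid)
    for i in range(n):
        for j in range(n):
            v = grid[i][j]
            if grid[j][i] != v:                  # symmetry violated
                return False
            if v and (i, j, v) in forbidden:     # forbidden triple used
                return False
    for line in grid:                            # each row: filled symbols distinct
        vals = [v for v in line if v]            # (columns are rows, by symmetry)
        if len(vals) != len(set(vals)):
            return False
    return True

# usage: a 3x3 partial SLS, then the same grid with one assignment forbidden
g = [[1, 2, 0],
     [2, 1, 0],
     [0, 0, 3]]
print(is_valid_partial_sls(g))               # True
print(is_valid_partial_sls(g, {(2, 2, 3)}))  # False
```

A p-swap move in the paper's neighborhood would drop p filled cells of such a grid (together with their mirror cells) and then greedily refill empty cells subject to exactly these conditions.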