In recent years, customers can often select a seating position when reserving a plane or bullet-train ticket. For theaters and stadiums in particular, it is important to decide how to assign reservations to seats. This paper proposes a dynamic model in which seat resources are located on a single line, taking into account the positions of the seats that have already been assigned. An analysis shows that 1) the optimal policy for an arriving request is to allocate it to one of the edges of a block of adjacent vacant seats, 2) if all resources are vacant at the beginning of the booking period, the model reduces to the single-leg model with multiple seat bookings and a single fare class of Lee and Hersh (1993), and 3) it is not necessarily optimal to allocate a request to the smaller block of adjacent vacant seats. Finally, this paper proposes an algorithm that computes the optimal policy using the above results and presents numerical examples.
{"title":"A DYNAMIC MODEL WITH RESOURCES PLACED ON SINGLE LINE IN REVENUE MANAGEMENT","authors":"Yu Ogasawara","doi":"10.15807/JORSJ.60.91","DOIUrl":"https://doi.org/10.15807/JORSJ.60.91","url":null,"abstract":"Recently, a seating position can be often selected when a plane or bullet train ticket is reserved. Specially, for theater and stadium, it is important to decide how to assign reservations to seats. This paper proposes a dynamic model where seats’ resources are located at a single line with considering seats position that have already been assigned. An analysis has been conducted and the results show that, 1) optimal policy for an arriving request is to allocate it to one side of the edges of the adjacent vacancies, 2) if all of the resources are vacant at beginning time for booking, then the model corresponds to a single-leg model with multiple seat bookings and single fare class in Lee and Hersh (1993), 3) it is not necessarily optimal that a request is allocated to the less adjacent seats’ vacancy. Finally, this paper proposes an algorithm to solve the optimal policy using above results and conducts numerical examples.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"91-100"},"PeriodicalIF":0.0,"publicationDate":"2017-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.91","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45765946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The purpose of this study is to consider the problem of finding a guaranteed way of winning a certain two-player combinatorial game of perfect knowledge from the standpoint of mutually dependent decision processes (MDDPs). Our MDDP model comprises two one-stage deterministic decision processes, each expressing a turn of one of the players. We analyze an MDDP problem in which the number of turns taken by a player is minimized while the player is still guaranteed to win regardless of the decisions made by the opponent. The model provides a formulation for finding the shortest guaranteed winning strategy. Although the computational complexity remains an issue, the concept introduced in this paper can also be applied to other two-player combinatorial games of perfect knowledge.
{"title":"SURE WAY TO WIN A GAME USING A MUTUALLY DEPENDENT DECISION PROCESS MODEL","authors":"Toshiharu Fujita","doi":"10.15807/JORSJ.60.110","DOIUrl":"https://doi.org/10.15807/JORSJ.60.110","url":null,"abstract":"The purpose of this study is to consider the problem of finding a guaranteed way of winning a certain two-player combinatorial game of perfect knowledge from the standpoint of mutually dependent decision processes (MDDPs). Our MDDP model comprises two one-stage deterministic decision processes. Each decision process expresses every turn of a player. We analyze a MDDP problem in which the length of turns taken by a player is minimized, allowing him to win regardless of the decisions made by his opponent. The model provides a formulation for finding the shortest guaranteed strategy. Although computational complexity remains, the concept introduced in this paper can also be applied to other two-player combinatorial games of perfect knowledge.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"110-121"},"PeriodicalIF":0.0,"publicationDate":"2017-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.110","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48118801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Gerber-Shiu function provides a way of measuring the risk of an insurance company. It is given by the expected value of a function that depends on the ruin time, the deficit at ruin, and the surplus prior to ruin. Its computation requires evaluating the overshoot and undershoot distributions of the surplus process at ruin. In this paper, we use recent developments in fluctuation theory and approximate the function in closed form by fitting the underlying process with phase-type Lévy processes. A series of numerical results is given.
{"title":"PHASE-TYPE APPROXIMATION OF THE GERBER-SHIU FUNCTION","authors":"K. Yamazaki","doi":"10.15807/JORSJ.60.337","DOIUrl":"https://doi.org/10.15807/JORSJ.60.337","url":null,"abstract":"The Gerber-Shiu function provides a way of measuring the risk of an insurance company. It is given by the expected value of a function that depends on the ruin time, the deficit at ruin, and the surplus prior to ruin. Its computation requires the evaluation of the overshoot/undershoot distributions of the surplus process at ruin. In this paper, we use the recent developments of the fluctuation theory and approximate it in a closed form by fitting the underlying process by phase-type Levy processes. A sequence of numerical results are given.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"337-352"},"PeriodicalIF":0.0,"publicationDate":"2017-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.337","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47639276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
L-convexity is a concept of discrete convexity for functions defined on the integer lattice points, and it plays a central role in the framework of discrete convex analysis. In this paper, we review recent developments in algorithms for L-convex function minimization. We first point out the close connection between discrete convex analysis and various research fields such as discrete optimization, auction theory, and computer vision by showing that algorithms proposed independently in these research fields can be regarded as minimization algorithms applied to specific L-convex functions. Therefore, we can provide a unified approach to analyzing the algorithms appearing in various research fields through the concept of L-convex functions. We then present recent results on theoretical bounds for the number of iterations required by some minimization algorithms, where precise bounds are given in terms of the distance between the initial solution and the minimizer found by the algorithms. These results show that the algorithms output the "nearest" minimizer to the initial solution, and that the trajectories of solutions generated by the algorithms are "shortest paths" from the initial solution to the found minimizer. Finally, we consider an application of these results to iterative auctions in auction theory. We point out that the essence of the iterative auctions proposed by Ausubel (2006) lies in L-convexity. We also present the new iterative auctions of Murota, Shioura, and Yang (2016), which are based on an understanding of existing iterative auctions from the viewpoint of discrete convex analysis.
{"title":"ALGORITHMS FOR L-CONVEX FUNCTION MINIMIZATION: CONNECTION BETWEEN DISCRETE CONVEX ANALYSIS AND OTHER RESEARCH FIELDS","authors":"A. Shioura","doi":"10.15807/JORSJ.60.216","DOIUrl":"https://doi.org/10.15807/JORSJ.60.216","url":null,"abstract":"L-convexity is a concept of discrete convexity for functions de(cid:12)ned on the integer lattice points, and plays a central role in the framework of discrete convex analysis. In this paper, we review recent development of algorithms for L-convex function minimization. We (cid:12)rst point out the close connection between discrete convex analysis and various research (cid:12)elds such as discrete optimization, auction theory, and computer vision by showing that algorithms proposed independently in these research (cid:12)elds can be regarded as minimization algorithms applied to speci(cid:12)c L-convex functions. Therefore, we can provide a uni(cid:12)ed approach to analyze the algorithms appearing in various research (cid:12)elds through the concept of L-convex function. We then present the recent results on theoretical bounds of the number of iterations required by some minimization algorithms, where precise bounds are given in terms of distance between the initial solution and the minimizer found by the algorithms. From these results we see that the algorithms output the nearest\" minimizer to the initial solution, and that the trajectories of solutions generated by the algorithms are shortest paths\" from the initial solution to the found minimizer. Finally, we consider an application of the results to iterative auctions in auction theory. We point out that the essence of the iterative auctions proposed by Ausubel (2006) lies in L-convexity. We also present new iterative auctions by Murota{Shioura{Yang (2016), which are based on the understanding of existing iterative auctions from the viewpoint of discrete convex analysis.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"216-243"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.216","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67215533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modularity, proposed by Newman and Girvan, is the most commonly used measure when the nodes of a graph are grouped into communities consisting of tightly connected nodes. We formulate the modularity maximization problem as a set partitioning problem and propose an algorithm based on its linear programming relaxation. We solve the dual of the linear programming relaxation with a cutting plane method. To mitigate the slow convergence from which cutting plane methods usually suffer, we propose a method for finding and simultaneously adding multiple cutting planes.
{"title":"A CUTTING PLANE ALGORITHM FOR MODULARITY MAXIMIZATION PROBLEM","authors":"Yoichi Izunaga, Y. Yamamoto","doi":"10.15807/JORSJ.60.24","DOIUrl":"https://doi.org/10.15807/JORSJ.60.24","url":null,"abstract":"Modularity proposed by Newman and Girvan is the most commonly used measure when the nodes of a graph are grouped into communities consisting of tightly connected nodes. We formulate the modularity maximization problem as a set partitioning problem, and propose an algorithm for the problem based on the linear programming relaxation. We solve the dual of the linear programming relaxation by using a cutting plane method. To mediate the slow convergence that cutting plane methods usually suffer, we propose a method for finding and simultaneously adding multiple cutting planes.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"24-42"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.24","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67215181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carnes and Shmoys [2] presented a 2-approximation algorithm for the minimum knapsack problem. We extend their algorithm to the minimum knapsack problem with a forcing graph (MKPFG), which has a forcing constraint for each edge in the graph. The forcing constraint means that at least one item (vertex) of the edge must be packed in the knapsack. The problem is strongly NP-hard, since it includes the vertex cover problem as a special case. Generalizing the proposed algorithm, we also present an approximation algorithm for the covering integer program with 0-1 variables.
{"title":"A 2-APPROXIMATION ALGORITHM FOR THE MINIMUM KNAPSACK PROBLEM WITH A FORCING GRAPH","authors":"Yotaro Takazawa, S. Mizuno","doi":"10.15807/JORSJ.60.15","DOIUrl":"https://doi.org/10.15807/JORSJ.60.15","url":null,"abstract":"Carnes and Shmoys [2] presented a 2-approximation algorithm for the minimum knapsack problem. We extend their algorithm to the minimum knapsack problem with a forcing graph (MKPFG), which has a forcing constraint for each edge in the graph. The forcing constraint means that at least one item (vertex) of the edge must be packed in the knapsack. The problem is strongly NP-hard, since it includes the vertex cover problem as a special case. Generalizing the proposed algorithm, we also present an approximation algorithm for the covering integer program with 0-1 variables.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"60 1","pages":"15-23"},"PeriodicalIF":0.0,"publicationDate":"2017-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.60.15","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67214731","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A trend renewal process is characterized by a counting process and a renewal process that are mutually transformed into each other by a trend function, and it plays a significant role in representing a subclass of general repair models. In this paper we develop another nonparametric estimation method for trend renewal processes in which the form of the failure rate function of the renewal process is unknown. It can be regarded as a dual approach to the nonparametric monotone maximum likelihood estimator of Heggland and Lindqvist (2007), and it complements their result under the assumption that the form of the trend (intensity) function is unknown. We validate our nonparametric estimator through simulation experiments and apply it to a field-data analysis of a repairable system.
{"title":"ANOTHER LOOK AT NONPARAMETRIC ESTIMATION FOR TREND RENEWAL PROCESSES","authors":"Yasuhiro Saito, T. Dohi","doi":"10.15807/JORSJ.59.312","DOIUrl":"https://doi.org/10.15807/JORSJ.59.312","url":null,"abstract":"A trend renewal process is characterized by a counting process and a renewal process which are mutually transformed with each other by a trend function, and plays a significant role to represent a sub-class of general repair models. In this paper we develop another nonparametric estimation method for trend renewal processes, where the form of failure rate function in the renewal process is unknown. It is regarded as a dual approach for the nonparametric monotone maximum likelihood estimator by Heggland and Lindqvist (2007) and complements their result under the assumption that the form of trend (intensity) function is unknown. We validate our nonparametric estimator through simulation experiments and apply to a field data analysis of a repairable system.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"59 1","pages":"312-333"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.59.312","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67215150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Linear consecutive-k-out-of-n:F systems are considered. It is assumed that the components are independent and that the component failure times follow an exponential distribution with a common failure rate. It is also assumed that there are only two component states (working and failed) and that the state of each component can be observed at any time. If there is at least one minimal cut set consisting of only one working component, the system is preventively maintained after a certain time interval. If the system fails before reaching the preventive maintenance (PM) time, the failed components are replaced with new ones. The optimal PM interval that minimizes the expected cost rate is obtained. The performance of the proposed policy is evaluated by comparing its expected cost rate with those of a corrective maintenance (CM) policy and an age-based PM policy.
{"title":"PREVENTIVE MAINTENANCE POLICY FOR LINEAR CONSECUTIVE-K-OUT-OF-N: F SYSTEM","authors":"A. Endharta, W. Yun, Hisashi Yamamoto","doi":"10.15807/JORSJ.59.334","DOIUrl":"https://doi.org/10.15807/JORSJ.59.334","url":null,"abstract":"Linear consecutive- k -out-of- n : F systems are considered. It is assumed that the components are independent and the component failure times follow an exponential distribution with identical failure rate. It is also assumed that there are only two component states (working and failed) and we can know the component state at any time. If there is at least one minimal cut set consisting of one working component, the system will be preventively maintained after a certain time interval. If the system fails before reaching the preventive maintenance (PM) time, the failed components are replaced by the new ones. The optimal PM interval time which minimizes the expected cost rate is obtained. The performance of the proposed policy is evaluated by comparing the expected cost rate of the proposed policy with those of corrective maintenance (CM) and age PM policy.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"59 1","pages":"334-346"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.59.334","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67214844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper we consider an operational software system with multi-stage degradation levels due to software aging, and derive the optimal dynamic software rejuvenation policy that maximizes the steady-state system availability via a semi-Markov decision process. We also develop a reinforcement learning algorithm based on Q-learning as an online adaptive nonparametric estimation scheme that requires no knowledge of the transition rates to the degradation levels. In numerical examples, we show how to derive the optimal software rejuvenation policy from the decision table, and we investigate the asymptotic behavior of the reinforcement-learning estimates of the optimal software rejuvenation policy.
{"title":"DYNAMIC SOFTWARE AVAILABILITY MODEL WITH REJUVENATION","authors":"T. Dohi, H. Okamura","doi":"10.15807/JORSJ.59.270","DOIUrl":"https://doi.org/10.15807/JORSJ.59.270","url":null,"abstract":"In this paper we consider an operational software system with multi-stage degradation levels due to software aging, and derive the optimal dynamic software rejuvenation policy maximizing the steady-state system availability, via the semi-Markov decision process. Also, we develop a reinforcement learning algorithm based on Q-learning as an on-line adaptive nonparametric estimation scheme without the knowledge of transition rate to each degradation level. In numerical examples, we present how to derive the optimal software rejuvenation policy with the decision table, and investigate the asymptotic behavior of estimates of the optimal software rejuvenation policy with the reinforcement learning.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"59 1","pages":"270-290"},"PeriodicalIF":0.0,"publicationDate":"2016-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.59.270","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67214884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In cloud computing, a large-scale parallel-distributed processing service is provided in which a huge task is split into a number of subtasks that are processed independently on a cluster of machines referred to as workers. Workers that take longer to process their assigned subtasks delay the completion of the whole task (the straggler problem). An efficient way to address this issue is to have other workers execute the troubled subtasks as backups (task replication). In this paper, we evaluate the efficiency of task replication from a theoretical point of view. The mean and standard deviation of the task-processing time are derived approximately using extreme value theory, while the mean total processing time is evaluated exactly, for cases in which the worker-processing time follows a hyper-exponential, Weibull, or Pareto distribution. The numerical results reveal that the efficiency of task replication depends significantly on the tail of the worker-processing-time distribution. In addition, the optimal number of replications, which achieves the shortest task-processing time, depends mainly on the coefficient of variation of the worker-processing time. Furthermore, three replications are effective in guaranteeing a low variance of the task-processing time, regardless of the tail.
{"title":"PERFORMANCE ANALYSIS OF TASK REPLICATION IN LARGE-SCALE PARALLEL-DISTRIBUTED PROCESSING : AN EXTREME VALUE THEORY APPROACH","authors":"T. Hirai, H. Masuyama, S. Kasahara, Yutaka Takahashi","doi":"10.15807/JORSJ.59.174","DOIUrl":"https://doi.org/10.15807/JORSJ.59.174","url":null,"abstract":"In cloud computing, a large-scale parallel-distributed processing service is provided in which a huge task is split into a number of subtasks, which are processed independently on a cluster of machines referred to as workers. Those workers that take longer to process their assigned subtasks result in the processing delay of the task (the issue of stragglers). An efficient way to address this issue is for other workers to execute the troubled subtasks for backup purposes (task replication). In this paper, we evaluate the efficiency of task replication from a theoretical point of view. The mean value and standard deviation of the task-processing time are derived approximately using extreme value theory, while the mean total processing time is evaluated exactly, for cases in which the worker-processing time follows a hyper-exponential, Weibull, or Pareto distribution. The numerical results reveal that the efficiency of task replication depends significantly on the tail of the worker-processing time distribution. In addition, the optimal number of replications which achieves the shortest task-processing time mainly depends on the coefficient of variation of the worker-processing time. Furthermore, three replications are effective to guarantee a low variance of the task-processing time, regardless of the tail.","PeriodicalId":51107,"journal":{"name":"Journal of the Operations Research Society of Japan","volume":"59 1","pages":"174-194"},"PeriodicalIF":0.0,"publicationDate":"2016-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.15807/JORSJ.59.174","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"67214444","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}