Title: On the Relationship between p-Dominance and Stochastic Stability in Network Games
Authors: Daniel C. Opolot
DOI: https://doi.org/10.2139/ssrn.3234959
Journal: Operations Research eJournal
Published: 2018-08-19

Abstract: Evolutionary models with persistent randomness employ stochastic stability as a solution concept to identify more reasonable outcomes in games with multiple equilibria. The complexity of the computational methods used to identify stochastically stable outcomes, and their lack of robustness to the interaction structure, limit the applicability of evolutionary selection theories. This paper identifies p-dominance and the contagion threshold as the properties of strategies and of the interaction structure, respectively, that robustly determine stochastically stable outcomes. Specifically, we show that p-dominant strategies, which are best responses to any distribution that assigns them a weight of at least p, are stochastically stable in networks with a contagion threshold of at least p.
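To make the p-dominance definition above concrete, the following sketch computes the smallest p for which a strategy is p-dominant in a symmetric 2x2 coordination game; the payoff labeling and function name are ours, not the paper's:

```python
def min_p_dominance(a, b, c, d):
    """Smallest p for which strategy A is p-dominant in a symmetric 2x2 game
    with row payoffs u(A,A)=a, u(A,B)=b, u(B,A)=c, u(B,B)=d, assuming both
    A and B are strict equilibria (a > c and d > b)."""
    # A is a best response to any mixture putting weight q >= p on A iff
    # q*a + (1-q)*b >= q*c + (1-q)*d for all q >= p, which by linearity in q
    # reduces to the single threshold below.
    return (d - b) / ((a - c) + (d - b))

# Example: a stag-hunt-style game in which A needs at least 2/3 of the weight,
# so A is p-dominant for every p >= 2/3.
p_star = min_p_dominance(4, 0, 3, 2)  # -> 2/3
```

Note that the thresholds of the two strategies sum to one in this 2x2 case, so at most one strategy can be p-dominant for p < 1/2 (the risk-dominant one).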
Title: Multi-Product Newsvendor Problem with Customer-Driven Demand Substitution: A Stochastic Integer Program Perspective
Authors: Jie Zhang, Weijun Xie, S. Sarin
DOI: https://doi.org/10.2139/ssrn.3188361
Journal: Operations Research eJournal
Published: 2018-05-31

Abstract: This paper studies a multi-product newsvendor problem with customer-driven demand substitution, where each product, once out of stock, can be proportionally substituted by the others. This problem has been widely studied in the literature; however, due to its nonconvexity and intractability, only limited analytical properties have been reported and no efficient solution approaches have been proposed. This paper first completely characterizes the optimal order policy when the demand is known and reformulates this nonconvex problem as a binary quadratic program. When the demand is random, we formulate the problem as a two-stage stochastic integer program, derive several necessary optimality conditions, prove the submodularity of the profit function, and develop polynomial-time approximation algorithms with performance guarantees. We further propose a tight upper bound via the nonanticipativity dual, which is proven to be very close to the optimal value and can yield a good-quality feasible solution under a mild condition. Our numerical investigation demonstrates the effectiveness of the proposed algorithms. Moreover, several useful findings and managerial insights are revealed through a series of sensitivity analyses.
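As a minimal illustration of the customer-driven substitution mechanism described above, the sketch below evaluates realized sales for known demand under a single round of proportional spillover; the matrix form, function name, and single-round rule are assumptions for illustration, and the paper's model may allocate substitute demand differently:

```python
import numpy as np

def substituted_sales(order, demand, sub_rate):
    """Sales under one round of proportional stockout substitution.
    sub_rate[j][i] is the (hypothetical) fraction of product j's unmet
    demand that spills over to product i."""
    order = np.asarray(order, dtype=float)
    demand = np.asarray(demand, dtype=float)
    primary = np.minimum(order, demand)                    # first-choice sales
    unmet = demand - primary                               # lost first-choice demand
    overflow = unmet @ np.asarray(sub_rate, dtype=float)   # demand redirected to each product
    extra = np.minimum(order - primary, overflow)          # capped by leftover stock
    return primary + extra

# Two products: product 2 stocks out, and half of its unmet demand
# spills over to product 1, which has stock to spare.
sales = substituted_sales(order=[10, 5], demand=[6, 9],
                          sub_rate=[[0.0, 0.5], [0.5, 0.0]])  # -> [8., 5.]
```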
Title: Dynamic Selling Mechanisms for Product Differentiation and Learning
Authors: N. B. Keskin, J. Birge
DOI: https://doi.org/10.2139/ssrn.2530261
Journal: Operations Research eJournal
Published: 2018-05-20

Abstract: We consider a firm that designs a menu of vertically differentiated products for a population of customers with heterogeneous quality sensitivities. The firm faces uncertainty about production costs, which we formulate as a belief distribution over a set of cost models. Over a time horizon of T periods, the firm can dynamically adjust its menu and make noisy observations of the underlying cost model through customers' purchasing decisions. We characterize how optimal product differentiation depends on the "informativeness" of quality choices, formally measured by a contrast-to-noise ratio defined on the firm's feasible quality set. We prove that, if there exist informative quality choices, then the optimal product differentiation policy improves product quality to accelerate information accumulation and exercises the most extreme experimentation on less quality-sensitive customers. We design a minimum quality standard (MQS) policy that mimics these features of the optimal policy and show that the MQS policy is near-optimal. We also prove that, if there exists a certain continuum of informative quality choices, then even a myopic policy that makes no attempt to learn exhibits near-optimal profit performance. This stands in stark contrast to the poor performance of myopic policies in pricing-and-learning problems in the absence of product differentiation.
Title: Creative Class Competition and Innovation in the Absence of Patent Protection
Authors: A. Batabyal
DOI: https://doi.org/10.2139/ssrn.3269958
Journal: Operations Research eJournal
Published: 2018-04-17

Abstract: Recently, Batabyal and Yoo (2018) analyzed Schumpeterian competition in a region that is creative à la Richard Florida and where the creative class is made up of existing and candidate entrepreneurs. These researchers assume that an existing entrepreneur has a fully enforced patent on the inputs or machines that he has produced. We dispense with this assumption and study a scenario in which there is no patent protection for the representative existing entrepreneur (REE). This REE can undertake two possible types of innovation at the same cost. The first (second) type of innovation is general (specific) and hence can (cannot) be copied by the so-called candidate entrepreneurs. In this setting, we perform two tasks. First, we show that although the REE will never undertake the general innovation, he may undertake the specific innovation. Second, we point out that even though the general innovation is not undertaken, the value to the creative region from the general innovation exceeds that from the specific innovation.
Title: Dynamic Procurement of New Products with Covariate Information: The Residual Tree Method
Authors: Gah-Yi Ban, Jérémie Gallien, A. Mersereau
DOI: https://doi.org/10.2139/ssrn.2926028
Journal: Operations Research eJournal
Published: 2018-03-28

Abstract: Problem definition: We study the practice-motivated problem of dynamically procuring a new, short-life-cycle product under demand uncertainty. The firm does not know the demand for the new product ...
Title: Dynamic Inventory Control with Stockout Substitution and Demand Learning
Authors: Boxiao Chen, X. Chao
DOI: https://doi.org/10.2139/ssrn.3157345
Journal: Operations Research eJournal
Published: 2018-03-23

Abstract: We consider an inventory control problem with multiple products and stockout substitution. The firm knows neither the primary demand distribution for each product nor the customers' substitution probabilities between products a priori, and it needs to learn this information from sales data on the fly. One challenge in this problem is that the firm cannot distinguish between primary demand and substitution (overflow) demand in the sales data of any product, and lost sales are not observable. To circumvent these difficulties, we construct learning stages, with each stage consisting of a cyclic exploration scheme and a benchmark exploration interval. The benchmark interval allows us to isolate the primary demand information from the sales data, and this information is then compared against the sales data from the cyclic exploration intervals to estimate substitution probabilities. Because raising the inventory level helps obtain primary demand information but hinders substitution demand information, inventory decisions have to be carefully balanced to learn both together. We show that our learning algorithm admits a worst-case regret rate that (almost) matches the theoretical lower bound, and numerical experiments demonstrate that the algorithm performs very well. This paper was accepted by J. George Shanthikumar, big data analytics.
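The benchmark-versus-cyclic idea in the abstract can be illustrated with a toy simulation: sales in a well-stocked benchmark interval reveal primary demand alone, and the contrast with sales recorded while the other product is stocked out reveals the substitution probability. Everything below (the two-product setup, Poisson demand, and the estimator's exact form) is an assumption for illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-product setup; all numbers are illustrative.
mu1, mu2, alpha = 5.0, 4.0, 0.3   # primary demand rates; true substitution prob.
n = 20000                          # periods observed per interval type

# Benchmark interval: product 1 is amply stocked (and so is product 2),
# so product 1's sales reveal its primary demand alone.
primary_mean = rng.poisson(mu1, n).mean()

# Cyclic interval: product 2 is out of stock, so each unit of its demand
# independently spills over to product 1 with probability alpha.
d2 = rng.poisson(mu2, n)
sales1 = rng.poisson(mu1, n) + rng.binomial(d2, alpha)

# Contrasting the two intervals isolates the overflow component. For
# simplicity we divide by product 2's known mean demand; the real algorithm
# must estimate that quantity from benchmark data as well.
alpha_hat = (sales1.mean() - primary_mean) / d2.mean()  # close to 0.3
```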
Title: Economizing the Uneconomic: Markets for Reliable, Sustainable, and Price Efficient Electricity
Authors: M. Rasouli, D. Teneketzis
DOI: https://doi.org/10.2139/ssrn.3140080
Journal: Operations Research eJournal
Published: 2018-03-13

Abstract: Current electricity markets do not efficiently achieve policy targets, i.e., sustainability, reliability, and price efficiency. Thus, there are debates on how to achieve these targets by using either market mechanisms, e.g., carbon and capacity markets, or non-market mechanisms such as offer caps, price caps, and market monitoring. At the same time, major industry changes, including demand response management technologies and large-scale batteries, bring more elasticity to demand; such changes will affect the methodology needed to achieve the above-mentioned targets. This work provides market solutions that capture all three policy targets simultaneously and take these industry changes into account. The proposed solutions are based on: (i) a model of electricity markets that captures all of the above-mentioned electricity policy targets; and (ii) mechanism design and the development of a framework for the design of efficient auctions with constraints (individual, joint homogeneous, and joint non-homogeneous). The results show that, within the context of the proposed model, all policy targets can be achieved efficiently by separate capacity and carbon markets in addition to efficient spot markets. The results also highlight that all three policy targets can be achieved without any offer cap, price cap, or market monitoring. Thus, within the context of the proposed model, they provide clear answers to the above-mentioned policy debates.
Title: Constant Job-Allowance Policies for Appointment Scheduling: Performance Bounds and Numerical Analysis
Authors: Shenghai Zhou, Yichuan Ding, W. T. Huh, Guohua Wan
DOI: https://doi.org/10.2139/ssrn.3133508
Journal: Operations Research eJournal
Published: 2018-03-02

Abstract: We consider the appointment scheduling problem, which determines the job allowance for each appointment over the planning horizon. In particular, we study a simple but effective scheduling policy, the so-called plateau policy, which allocates a constant job allowance to every appointment. Prior studies on appointment scheduling suggest a "dome"-shaped structure for the optimal job allowance over the planning horizon: the job allowance does not vary significantly in the middle of the schedule sequence but does vary at the beginning and at the end of the optimal schedule. Using a dynamic programming formulation, we derive an explicit performance gap between the plateau policy and the optimal schedule and examine how this gap behaves as the number of appointments increases. We show that the plateau policy is asymptotically optimal as the number of appointments grows, and we extend this result to a more general setting with multiple service types. Numerical experiments show that the plateau policy is near-optimal even for a small number of appointments, complementing our theoretical results. Our results provide justification and strong support for the plateau policy, which is commonly used in practice. Moreover, with minor modifications, the plateau policy can be adapted to more general scenarios with patient no-shows or heterogeneous appointment types.
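The plateau policy described above is easy to simulate: under a constant job allowance, each appointment's wait follows a Lindley-type recursion (leftover work past the allowance carries over to the next slot). The sketch below is an illustrative Monte Carlo estimate, not the paper's dynamic-programming analysis; the exponential service-time assumption and all parameter values are ours:

```python
import numpy as np

def expected_waits(allowance, n_jobs=10, n_sims=20000, seed=1):
    """Monte Carlo estimate of each appointment's expected wait under a
    plateau (constant job-allowance) schedule, with service times taken
    to be exponential with mean 1 purely for illustration."""
    rng = np.random.default_rng(seed)
    waits = np.zeros(n_jobs)
    w = np.zeros(n_sims)                  # wait of the current appointment, per path
    for i in range(1, n_jobs):
        s = rng.exponential(1.0, n_sims)  # previous appointment's service time
        # Lindley recursion: work exceeding the constant allowance carries over.
        w = np.maximum(0.0, w + s - allowance)
        waits[i] = w.mean()
    return waits

# With allowance above the mean service time, expected waits rise from zero
# and gradually level off along the schedule.
waits = expected_waits(allowance=1.2)
```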
Title: Low-Acuity Patients Delay High-Acuity Patients in an Emergency Department
Authors: M. Bayati, Sara Kwasnick, Danqi Luo, E. Plambeck
DOI: https://doi.org/10.2139/ssrn.3095039
Journal: Operations Research eJournal
Published: 2017-12-31

Abstract: This paper provides evidence that in an emergency department (ED), the arrival of an additional low-acuity patient substantially increases the wait time to the start of treatment for high-acuity patients. This contradicts a long-standing conclusion in the medical literature that the effect is "negligible". The prior methodology underestimates the effect by neglecting how delays propagate in queueing systems. In contrast, this paper develops and validates a new estimation method based on queueing theory, machine learning, and causal inference. Wait time information displayed to low-acuity patients provides a quasi-randomized instrumental variable and is used to correct for omitted variable bias. Through a combination of empirical and queueing-theoretic analyses, this paper identifies the two primary mechanisms by which a low-acuity patient increases the wait time for high-acuity patients: pre-triage delay and transition delay. The paper thus identifies ways to reduce high-acuity patients' wait time, including reducing the standard deviation or mean of the transition delay in preemption; preventing transition delays by providing vertical or "fast track" treatment to more low-acuity patients; and designing wait time information systems to divert (especially when the ED is highly congested) low-acuity patients who do not need ED treatment.
Title: Leveraging Comparables for New Product Sales Forecasting
Authors: Lennart Baardman, Igor Levin, G. Perakis, Divya Singhvi
DOI: https://doi.org/10.2139/ssrn.3086237
Journal: Operations Research eJournal
Published: 2017-12-11

Abstract: Many firms regularly introduce new products. Before the launch of any new product, firms need to make various operational decisions, which are guided by the sales forecast. The new product sales forecasting problem is challenging compared to forecasting sales of existing products: for existing products, historical sales data give an indication of future sales, but such data are not available for a new product. We propose a novel sales forecasting model that is estimated with data on comparable products introduced in the past. We formulate the problem of clustering products and fitting forecasting models to these clusters simultaneously. Inherently, the model has a large number of parameters, which can lead to an overly complex model, so we add regularization to estimate sparse models. This problem is computationally hard, and as a result, we develop a scalable algorithm that produces a forecasting model with good analytical guarantees on the prediction error. In close collaboration with our industry partner Johnson & Johnson Consumer Companies Inc., a major fast-moving consumer goods manufacturer, we test our approach on real datasets, after which we check the robustness of our results with data from a large fast-fashion retailer. We show that, compared to several widely used forecasting methods, our approach improves MAPE and WMAPE by 20-60% across various product segments. Additionally, for the consumer goods manufacturer, we develop a fast and easy-to-use Excel tool that aids managers with forecasting and decision-making before a new product launch.
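Since the abstract reports improvements in MAPE and WMAPE, here are the standard definitions of those two accuracy metrics as a minimal reference implementation (the function names are ours):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error: average of per-item percentage errors
    (undefined when any actual value is zero)."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs(actual - forecast) / np.abs(actual))

def wmape(actual, forecast):
    """Weighted MAPE: total absolute error over total actual volume, which
    avoids MAPE's blow-up on small actuals by weighting items by volume."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.sum(np.abs(actual - forecast)) / np.sum(np.abs(actual))

# A high-volume product forecast 10% off and a low-volume one 20% off:
# MAPE averages the percentages, while WMAPE leans toward the larger product.
m = mape([100, 50], [90, 60])    # -> 0.15
wm = wmape([100, 50], [90, 60])  # -> 0.1333...
```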