Product bundling is a common marketing strategy for cross-selling in multiproduct firms. Motivated by settings in online product recommendation, we propose a new approach, dubbed bundle recommendation and pricing (BRP), to enhance the performance of bundle recommendation systems. BRP keeps all the separately priced products in the recommended set and offers a subset of the products as an additional bundle at a discounted price. This approach extends pure bundling (PB), in which all the products are sold to customers as a single bundle at a discounted price. Although PB can be more profitable than component pricing (CP), in which products are priced and sold separately, it can be inferior to CP in the presence of high marginal costs. We show that such a simple "CP + one bundle" scheme can be more profitable than both PB and CP, and is near optimal in many environments.
BRP improves on CP by extracting the deadweight loss, yet retains the profitability of CP when some products have relatively high marginal costs. However, finding the optimal BRP solution is often intractable. We develop a new approximation to this problem and use a Bayesian optimization algorithm to optimize the bundle selection and pricing decisions. Extensive numerical results show that our algorithm outperforms other common heuristics. More importantly, our results show that by simply adding one more bundle option to the common CP mechanism, BRP tends to significantly increase both the monopolist's profit and customers' utility as compared with CP and PB.
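The "CP + one bundle" idea can be made concrete with a toy simulation. The sketch below is a minimal illustration, not the paper's model: the two-product instance, the uniform valuation distribution, the marginal costs, and the grid search (which stands in for the paper's Bayesian optimization over bundle selection and prices) are all assumptions for illustration. Each simulated customer either buys products separately at the component prices or takes the discounted bundle, whichever yields more surplus.

```python
import random

random.seed(0)

# Hypothetical two-product toy instance (all numbers illustrative):
# product 2 carries a high marginal cost; valuations are i.i.d. uniform.
COSTS = [0.2, 0.6]
CUSTOMERS = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(200)]

def profit_cp(prices):
    """Component pricing: each product is bought iff valuation >= price."""
    total = 0.0
    for v in CUSTOMERS:
        for vi, pi, ci in zip(v, prices, COSTS):
            if vi >= pi:
                total += pi - ci
    return total / len(CUSTOMERS)

def profit_brp(prices, bundle_price):
    """CP plus one discounted bundle of both products: each customer picks
    whichever option (separate purchases or the bundle) maximizes surplus."""
    total = 0.0
    for v in CUSTOMERS:
        sep_surplus = sum(max(vi - pi, 0.0) for vi, pi in zip(v, prices))
        sep_profit = sum(pi - ci for vi, pi, ci in zip(v, prices, COSTS)
                         if vi >= pi)
        bun_surplus = sum(v) - bundle_price
        if bun_surplus >= sep_surplus and bun_surplus >= 0:
            total += bundle_price - sum(COSTS)
        else:
            total += sep_profit
    return total / len(CUSTOMERS)

# Crude grid search standing in for the paper's Bayesian optimization.
grid = [i / 10 for i in range(1, 20)]
best_cp = max(profit_cp((p1, p2)) for p1 in grid for p2 in grid)
best_brp = max(profit_brp((p1, p2), pb)
               for p1 in grid for p2 in grid for pb in grid)
```

Note that setting the bundle price prohibitively high makes the bundle irrelevant and recovers CP exactly, so on this toy instance the BRP optimum weakly dominates the CP optimum by construction.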
Hailong Sun, Xiaobo Li, and C. Teo. "Product Bundle Recommendation and Pricing: How to Make It Work?" SSRN, June 27, 2021. DOI: 10.2139/ssrn.3874843. DecisionSciRN: Simulation Based Optimization (Topic).
In a variety of applications, decisions need to be made dynamically after receiving imperfect observations about the state of an underlying system. Partially Observable Markov Decision Processes (POMDPs) are widely used in such applications. To use a POMDP, however, a decision-maker must have access to reliable estimations of core state and observation transition probabilities under each possible state and action pair. This is often challenging, mainly due to a lack of ample data, especially when some actions are not taken frequently enough in practice. This significantly limits the application of POMDPs in real-world settings. In healthcare, for example, medical tests are typically subject to false-positive and false-negative errors, and hence the decision-maker has imperfect information about the health state of a patient. Furthermore, since some treatment options have not been recommended or explored in the past, data cannot be used to reliably estimate all the required transition probabilities regarding the health state of the patient. We introduce an extension of POMDPs, termed Robust POMDPs (RPOMDPs), which allows dynamic decision-making when there is ambiguity regarding transition probabilities. This extension enables making robust decisions by reducing the reliance on a single probabilistic model of transitions, while still allowing for imperfect state observations. We develop dynamic programming equations for solving RPOMDPs, provide a sufficient statistic and an information state, discuss ways in which their computational complexity can be reduced, and connect them to stochastic zero-sum games with imperfect private monitoring.
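The max-min flavor of robust dynamic programming over beliefs can be sketched on a toy instance. The sketch below is an illustrative assumption, not the paper's formulation (which involves subtler information-state and monitoring issues): it uses a hypothetical 2-state, 2-action, 2-observation problem in which nature adversarially picks, per action, a transition matrix from a finite ambiguity set, and the decision-maker maximizes the worst-case finite-horizon value of the belief.

```python
# Hypothetical 2-state, 2-action, 2-observation instance
# (all numbers illustrative, not from the paper).
REWARD = [[1.0, 0.0],   # r(s=0, a=0), r(s=0, a=1)
          [0.0, 2.0]]   # r(s=1, a=0), r(s=1, a=1)
OBS = [[0.8, 0.2],      # P(o | s'=0)
       [0.3, 0.7]]      # P(o | s'=1)
# Ambiguity set: per action, a finite set of plausible transition matrices.
T_SET = {
    0: [[[0.9, 0.1], [0.2, 0.8]],
        [[0.7, 0.3], [0.4, 0.6]]],
    1: [[[0.5, 0.5], [0.5, 0.5]],
        [[0.6, 0.4], [0.3, 0.7]]],
}

def predict(b, T):
    """Predicted next-state distribution under transition model T."""
    return [sum(b[s] * T[s][sp] for s in range(2)) for sp in range(2)]

def robust_value(b, horizon):
    """Finite-horizon robust Bellman recursion on the belief:
    max over actions, min over transition models in the ambiguity set."""
    if horizon == 0:
        return 0.0
    best = float("-inf")
    for a in (0, 1):
        worst = float("inf")
        for T in T_SET[a]:
            val = sum(b[s] * REWARD[s][a] for s in range(2))
            bp = predict(b, T)
            for o in (0, 1):
                po = sum(bp[sp] * OBS[sp][o] for sp in range(2))
                if po > 1e-12:
                    # Bayes update of the belief given (a, o) and model T
                    b_next = [bp[sp] * OBS[sp][o] / po for sp in range(2)]
                    val += po * robust_value(b_next, horizon - 1)
            worst = min(worst, val)
        best = max(best, worst)
    return best
```

The inner minimization is what makes the recursion a zero-sum game against nature; collapsing `T_SET[a]` to a single nominal matrix recovers the standard POMDP belief-MDP backup.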
M. Rasouli and S. Saghafian. "Robust Partially Observable Markov Decision Processes." SSRN, June 13, 2018. DOI: 10.2139/ssrn.3195310. DecisionSciRN: Simulation Based Optimization (Topic).