Pub Date: 2024-07-25 | DOI: 10.1007/s11129-024-09286-z
Anja Lambrecht, Catherine Tucker
Digital algorithms try to display content that engages consumers. To do this, an algorithm must overcome a ‘cold-start problem’ by swiftly learning whether content engages users, which requires feedback from those users. The algorithm targets segments of users; if a targeted segment contains fewer individuals, simply because that group is rarer in the population, outcomes may be uneven for minority relative to majority groups. This is because individuals in a minority segment are proportionately more likely to be test subjects for experimental content that may ultimately be rejected by the platform. We explore whether this is indeed the case in the context of ads displayed following searches on Google. Previous research has documented that searches for names associated, in a US context, with Black people were more likely to return ads highlighting the need for a criminal background check than were searches for names associated with white people. We implement search advertising campaigns that target ads to searches for Black and white names. Our ads are indeed more likely to be displayed following a search for a Black name, even though the likelihood of a click is similar. Because Black names are less common, the algorithm learns about the quality of the underlying ad more slowly. As a result, an ad is more likely to persist next to searches for Black names than next to searches for white names. Proportionally more searches for Black names are thus likely to have a low-quality ad shown next to them, even though the ad will eventually be rejected. A second study, in which ads are placed following searches for terms related to religious discrimination, confirms this empirical pattern. Our results suggest that, as a practical matter, real-time algorithmic learning can make minority segments more likely to see content that will ultimately be rejected by the algorithm.
Title: Apparent algorithmic discrimination and real-time algorithmic learning in digital search advertising
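The mechanism in the abstract (feedback accrues only when the targeted query actually occurs, so rarer segments generate evidence more slowly) can be sketched in a few lines. This is a stylized illustration, not the authors' model: the feedback threshold and daily search volumes are hypothetical.

```python
import math

def days_until_rejected(searches_per_day: int, min_feedback: int = 1000) -> int:
    """Stylized cold-start rule: the platform keeps serving an experimental ad
    until it has accumulated `min_feedback` impressions of user feedback, at
    which point a low-quality ad is rejected. Feedback arrives only when the
    targeted query is actually searched."""
    return math.ceil(min_feedback / searches_per_day)

# Hypothetical volumes: majority-name queries occur five times as often.
majority_days = days_until_rejected(searches_per_day=500)  # 2 days
minority_days = days_until_rejected(searches_per_day=100)  # 10 days

# The same low-quality ad persists five times longer next to the rarer queries,
# so proportionally more minority-name searches see it before it is rejected.
```

The point of the sketch is only the asymmetry: under equal click behavior, evidence accumulates in proportion to query volume, so the rejection date scales inversely with how common the targeted segment is.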
Pub Date: 2024-07-05 | DOI: 10.1007/s11129-024-09284-1
Michael Allan Ribers, Hannes Ullrich
Artificial intelligence has the potential to improve human decisions in complex environments, but its effectiveness can remain limited if humans hold context-specific private information. Using the empirical example of antibiotic prescribing for urinary tract infections, we show that full automation of prescribing fails to improve on physician decisions. Instead, optimally delegating a share of decisions to physicians, where they possess private diagnostic information, effectively utilizes the complementarity between algorithmic and human decisions. Combining physician and algorithmic decisions can reduce inefficient overprescribing of antibiotics by 20.3 percent.
Title: Complementarities between algorithmic and human decision-making: The case of antibiotic prescribing
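The delegation idea, automating clear-cut cases while routing ambiguous ones (where private diagnostic information matters most) to the physician, can be illustrated with a simple threshold rule. The risk score and thresholds below are hypothetical; the paper's delegation rule is estimated optimally, not assumed.

```python
def route_decision(risk_score: float, low: float = 0.2, high: float = 0.8) -> str:
    """Stylized delegation rule: the algorithm decides when its predicted
    bacterial-infection risk is clear-cut; ambiguous cases are delegated to
    the physician, who may hold private diagnostic information that the
    algorithm cannot observe."""
    if risk_score >= high:
        return "algorithm: prescribe"
    if risk_score <= low:
        return "algorithm: withhold"
    return "delegate to physician"

# Hypothetical batch of cases with algorithm-predicted risk scores.
scores = [0.05, 0.15, 0.35, 0.55, 0.75, 0.95]
decisions = [route_decision(s) for s in scores]
delegated_share = decisions.count("delegate to physician") / len(decisions)
```

Tightening the band (raising `low`, lowering `high`) moves the rule toward full automation; widening it moves toward full physician discretion, which is the margin along which the complementarity operates.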
Pub Date: 2024-06-25 | DOI: 10.1007/s11129-024-09283-2
Mina Ameri, Elisabeth Honka, Ying Xie
The rapid adoption of online streaming and over-the-top services has fundamentally changed at-home entertainment media consumption and given rise to new behaviors that are often characterized by a high intensity of watching (e.g., binge-watching). In this paper, we investigate how watching intensity affects consumers’ engagement with media franchises in two areas: personal and interactive engagement. The former involves consumers’ adoption and consumption of franchise extensions; the latter concerns consumers’ content generation related to a focal media product they watched. Using individual-level data from an online anime (Japanese cartoons) platform, we find inverse U-shaped effects of watching intensity, with the largest effects around three to five hours of watching per day for personal engagement and two to four hours per day for interactive engagement. The positive effects of watching intensity are larger for sequels than for other types of franchise extensions. For interactive engagement, our results show that, conditional on rating submission, higher watching intensity is associated with a higher valence of anime ratings, the most prevalent form of user-generated content (UGC) on the platform. We interpret this result as evidence that watching intensity can induce liking.
Title: Watching intensity and media franchise engagement
Pub Date: 2024-06-18 | DOI: 10.1007/s11129-024-09280-5
Edward J. Fox, Hristina Pulgar, John H. Semple
This paper tests a theory of strategic multi-product choice (SMPC) using empirical evidence from a large-scale choice experiment, two smaller longitudinal choice experiments, and multi-market panel data. Multi-product choice involves two stages. In the first stage, the consumer chooses a set of substitutable products, where “set” refers to both the variety of alternatives and the quantities of each. In the second stage, the set is consumed. Assuming consumers are strategic, their consumption decisions will consider both the utility of whichever product is selected for consumption and the expected utility (i.e., value) of the set that remains. SMPC therefore requires a dynamic model. We test two such dynamic models in this paper. These models are derived from a basic random utility framework with a stochastic error term for the utility of each product alternative at the moment of consumption. Despite maintaining state variables for the quantity of every alternative, these SMPC dynamic models offer both a value function and optimal consumption policy in closed form. These structures allow us to test for strategic consumption in the second stage and for optimality of the choice sets selected in the first stage. Data from the large-scale choice experiment and the smaller longitudinal choice experiments support strategic consumer decision-making, consistent with SMPC theory. SMPC theory further predicts that the amount of variety consumers select will be higher for lower consumption rates and lower for higher consumption rates. Evidence from panel data of yogurt purchases supports this prediction. While we find that consumption choices are consistent with SMPC theory, they are not consistent with alternative explanations such as variety seeking or diversification bias. 
Viewed in its entirety, the empirical evidence presented in this paper confirms that both the choice set selected and the way it is consumed are consistent with dynamic models of future preference uncertainty.
Title: Testing a theory of strategic multi-product choice
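The paper's closed-form value function is not reproduced here, but the general structure it describes, a value defined over the remaining quantities of each alternative with a stochastic shock at each consumption occasion, can be sketched generically. Assuming i.i.d. Gumbel shocks gives the familiar log-sum-exp recursion; the deterministic utilities below are hypothetical, and the Euler constant is normalized away as is standard in discrete-choice models.

```python
import math
from functools import lru_cache

v = (1.0, 0.4)  # hypothetical deterministic utilities of two substitutes

@lru_cache(maxsize=None)
def value(n1: int, n2: int) -> float:
    """Expected utility of optimally consuming a set holding n1 units of
    product 1 and n2 units of product 2, when each consumption occasion adds
    an i.i.d. Gumbel shock to each remaining alternative (log-sum-exp closed
    form, Euler constant normalized away)."""
    if n1 == 0 and n2 == 0:
        return 0.0
    options = []
    if n1 > 0:
        options.append(v[0] + value(n1 - 1, n2))
    if n2 > 0:
        options.append(v[1] + value(n1, n2 - 1))
    return math.log(sum(math.exp(o) for o in options))

# Option value of variety: a mixed set beats loading up on the better product
# alone, even though product 1 has the higher deterministic utility.
mixed = value(2, 2)   # ≈ 4.59
loaded = value(4, 0)  # = 4.0
```

The comparison at the bottom illustrates why a strategic consumer facing future preference uncertainty selects variety in the first stage: the shocks make the flexibility of a mixed set worth more than its deterministic utilities alone suggest.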
Pub Date: 2024-05-29 | DOI: 10.1007/s11129-024-09281-4
Jae Hyen Chung, Pradeep Chintagunta, Sanjog Misra
We propose a new approach to simulate the likelihood of the sequential search model. By allowing search costs to be heterogeneous across consumers and products, we directly compute the joint probability of the search and purchase decisions when consumers are searching for the idiosyncratic preference shocks in their utility functions. Under the assumptions of Weitzman’s sequential search algorithm, the proposed procedure recursively makes random draws for each quantity that requires numerical integration while enforcing the conditions stipulated by the algorithm. In an extensive simulation study, we compare the proposed method with existing likelihood simulators that have recently been used to estimate the sequential search model. The proposed method attributes the uncertainty in the search order to the consumer-product-level distribution of search costs and the uncertainty in the purchase decision to the distribution of match values across consumers and products. This results in more precise estimation and an improvement in prediction accuracy. We also show that the proposed method allows for different assumptions on the search cost distribution and that it recovers consumers’ relative preferences even if the utility function and/or the search cost distribution is mis-specified. We then apply our approach to online search data from Expedia for field-data validation. From a substantive perspective, we find that search costs and “position” effects affect products in the lower part of the product listing page more than they do those in the upper part of the page.
Title: Simulated maximum likelihood estimation of the sequential search model
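Weitzman's algorithm, which the estimator's conditions are built on, assigns each product a reservation value z solving cost = E[max(u - z, 0)], searches in descending order of z, and stops once the best realized utility exceeds the highest reservation value still unsearched. A sketch under normal utility shocks follows; the means, costs, and N(0, 1) shocks are hypothetical, and the paper's likelihood simulator handles the resulting probabilities by simulation rather than by this direct simulation of choices.

```python
import math
import random

def _pdf(x: float) -> float:
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _cdf(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def reservation_value(mean: float, cost: float, sd: float = 1.0) -> float:
    """Solve cost = E[max(u - z, 0)] for z by bisection, with u ~ N(mean, sd).
    The expected gain from one more search is strictly decreasing in z."""
    def expected_gain(z: float) -> float:
        a = (z - mean) / sd
        return sd * (_pdf(a) - a * (1 - _cdf(a)))
    lo, hi = mean - 10 * sd, mean + 10 * sd
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_gain(mid) > cost:
            lo = mid  # gain still exceeds cost, so z lies higher
        else:
            hi = mid
    return (lo + hi) / 2

def weitzman_search(means, costs, rng: random.Random):
    """Search in descending reservation-value order; stop when the best
    utility found so far exceeds the next reservation value; buy the best."""
    z = [reservation_value(m, c) for m, c in zip(means, costs)]
    order = sorted(range(len(means)), key=lambda j: -z[j])
    best, searched = -math.inf, []
    for j in order:
        if best >= z[j]:
            break
        searched.append(j)
        best = max(best, means[j] + rng.gauss(0, 1))
    return searched, best

rng = random.Random(0)
searched, best = weitzman_search([1.0, 0.5, 0.0], [0.1, 0.1, 0.1], rng)
```

With equal search costs, reservation values inherit the ordering of the means, so the highest-mean product is always searched first; heterogeneous costs are what make the search order itself stochastic from the econometrician's viewpoint.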
Pub Date: 2024-04-05 | DOI: 10.1007/s11129-023-09278-5
Günter J. Hitsch, Sanjog Misra, Walter W. Zhang
We present a general framework to target customers using optimal targeting policies, and we document the profit differences from alternative estimates of the optimal targeting policies. Two foundations of the framework are conditional average treatment effects (CATEs) and off-policy evaluation using data with randomized targeting. This policy evaluation approach allows us to evaluate an arbitrary number of different targeting policies using only one randomized data set and thus provides large cost advantages over conducting a corresponding number of field experiments. We use different CATE estimation methods to construct and compare alternative targeting policies. Our particular focus is on the distinction between indirect and direct methods. The indirect methods predict the CATEs using a conditional expectation function estimated on outcome levels, whereas the direct methods specifically predict the treatment effects of targeting. We introduce a new direct estimation method called treatment effect projection (TEP). The TEP is a non-parametric CATE estimator that we regularize using a transformed outcome loss which, in expectation, is identical to a loss that we could construct if the individual treatment effects were observed. The empirical application is to a catalog mailing with a high-dimensional set of customer features. We document the profits of the estimated policies using data from two campaigns conducted one year apart, which allows us to assess the transportability of the predictions to a campaign implemented one year after collecting the training data. All estimates of the optimal targeting policies yield larger profits than uniform policies that target none or all customers. Further, there are significant profit differences across the methods, with the direct estimation methods yielding substantially larger economic value than the indirect methods.
Title: Heterogeneous treatment effects and optimal targeting policy evaluation
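The transformed-outcome construction that the TEP loss builds on is standard: with randomized treatment W in {0, 1} at known propensity e, the variable Y* = Y(W - e) / (e(1 - e)) satisfies E[Y* | X] = CATE(X), so Y* can stand in for the unobserved individual treatment effect as a regression target. A self-contained simulation follows; the data-generating process is hypothetical and only illustrates the identity.

```python
import random

random.seed(1)
e = 0.5  # known randomization (propensity) probability

def transformed_outcome(y: float, w: int, e: float) -> float:
    """Y* = Y (W - e) / (e (1 - e)). Under randomized treatment, the
    conditional expectation E[Y* | X] equals the CATE at X."""
    return y * (w - e) / (e * (1 - e))

# Hypothetical DGP: the true CATE is 2.0 when x = 1 and 0.0 when x = 0.
data = []
for _ in range(200_000):
    x = int(random.random() < 0.5)
    w = int(random.random() < e)
    tau = 2.0 if x else 0.0
    y = 1.0 + tau * w + random.gauss(0, 1)
    data.append((x, w, y))

# Group means of Y* recover the CATE in each segment, up to sampling noise.
cate_hat = {
    xv: sum(transformed_outcome(y, w, e) for x, w, y in data if x == xv)
        / sum(1 for x, w, y in data if x == xv)
    for xv in (0, 1)
}
```

In practice a flexible regression of Y* on X replaces the group means; the high variance of Y* is exactly why the paper regularizes the resulting loss rather than using it raw.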
Pub Date: 2024-03-07 | DOI: 10.1007/s11129-024-09279-y
Avi Goldfarb, Mo Xiao
This paper investigates the incidence of limited attention in a high-stakes business setting: a bar owner may be unable to purge transitory shocks from noisy profit signals when deciding whether to exit. Combining a 24-year monthly panel on the alcohol revenues of every bar in Texas with weather data, we find suggestive evidence that inexperienced, distantly located owners may overreact to the transitory component of revenue relative to the persistent component. This apparent asymmetric response is muted under higher revenue fluctuations. We formulate and estimate a structural model to endogenize attention allocation by owners with different thinking costs. Under the assumptions of the model, we find that 3.9% of bars make incorrect exit decisions due to limited attention. As exits are irreversible, permanent decisions, small mistakes at the margin in interpreting profit signals can lead to large welfare losses for entrepreneurs.
Title: Transitory shocks, limited attention, and a firm’s decision to exit
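The signal-extraction problem at the heart of the paper can be sketched with a one-period Bayesian filter. This is a stylized illustration, not the authors' structural model: the variances, the overreaction parameter, and the exit threshold below are all hypothetical.

```python
def bayes_weight(var_persistent: float, var_transitory: float) -> float:
    """Optimal weight on the revenue surprise: the share of revenue variance
    that is persistent (the signal-to-noise ratio)."""
    return var_persistent / (var_persistent + var_transitory)

def belief(prior: float, revenue: float, var_p: float, var_t: float,
           overreaction: float = 0.0) -> float:
    """Posterior mean of persistent profitability after one revenue draw.
    overreaction > 0 mimics limited attention: the transitory component is
    not fully purged, so a one-off shock moves beliefs too much."""
    k = min(1.0, bayes_weight(var_p, var_t) + overreaction)
    return prior + k * (revenue - prior)

# A bad-weather month: revenue dips to 0.4 against a prior of 1.0, with most
# revenue variance transitory (var_p = 1, var_t = 3, so the Bayes weight is 0.25).
attentive   = belief(1.0, 0.4, 1.0, 3.0)                    # 1 + 0.25 * (-0.6) = 0.85
inattentive = belief(1.0, 0.4, 1.0, 3.0, overreaction=0.5)  # 1 + 0.75 * (-0.6) = 0.55

# With a hypothetical exit threshold of 0.6, only the inattentive owner exits
# after a purely transitory dip, an irreversible mistake.
threshold = 0.6
```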
Pub Date: 2023-12-15 | DOI: 10.1007/s11129-023-09271-y
Preyas S. Desai, Pranav Jindal
In this paper, we study the incentives of vertically differentiated firms to offer Buy Now, Pay Later (BNPL) in a competitive market. BNPL is a relatively new payment mechanism which, at the point of sale, allows consumers to pay for a product in interest-free installments spread out over a few weeks or months. For a monopolist, offering BNPL is essentially about expanding the market by offering financing to consumers who cannot afford its product. Therefore, a monopolist is always better off providing BNPL to its consumers. However, in a competitive environment, offering BNPL is a more complex strategic decision because retailers also need to consider strategic reactions from their competitors. We find that in a competitive situation either of the two retailers might refrain from offering BNPL. This is because when one retailer offers BNPL, the other firm not offering BNPL also benefits from competitive spillovers. Although a monopolist’s benefit from offering BNPL increases in its product quality, in a competitive environment, holding all else constant, a low-quality firm might have more to gain from offering BNPL. In addition to asymmetric equilibria, we also find a symmetric equilibrium in which both retailers offer BNPL. In view of public concerns about the possible negative impact of BNPL on consumers, we also study how BNPL consumers’ ignoring the cost of using BNPL can adversely affect them. We find that underestimation of these costs lowers consumer welfare, and this reduction in welfare stems from three different sources: (i) higher product prices, (ii) excessive purchases, and (iii) excessive upgrades to the higher-quality product.
Title: Better with buy now, pay later?: A competitive analysis
Pub Date: 2023-11-28 | DOI: 10.1007/s11129-023-09273-w
Diego Aparicio, Zachary Metzman, Roberto Rigobon
This paper documents the differences in pricing strategies between online and offline (brick-and-mortar) channels. We collect price data for identical products from leading online grocery retailers in the United States and complement it with offline data for the same products from scanner data. Our findings reveal a consistent pattern: online retailers exhibit higher price dispersion than their offline counterparts. More specifically, online grocers employ pricing algorithms that amplify price discrimination along three key dimensions: (1) over time (through frequent price changes), (2) across locations (by charging varying prices based on delivery zip codes), and (3) across sellers (by setting dispersed prices for identical products across rival retailers).
Title: The pricing strategies of online grocery retailers
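Each of the three dispersion dimensions the paper documents (over time, across delivery zip codes, across sellers) can be summarized with a scale-free statistic such as the coefficient of variation, so dispersion is comparable across products with different price levels. The price quotes below are hypothetical, for illustration only.

```python
from statistics import mean, pstdev

def coef_variation(prices) -> float:
    """Coefficient of variation: standard deviation divided by the mean,
    a scale-free measure of price dispersion."""
    return pstdev(prices) / mean(prices)

# Hypothetical quotes for one identical product.
over_time      = [3.99, 4.29, 3.79, 4.49, 3.99]  # same store, successive weeks
across_zips    = [3.99, 4.19, 4.39]              # same week, delivery zip codes
across_sellers = [3.99, 4.79, 3.59]              # same week, rival online grocers

dispersion = {
    "time": coef_variation(over_time),
    "location": coef_variation(across_zips),
    "seller": coef_variation(across_sellers),
}
```

In the paper's setting, each dimension is computed over many products and retailers; the sketch only shows the unit of measurement.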