We address an important problem in offshore wind farm design: the combined optimization of turbine locations and of the connection cables required to bring the electrical power produced by the turbines to a given substation, and eventually to shore. We first describe a mixed-integer linear programming model that combines previous proposals from the literature. We then strengthen it with a number of additional inequalities intended to tighten its linear programming relaxation. In particular, we propose new classes of Benders-like cuts derived from an induced-clique substructure of the problem. The validity of these cuts is established in a purely combinatorial way, without resorting to Benders’s standard duality theory, and efficient separation procedures are proposed. Computational tests show that the proposed cuts significantly improve the dual bound provided by the standard model. We also present an exact branch-and-cut solver for the problem, which separates the new cuts at run time. Computational results confirm that the new cuts are instrumental to the success of our exact solver. This paper was accepted by Chung-Piaw Teo, optimization.
{"title":"Integrated Layout and Cable Routing in Wind Farm Optimal Design","authors":"Martina Fischetti, M. Fischetti","doi":"10.1287/mnsc.2022.4470","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4470","url":null,"abstract":"We address a very important problem in offshore wind farm design, namely, the combined optimization of the turbine location and of the connection cables required to bring the electrical power produced by the turbines to a given substation, and eventually to shore. We first describe a mixed-integer linear programming model that combines previous proposals from the literature. Then we improve it by a number of additional inequalities intended to strengthen its linear programming relaxation. In particular, we propose new classes of Benders-like cuts derived from an induced-clique substructure of the problem. The validity of these cuts is established in a purely combinatorial way, without resorting to Benders’s standard duality theory, and efficient separation procedures are proposed. The practical effectiveness of the proposed cuts is established through computational tests, showing that they do improve very significantly the dual bound provided by the standard model. We also present an exact branch-and-cut solver for the problem, which separates the new cuts at run time. Computational results confirm that the new cuts are instrumental for the success of our exact solver. This paper was accepted by Chung-Piaw Teo, optimization.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"20 1","pages":"2147-2164"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88841346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In many markets, buyers sign advance contracts before actual decisions on transactions or consumptions are made. Therefore, a buyer may have private information on expected payoff at the contracting stage, and as time moves on, new information on other components of payoff may arrive. However, prior information can be losable, forgettable, or unattended. In this paper, we investigate how limited memory may influence the optimal design of contracts for sequential screening. Despite memory loss, the buyer can make ex post inference about her initially informed type from the chosen contract. As ex ante screening facilitates subsequent retrospection, the chosen contract can serve as a self-reminding instrument. This would yield an endogenous demand for separation in ex ante contract choice. In response, distortions in the optimal contract design can be either mitigated or intensified, leading to improved or undermined social welfare, respectively. As a result, the equilibrium buyer surplus can be higher than that under perfect memory. We also show that the buyer can exhibit the so-called flat-rate bias, even though her preference is time consistent and perfectly predicted. In addition, as memory can be perfectly recovered from the equilibrium contract choice, investing in any other memory-improving instrument is redundant. Moreover, the buyer’s demand for screening can induce her to choose a dominated refund contract. Nevertheless, when dominance must be obeyed, the seller may offer a menu of refund contracts with two-way distortions. This paper was accepted by Matthew Shum, marketing.
{"title":"The Mnemonomics of Contractual Screening","authors":"Liang Guo","doi":"10.2139/ssrn.3930657","DOIUrl":"https://doi.org/10.2139/ssrn.3930657","url":null,"abstract":"In many markets, buyers sign advance contracts before actual decisions on transactions or consumptions are made. Therefore, a buyer may have private information on expected payoff at the contracting stage, and as time moves on, new information on other components of payoff may arrive. However, prior information can be losable, forgettable, or unattended. In this paper, we investigate how limited memory may influence the optimal design of contracts for sequential screening. Despite memory loss, the buyer can make ex post inference about her initially informed type from the chosen contract. As ex ante screening facilitates subsequent retrospection, the chosen contract can serve as a self-reminding instrument. This would yield an endogenous demand for separation in ex ante contract choice. In response, distortions in the optimal contract design can be either mitigated or intensified, leading to improved or undermined social welfare, respectively. As a result, the equilibrium buyer surplus can be higher than that under perfect memory. We also show that the buyer can exhibit the so-called flat-rate bias, even though her preference is time consistent and perfectly predicted. In addition, as memory can be perfectly recovered from the equilibrium contract choice, investing on any other memory-improving instrument is redundant. Moreover, the buyer’s demand for screening can induce her to choose dominated refund contract. Nevertheless, when dominance must be obeyed, the seller may offer a menu of refund contracts with two-way distortions. This paper was accepted by Matthew Shum, marketing.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"5 1","pages":"1739-1757"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89084211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Florida, an important state in presidential elections in the United States, has received considerable media coverage in recent years for long lines to vote. Do some segments of the population receive a disproportionate share of the resources to serve the voting process, which could encourage some or dissuade others from voting? We conduct the first empirical panel data study to examine whether minority and Democrat voters in Florida experience lower poll worker staffing, which could lengthen the time to vote. We do not find evidence of a disparity directly due to race. Instead, we observe a political party effect—all else equal, a 1% increase in the percentage of voters registered as Democrat in a county increases the number of registered voters per poll worker by 3.5%. This effect appears to be meaningful—using a voting queue simulation, a 5% increase in voters registered as Democrat in a county could increase the average wait time to vote from 40 minutes (the approximate average wait time to vote in Florida in 2012 and the highest average wait time across all states in that election per the Cooperative Congressional Election Study) to about 115 minutes. This paper was accepted by Vishal Gaur, operations management.
{"title":"Serving Democracy: Evidence of Voting Resource Disparity in Florida","authors":"Gérard P. Cachon, Dawson Kaaua","doi":"10.1287/mnsc.2022.4497","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4497","url":null,"abstract":"Florida, an important state in presidential elections in the United States, has received considerable media coverage in recent years for long lines to vote. Do some segments of the population receive a disproportionate share of the resources to serve the voting process, which could encourage some or dissuade others from voting? We conduct the first empirical panel data study to examine whether minority and Democrat voters in Florida experience lower poll worker staffing, which could lengthen the time to vote. We do not find evidence of a disparity directly due to race. Instead, we observe a political party effect—all else equal, a 1% increase in the percentage of voters registered as Democrat in a county increases the number of registered voters per poll worker by 3.5%. This effect appears to be meaningful—using a voting queue simulation, a 5% increase in voters registered as Democrat in a county could increase the average wait time to vote from 40 minutes (the approximate average wait time to vote in Florida in 2012 and the highest average wait time across all states in that election per the Cooperative Congressional Election Study) to about 115 minutes. This paper was accepted by Vishal Gaur, operations management.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"23 1","pages":"6687-6696"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89076166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The availability of consumer data is inducing a growing number of firms to adopt more personalized pricing policies. This affects both the performance of, and the competition between, alternative distribution channels, which in turn has implications for firms’ distribution strategies. We develop a formal model to examine a brand manufacturer’s choice between mono distribution (selling only through its own direct channel) or dual distribution (selling through an independent retailer as well). We consider different demand patterns, covering both horizontal and vertical differentiation and different pricing regimes, with the manufacturer and retailer each charging personalized prices or a uniform price. We show that dual distribution is optimal for a large number of cases. In particular, this is always the case when the channels are horizontally differentiated, regardless of the pricing regime; moreover, if both firms charge personalized prices, a well-designed wholesale tariff allows them to extract the entire consumer surplus. These insights obtained here for the case of intrabrand competition between vertically related firms are thus in stark contrast to those obtained for interbrand competition, where personalized pricing dissipates industry profit. With vertical differentiation, dual distribution remains optimal if the manufacturer charges a uniform price. By contrast, under personalized pricing, mono distribution can be optimal when the retailer does not expand demand sufficiently. Interestingly, the industry profit may be largest in a hybrid pricing regime, in which the manufacturer forgoes the use of personalized pricing and only the retailer charges personalized prices. This paper was accepted by Joshua Gans, business strategy.
{"title":"Personalized Pricing and Distribution Strategies","authors":"B. Jullien, Markus Reisinger, P. Rey","doi":"10.1287/mnsc.2022.4437","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4437","url":null,"abstract":"The availability of consumer data is inducing a growing number of firms to adopt more personalized pricing policies. This affects both the performance of, and the competition between, alternative distribution channels, which in turn has implications for firms’ distribution strategies. We develop a formal model to examine a brand manufacturer’s choice between mono distribution (selling only through its own direct channel) or dual distribution (selling through an independent retailer as well). We consider different demand patterns, covering both horizontal and vertical differentiation and different pricing regimes, with the manufacturer and retailer each charging personalized prices or a uniform price. We show that dual distribution is optimal for a large number of cases. In particular, this is always the case when the channels are horizontally differentiated, regardless of the pricing regime; moreover, if both firms charge personalized prices, a well-designed wholesale tariff allows them to extract the entire consumer surplus. These insights obtained here for the case of intrabrand competition between vertically related firms are thus in stark contrast to those obtained for interbrand competition, where personalized pricing dissipates industry profit. With vertical differentiation, dual distribution remains optimal if the manufacturer charges a uniform price. By contrast, under personalized pricing, mono distribution can be optimal when the retailer does not expand demand sufficiently. Interestingly, the industry profit may be largest in a hybrid pricing regime, in which the manufacturer forgoes the use of personalized pricing and only the retailer charges personalized prices. This paper was accepted by Joshua Gans, business strategy.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"16 1","pages":"1687-1702"},"PeriodicalIF":0.0,"publicationDate":"2022-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73891829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We study the joint effects of motivation and workload on human servers’ service time. Using operational and survey data from a call center with a pooled queue structure and limited financial incentives, we examine how individual differences between servers’ trait intrinsic motivation (IM) and extrinsic motivation (EM) impact their average offline, online, and total service times in response to changing workloads. We find significant differences in the patterns of workload and service time relationships across different stages of the service request among servers possessing different combinations of trait motivation. For example, servers with a combination of high IM and low EM were approximately 15% (161%) faster in processing the offline portion of service requests than their peers with the opposite combination (low and high) when workload levels were low (high), respectively. In contrast, servers with high IM-low EM were approximately 35% (5%) slower in processing the online portion of service requests than their low IM-high EM counterparts when workload levels were low (high), respectively. Our findings suggest important nuances in how servers with different trait motivation types respond to changing workload across different stages of the service request. The behavioral pattern shown by high IM-low EM servers is consistent with the preferences of productivity-seeking call center managers who favor speedup and slowdown at certain stages of the service request, conditional on workload. These findings underscore the importance of accounting for trait-based individual differences for a more complete understanding of the complex relationship between workload and service time. This paper was accepted by Charles Corbett, operations management.
{"title":"Impact of Motivation and Workload on Service Time Components: An Empirical Analysis of Call Center Operations","authors":"Ahmad M. Ashkanani, Benjamin B. Dunford, K. Mumford","doi":"10.1287/mnsc.2022.4491","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4491","url":null,"abstract":"We study the joint effects of motivation and workload on human servers’ service time. Using operational and survey data from a call center with a pooled queue structure and limited financial incentives, we examine how individual differences between servers’ trait intrinsic motivation (IM) and extrinsic motivation (EM) impact their average offline, online, and total service times in response to changing workloads. We find significant differences in the patterns of workload and service time relationships across different stages of the service request between servers possessing different combinations of trait motivation. For example, servers with a combination of high IM and low EM were approximately 15% (161%) faster in processing the offline portion of service requests than their peers with the opposite combination (low and high) when workload levels were low (high), respectively. In contrast, servers with high IM-low EM were approximately 35% (5%) slower in processing the online portion of service requests than their low IM-high EM counterparts when workload levels were low (high), respectively. Our findings suggest important nuances in how servers with different trait motivation types respond to changing workload across different stages of the service request. The behavioral pattern shown by high IM-low EM servers is consistent with the preferences of productivity-seeking call center managers who favor speedup and slowdown at certain stages of the service request, conditional to workload. These findings underscore the importance of accounting for trait-based individual differences for a more complete understanding of the complex relationship between workload and service time. This paper was accepted by Charles Corbett, operations management.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"99 1","pages":"6697-6715"},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81231546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we study the learning problem in contextual search, which is motivated by applications such as crowdsourcing and personalized medicine experiments. In particular, for a sequence of arriving context vectors, with each context associated with an underlying value, the decision maker either makes a query at a certain point or skips the context. The decision maker will only observe the binary feedback on the relationship between the query point and the value associated with the context. We study a probably approximately correct learning setting, where the goal is to learn the underlying mean value function in context with a minimum number of queries. To address this challenge, we propose a trisection search approach combined with a margin-based active learning method. We show that the algorithm only needs to make [Formula: see text] queries to achieve an ε-estimation accuracy. This sample complexity significantly reduces the required sample complexity in the passive setting where neither sample skipping nor query selection is allowed, which is at least [Formula: see text]. This paper was accepted by J. George Shanthikumar, data science.
{"title":"Active Learning for Contextual Search with Binary Feedback","authors":"Xi Chen, Quanquan C. Liu, Yining Wang","doi":"10.1287/mnsc.2022.4473","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4473","url":null,"abstract":"In this paper, we study the learning problem in contextual search, which is motivated by applications such as crowdsourcing and personalized medicine experiments. In particular, for a sequence of arriving context vectors, with each context associated with an underlying value, the decision maker either makes a query at a certain point or skips the context. The decision maker will only observe the binary feedback on the relationship between the query point and the value associated with the context. We study a probably approximately correct learning setting, where the goal is to learn the underlying mean value function in context with a minimum number of queries. To address this challenge, we propose a trisection search approach combined with a margin-based active learning method. We show that the algorithm only needs to make [Formula: see text] queries to achieve an ε-estimation accuracy. This sample complexity significantly reduces the required sample complexity in the passive setting where neither sample skipping nor query selection is allowed, which is at least [Formula: see text]. This paper was accepted by J. George Shanthikumar, data science.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"91 1","pages":"2165-2181"},"PeriodicalIF":0.0,"publicationDate":"2022-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85811630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design of performance-based incentives—commonly used in online labor platforms—can be naturally posed as a moral hazard principal-agent problem. In this setting, a key input to the principal’s optimal contracting problem is the agent’s production function: the dependence of agent output on effort. Although agent production is classically assumed to be known to the principal, this is unlikely to be the case in practice. Motivated by the design of performance-based incentives, we present a method for estimating a principal-agent model from data on incentive contracts and associated outcomes, with a focus on estimating agent production. The proposed estimator is statistically consistent and can be expressed as a mathematical program. To circumvent computational challenges with solving the estimation problem exactly, we approximate it as an integer program, which we solve through a column generation algorithm that uses hypothesis tests to select variables. We show that our approximation scheme and solution technique both preserve the estimator’s consistency and combine to dramatically reduce the computational time required to obtain sound estimates. To demonstrate our method, we conducted an experiment on a crowdwork platform (Amazon Mechanical Turk) by randomly assigning incentive contracts with varying pay rates among a pool of workers completing the same task. We present numerical results illustrating how our estimator combined with experimentation can shed light on the efficacy of performance-based incentives. This paper was accepted by Chung-Piaw Teo, optimization.
{"title":"Estimating Effects of Incentive Contracts in Online Labor Platforms","authors":"Nur Kaynar, Auyon Siddiq","doi":"10.1287/mnsc.2022.4450","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4450","url":null,"abstract":"The design of performance based incentives—commonly used in online labor platforms—can be naturally can be naturally posed as a moral hazard principal-agent problem. In this setting, a key input to the principal’s optimal contracting problem is the agent’s production function: the dependence of agent output on effort. Although agent production is classically assumed to be known to the principal, this is unlikely to be the case in practice. Motivated by the design of performance-based incentives, we present a method for estimating a principal-agent model from data on incentive contracts and associated outcomes, with a focus on estimating agent production. The proposed estimator is statistically consistent and can be expressed as a mathematical program. To circumvent computational challenges with solving the estimation problem exactly, we approximate it as an integer program, which we solve through a column generation algorithm that uses hypothesis tests to select variables. We show that our approximation scheme and solution technique both preserve the estimator’s consistency and combine to dramatically reduce the computational time required to obtain sound estimates. To demonstrate our method, we conducted an experiment on a crowdwork platform (Amazon Mechanical Turk) by randomly assigning incentive contracts with varying pay rates among a pool of workers completing the same task. We present numerical results illustrating how our estimator combined with experimentation can shed light on the efficacy of performance-based incentives. This paper was accepted by Chung Piaw Teo, optimization.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"2 1","pages":"2106-2126"},"PeriodicalIF":0.0,"publicationDate":"2022-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83669367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In order to thrive, organizations need to build and maintain an ability to meet unexpected external challenges. Yet many organizations are sluggish: their capabilities can only undergo incremental changes over time. What are the stochastic processes governing “routinely occurring” challenges that best prepare a sluggish organization for unexpected challenges? We address this question with a stylized principal-agent model. The “agent” represents a sluggish organization that can only change its capability by one unit at a time, and the “principal” represents the organization’s head or its competitive environment. The principal commits ex ante to a Markov process over challenge levels. We characterize the process that maximizes long-run capability for both myopic and arbitrarily patient agents. We show how stochastic, time-varying challenges dramatically improve a sluggish organization’s preparedness for sudden challenges. This paper was accepted by Joshua Gans, business strategy.
{"title":"Capability Building in Sluggish Organizations","authors":"K. Eliaz, R. Spiegler","doi":"10.1287/mnsc.2022.4445","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4445","url":null,"abstract":"In order to thrive, organizations need to build and maintain an ability to meet unexpected external challenges. Yet many organizations are sluggish: their capabilities can only undergo incremental changes over time. What are the stochastic processes governing “routinely occurring” challenges that best prepare a sluggish organization for unexpected challenges? We address this question with a stylized principal-agent model. The “agent” represents a sluggish organization that can only change its capability by one unit at a time, and the “principal” represents the organization’s head or its competitive environment. The principal commits ex ante to a Markov process over challenge levels. We characterize the process that maximizes long-run capability for both myopic and arbitrarily patient agents. We show how stochastic, time-varying challenges dramatically improve a sluggish organization’s preparedness for sudden challenges. This paper was accepted by Joshua Gans, business strategy.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"8 1","pages":"1703-1713"},"PeriodicalIF":0.0,"publicationDate":"2022-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74468366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We examine the financial reporting quality of special purpose acquisition corporations (SPACs) following a successful merger. We compare a sample of SPACs with completed mergers from 2006 to 2020 to initial public offering (IPO) firms in the same industry covering the same period. Compared with similar IPO firms, SPACs are more likely to restate their financial statements and have internal control weaknesses. We also find that SPACs are more likely to file untimely financial statements, amend previously issued filings, and have comment letters that go more rounds with the Securities and Exchange Commission. This lower reporting quality also results in less informative earnings to investors. Our evidence corroborates concerns from the media, accounting firms, and regulators that SPACs exhibit low financial reporting quality in comparison with IPOs. This paper was accepted by Suraj Srinivasan, accounting.
{"title":"Not Ready for Prime Time: Financial Reporting Quality After SPAC Mergers","authors":"Jaewoo Kim, Seyoung Park, Kyle Peterson, WilsonRyan","doi":"10.2139/ssrn.4079131","DOIUrl":"https://doi.org/10.2139/ssrn.4079131","url":null,"abstract":"We examine the financial reporting quality of special purpose acquisition corporations (SPACs) following a successful merger. We compare a sample of SPACs with completed mergers from 2006 to 2020 to initial public offering (IPO) firms in the same industry covering the same period. Compared with similar IPO firms, SPACs are more likely to restate their financial statements and have internal control weaknesses. We also find that SPACs are more likely to file untimely financial statements, amend previously issued filings, and have comment letters that go more rounds with the Securities and Exchange Commission. This lower reporting quality also results in less informative earnings to investors. Our evidence corroborates concerns from the media, accounting firms, and regulators that SPACs exhibit low financial reporting quality in comparison with IPOs. This paper was accepted by Suraj Srinivasan, accounting.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"38 1","pages":"7054-7064"},"PeriodicalIF":0.0,"publicationDate":"2022-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75696981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We develop a game-theoretic model to study the incentive for competing manufacturers to share supplier audit information. Based on the audit information, each manufacturer decides whether to source from a common supplier who has uncertain responsibility violation risk or to switch to a backup supplier who has no responsibility violation risk but charges a higher price. When supplier responsibility violation occurs, some consumers boycott the manufacturers involved. Audit information allows a manufacturer to reduce the uncertainty about the risk of the common supplier. We show that audit information sharing may make the manufacturers’ sourcing strategies more or less differentiated. As a result, the information-sharing decision is not monotone in the model parameters. We fully characterize the manufacturers’ equilibrium audit information-sharing and sourcing decisions and establish conditions under which audit information sharing induces the manufacturers to adopt more or less responsible sourcing strategies. We also show that a manufacturer could be better off when the cost premium of sourcing from the backup supplier or the risk of the common supplier becomes higher or the audit information becomes less accurate. We consider several extensions of the base model and demonstrate that the main insights remain mostly valid. This paper was accepted by Charles Corbett, operations management.
{"title":"Supplier Audit Information Sharing and Responsible Sourcing","authors":"Albert Y. Ha, Weixin Shang, Yunjie Wang","doi":"10.1287/mnsc.2022.4358","DOIUrl":"https://doi.org/10.1287/mnsc.2022.4358","url":null,"abstract":"We develop a game-theoretic model to study the incentive for competing manufacturers to share supplier audit information. Based on the audit information, each manufacturer decides whether to source from a common supplier who has uncertain responsibility violation risk or to switch to a backup supplier who has no responsibility violation risk but charges a higher price. When supplier responsibility violation occurs, some consumers boycott the manufacturers involved. Audit information allows a manufacturer to reduce the uncertainty about the risk of the common supplier. We show that audit information sharing may make the manufacturers’ sourcing strategies more or less differentiated. As a result, the information-sharing decision is not monotone in the model parameters. We fully characterize the manufacturers’ equilibrium audit information-sharing and sourcing decisions and establish conditions under which audit information sharing induces the manufacturers to adopt more or less responsible sourcing strategies. We also show that a manufacturer could be better off when the cost premium of sourcing from the backup supplier or the risk of the common supplier becomes higher or the audit information becomes less accurate. We consider several extensions of the base model and demonstrate that the main insights remain mostly valid. This paper was accepted by Charles Corbett, operations management.","PeriodicalId":18208,"journal":{"name":"Manag. Sci.","volume":"24 1","pages":"308-324"},"PeriodicalIF":0.0,"publicationDate":"2022-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91539715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}