Stephanie Kelley, Anton Ovchinnikov, D. Hardoon, Adrienne Heinrich
Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws and their corresponding data management and model-building regimes on gender-based discrimination in the nonmortgage fintech lending setting. Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are used with high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes whilst managing profitability. Methodology: We train statistical and ML models on a large and realistically rich publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination. Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. We observe that ML models are less discriminatory, of better predictive quality, and more profitable compared with traditional statistical models like logistic regression. Unlike omitted variable bias—which drives discrimination in statistical models—ML discrimination is driven by changes in the model training procedure, including feature engineering and feature selection, when gender is excluded. We observe that down sampling the training data to rebalance gender, gender-aware hyperparameter selection, and up sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with negligible impact on predictive quality and a slight increase in firm profitability. Managerial implications: A rethink is required of the antidiscrimination laws, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at minimum, measure discrimination and ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.
{"title":"Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending","authors":"Stephanie Kelley, Anton Ovchinnikov, D. Hardoon, Adrienne Heinrich","doi":"10.1287/msom.2022.1108","DOIUrl":"https://doi.org/10.1287/msom.2022.1108","url":null,"abstract":"Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws and their corresponding data management and model-building regimes on gender-based discrimination in the nonmortgage fintech lending setting. Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations faced by fintech lenders in practice, where advanced machine learning (ML) techniques are used with high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes whilst managing profitability. Methodology: We train statistical and ML models on a large and realistically rich publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination. Results: We find that regimes that prohibit the use of gender (like those in the United States) substantially increase discrimination and slightly decrease firm profitability. We observe that ML models are less discriminatory, of better predictive quality, and more profitable compared with traditional statistical models like logistic regression. Unlike omitted variable bias—which drives discrimination in statistical models—ML discrimination is driven by changes in the model training procedure, including feature engineering and feature selection, when gender is excluded. We observe that down sampling the training data to rebalance gender, gender-aware hyperparameter selection, and up sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with negligible impact on predictive quality and a slight increase in firm profitability. Managerial implications: A rethink is required of the antidiscrimination laws, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at minimum, measure discrimination and ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"54 3 1","pages":"3039-3059"},"PeriodicalIF":0.0,"publicationDate":"2022-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83233835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
W. Mao, Liu Ming, Ying Rong, Christopher S. Tang, Huan Zheng
This paper describes the operations of most on-demand meal delivery platforms and discusses how empirical research can improve the operational performance of these platforms. To support and encourage more studies on the operations of on-demand delivery platforms, we provide a unique data set obtained from a meal delivery platform in China. This data set contains operational level data sampled from July 1 to August 31, 2015, in Hangzhou, China. The data set includes information about order placements, order deliveries, restaurants, drivers, weather and traffic conditions, and so on. We also review recent studies on meal delivery platforms and suggest research opportunities for improving delivery performance.
{"title":"On-Demand Meal Delivery Platforms: Operational Level Data and Research Opportunities","authors":"W. Mao, Liu Ming, Ying Rong, Christopher S. Tang, Huan Zheng","doi":"10.1287/msom.2022.1112","DOIUrl":"https://doi.org/10.1287/msom.2022.1112","url":null,"abstract":"This paper describes the operations of most on-demand meal delivery platforms and discusses how empirical research can improve the operational performance of these platforms. To support and encourage more studies on the operations of on-demand delivery platforms, we provide a unique data set obtained from a meal delivery platform in China. This data set contains operational level data sampled from July 1 to August 31, 2015, in Hangzhou, China. The data set includes information about order placements, order deliveries, restaurants, drivers, weather and traffic conditions, and so on. We also review recent studies on meal delivery platforms and suggest research opportunities for improving delivery performance.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"21 1","pages":"2535-2542"},"PeriodicalIF":0.0,"publicationDate":"2022-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86870982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: Although the analytical literature extensively studies distribution channels, empirical evidence on the value of omnichannel distribution is limited, especially for the omnichannel distribution used by manufacturing companies to fulfill retail orders. I empirically evaluate the extent to which the dual–distribution center (dual-DC) distribution channel and the factory-direct distribution channel contribute to fulfilling orders from retail stores compared with the traditional single–distribution center (single-DC) channel. Academic/practical relevance: Many manufacturing companies develop omnichannel distribution to fulfill retail orders. To make proper decisions on various channels, they need to understand the trade-offs between different order-fulfillment measures and costs in different distribution channels. Methodology: I exploit two switches in a manufacturing company’s distribution channels to its retail customers: one from single-DC to dual-DC distribution and the other from single-DC to factory-direct distribution. To account for the trade-offs between order fulfillment and the costs associated with each distribution channel, I develop three equations for fill rate, lead time, and production and distribution costs in a difference-in-differences framework and then estimate the equations using proprietary data on retail orders and delivery records. Results: The results quantify the contributions of distribution channels to order fulfillment. Compared with the single-DC distribution channel, the dual-DC distribution channel raises the fill rate by 0.4% and reduces the lead time by 9.7% without incurring additional costs, whereas the factory-direct distribution channel increases the fill rate by 0.5% and provides a 5.2% cost savings but extends the lead time by 12.5%. I further analyze these contributions to order fulfillment across demand variability and order quantity. Managerial implications: The findings provide manufacturing companies with valuable knowledge of their distribution channel choices and means to find a cost-effective distribution channel to improve order fulfillment for various customers and products.
{"title":"Omnichannel Distribution to Fulfill Retail Orders","authors":"X. Wan","doi":"10.1287/msom.2022.1104","DOIUrl":"https://doi.org/10.1287/msom.2022.1104","url":null,"abstract":"Problem definition: Although the analytical literature extensively studies distribution channels, empirical evidence on the value of omnichannel distribution is limited, especially for the omnichannel used by manufacturing companies to fulfill retail orders. I empirically evaluate the extent to which the dual–distribution center (dual-DC) distribution channel and the factory-direct distribution channel contribute to fulfilling orders from retail stores compared with the traditional single–distribution center (single-DC) channel. Academic/practical relevance: Many manufacturing companies develop their distribution omnichannel to fulfill retail orders. To make proper decisions on various channels, they need to understand the trade-offs between different order-fulfillment measures and costs in different distribution channels. Methodology: I exploit two switches in distribution channels of a manufacturing company to its retail customers: one from single-DC to dual-DC distribution and the other from single-DC to factory-direct distribution. To account for the trade-offs between order fulfillment and the costs associated with each distribution channel, I develop three equations for fill rate, lead time, and production and distribution costs in a difference-in-difference framework and then estimate the equations using proprietary data of retail orders and delivery records. Results: The results quantify the contributions of distribution channels to order fulfillment. Compared with the single-DC distribution channel, the dual-DC distribution channel raises the fill rate by 0.4% and reduces the lead time by 9.7% without incurring additional costs, whereas the factory-direct distribution channel increases the fill rate by 0.5% and provides a 5.2% cost savings but extends the lead time by 12.5%. I further analyze these contributions to order fulfillment across demand variability and order quantity. Managerial implications: The findings provide manufacturing companies with valuable knowledge of their distribution channel choices and means to find a cost-effective distribution channel to improve order fulfillment for various customers and products.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"73 1","pages":"2150-2165"},"PeriodicalIF":0.0,"publicationDate":"2022-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86368045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: Cash transfer programs (CTPs) have spread in the last decade to help fight extreme poverty in different parts of the world. A key issue here is to ensure that the cash is distributed to the targeted beneficiaries in an appropriate manner to meet the goals of the programs. How do we design efficient and egalitarian allocation rules for these programs? Academic/practical relevance: Big data and machine learning have been used recently by several CTPs to target the right beneficiaries (those living in extreme poverty). We demonstrate how these targeting methods can be integrated into the cash allocation problem to synthesize the impact of targeting errors on the design of the allocation rules. In particular, when the targeting errors are “well calibrated,” a simple predictive allocation rule is already optimal. Finally, although we only focus on the problem of poverty reduction (efficiency), the optimality conditions ensure that these allocation rules provide a common ex ante service guarantee for each beneficiary in the allocation outcome (egalitarian). Methodology: We design allocation rules to minimize a key indicator in poverty reduction—the squared gap of the shortfall between the income/consumption and the poverty line. The rules differ in how the targeting error distribution is being utilized. Robust and online convex optimization are applied for the analysis. We also modify our allocation rules to ensure that the cash is spread more evenly across the pool of beneficiaries to reduce the (potential) negative effect on nonbeneficiary households living close to the poverty line but missing the benefits of the CTPs because of imperfect targeting. Results: Given a targeting method, we compare and contrast the performance of different allocation rules—predictive, stochastic, and robust. We derive closed-form solutions to predictive and stochastic allocation models and use robust allocation to mitigate the negative impact of imperfect targeting. Moreover, we show that the robust allocation decision can be efficiently computed using online convex optimization. Managerial implications: Using real data from a CTP in Malawi, we demonstrate how a suitable choice of allocation rule can improve both the efficiency and egalitarian objectives of the CTP. The technique can be suitably modified to ensure that the wealth distribution after allocation is “smoother,” reducing the bunching effect that may be undesirable in some circumstances.
{"title":"From Targeting to Transfer: Design of Allocation Rules in Cash Transfer Programs","authors":"Huan Zheng, Guodong Lyu, Jiannan Ke, C. Teo","doi":"10.1287/msom.2022.1101","DOIUrl":"https://doi.org/10.1287/msom.2022.1101","url":null,"abstract":"Problem definition: Cash transfer programs (CTPs) have spread in the last decade to help fight extreme poverty in different parts of the world. A key issue here is to ensure that the cash is distributed to the targeted beneficiaries in an appropriate manner to meet the goals of the programs. How do we design efficient and egalitarian allocation rules for these programs? Academic/practical relevance: Big data and machine learning have been used recently by several CTPs to target the right beneficiaries (those living in extreme poverty). We demonstrate how these targeting methods can be integrated into the cash allocation problem to synthesize the impact of targeting errors on the design of the allocation rules. In particular, when the targeting errors are “well calibrated,” a simple predictive allocation rule is already optimal. Finally, although we only focus on the problem of poverty reduction (efficiency), the optimality conditions ensure that these allocation rules provide a common ex ante service guarantee for each beneficiary in the allocation outcome (egalitarian). Methodology: We design allocation rules to minimize a key indicator in poverty reduction—the squared gap of the shortfall between the income/consumption and the poverty line. The rules differ in how the targeting error distribution is being utilized. Robust and online convex optimization are applied for the analysis. We also modify our allocation rules to ensure that the cash is spread more evenly across the pool of beneficiaries to reduce the (potential) negative effect on nonbeneficiary households living close to the poverty line but missing the benefits of the CTPs because of imperfect targeting. Results: Given a targeting method, we compare and contrast the performance of different allocation rules—predictive, stochastic, and robust. We derive closed-form solutions to predictive and stochastic allocation models and use robust allocation to mitigate the negative impact of imperfect targeting. Moreover, we show that the robust allocation decision can be efficiently computed using online convex optimization. Managerial implications: Using real data from a CTP in Malawi, we demonstrate how a suitable choice of allocation rule can improve both the efficiency and egalitarian objectives of the CTP. The technique can be suitably modified to ensure that the wealth distribution after allocation is “smoother,” reducing the bunching effect that may be undesirable in some circumstances.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"60 1","pages":"2901-2924"},"PeriodicalIF":0.0,"publicationDate":"2022-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84881187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Xiaolin Wang, Yuanguang Zhong, Lishuai Li, Wei Xie, Z. Ye
Problem definition: Warranty reserves are funds used to fulfill future warranty obligations for a product. In this paper, we investigate the warranty reserve planning problem faced by a manufacturing firm that manages warranties for multiple products. Academic/practical relevance: It is nontrivial to determine a proper amount of reserves to hold, because warranty expenditures are random in nature and reserving either excess or insufficient cash would incur losses. How warranty reserve levels can be optimized and promptly adjusted is a focal issue, especially for firms selling multiple products. Methodology: Inspired by the general pattern of empirical warranty claims data, we first develop an aggregate warranty cost (AWC) forecasting model for a single product by coupling stochastic product sales and failure processes, which can be used to plan for warranty reserves periodically. The reserve levels are then optimized via a distributionally robust approach, because the exact distribution of AWC is generally unknown. To reduce the losses generated from managing the funds, we further investigate two potential loss-reduction approaches: demand learning and funds pooling. Results: For the demand learning algorithm, we prove that, as the sales period grows, the optimal learning parameter asymptotically converges to a constant at a fairly fast rate; our simulation experiments show that the performance of demand learning is promising and robust under general warranty claim patterns. Moreover, we find that the benefits of funds pooling change over different stages of the warranty life cycle; in particular, the relative pooling benefit in terms of reserve losses is nonincreasing over time. Managerial implications: This study offers guidelines on how manufacturers should adaptively forecast and dynamically plan warranty reserves over the warranty life cycle.
{"title":"Warranty Reserve Management: Demand Learning and Funds Pooling","authors":"Xiaolin Wang, Yuanguang Zhong, Lishuai Li, Wei Xie, Z. Ye","doi":"10.1287/msom.2022.1086","DOIUrl":"https://doi.org/10.1287/msom.2022.1086","url":null,"abstract":"Problem definition: Warranty reserves are funds used to fulfill future warranty obligations for a product. In this paper, we investigate the warranty reserve planning problem faced by a manufacturing firm who manages warranties for multiple products. Academic/practical relevance: It is nontrivial to determine a proper amount of reserves to hold, because warranty expenditures are random in nature and reserving either excess or insufficient cash would incur losses. How can warranty reserve levels be optimized and promptly adjusted is a focal issue, especially for firms selling multiple products. Methodology: Inspired by the general pattern of empirical warranty claims data, we first develop an aggregate warranty cost (AWC) forecasting model for a single product by coupling stochastic product sales and failure processes, which can be used to plan for warranty reserves periodically. The reserve levels are then optimized via a distributionally robust approach, because the exact distribution of AWC is generally unknown. To reduce the losses generated from managing the funds, we further investigate two potential loss-reduction approaches: demand learning and funds pooling. Results: For the demand learning algorithm, we prove that, as the sales period grows, the optimal learning parameter asymptotically converges to a constant in a fairly fast rate; our simulation experiments show that the performance of demand learning is promising and robust under general warranty claim patterns. Moreover, we find that the benefits of funds pooling change over different stages of the warranty life cycle; in particular, the relative pooling benefit in terms of reserve losses is nonincreasing over time. Managerial implications: This study offers guidelines on how manufacturers should adaptively forecast and dynamically plan warranty reserves over the warranty life cycle.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"79 1","pages":"2221-2239"},"PeriodicalIF":0.0,"publicationDate":"2022-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82585549","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: Complementary sourcing, with which a product depends on both a supplier’s and a manufacturer’s engineering and production efforts, is ubiquitous in modern supply chains. A unique feature of complementary sourcing is that efforts by one party enhance the marginal value of the other party’s efforts. Whereas this positive spillover effect can benefit both parties, it is well-established in the literature that it paradoxically induces a first-mover disadvantage; neither party is willing to exert efforts ex ante, resulting in significant lost opportunities for improving sourcing performance. The question we consider in this paper is whether the first-mover disadvantage is a valid concern in more realistic sourcing environments in which the market is risky and price is endogenous. Methodology/results: We analyze a sequential-investment model and investigate how market risk and endogenous pricing affect the first-mover disadvantage. In the presence of market risk, the first mover may face greater market uncertainty than the second mover and, thus, is at an apparent disadvantage. Surprisingly, we find the introduction of market risk can favor the first mover. In effect, the presence of market risk weakens the second mover’s ability to free ride on the first mover’s investment, which increases the leverage of the first mover. This finding persists with exogenous pricing even if the first mover has weak power. Managerial implications: Our results suggest that the first-mover disadvantage identified in the extant literature ignores the operational aspect of practical sourcing environments, and sourcing managers should recognize that advance effort investment is often beneficial in more realistic complementary sourcing environments.
{"title":"Investment Efforts Under Complementary Sourcing: The Role of Market Risk and Endogenous Pricing","authors":"Yimin Wang, Rui Yin, Xiangjing Chen, S. Webster","doi":"10.1287/msom.2022.1096","DOIUrl":"https://doi.org/10.1287/msom.2022.1096","url":null,"abstract":"Problem definition: Complementary sourcing, with which a product depends on both a supplier’s and a manufacturer’s engineering and production efforts, is ubiquitous in modern supply chains. A unique feature of complementary sourcing is that efforts by one party enhance the marginal value of the other party’s efforts. Whereas this positive spillover effect can benefit both parties, it is well-established in the literature that it paradoxically induces a first-mover disadvantage; neither party is willing to exert efforts ex ante, resulting in significant lost opportunities for improving sourcing performance. The question we consider in this paper is whether the first-mover disadvantage is a valid concern in more realistic sourcing environments in which the market is risky and price is endogenous. Methodology/results: We analyze a sequential-investment model and investigate how market risk and endogenous pricing affect the first-mover disadvantage. In the presence of market risk, the first mover may face greater market uncertainty than the second mover and, thus, is at an apparent disadvantage. Surprisingly, we find the introduction of market risk can favor the first mover. In effect, the presence of market risk weakens the second mover’s ability to free ride on the first mover’s investment, which increases the leverage of the first mover. This finding persists with exogenous pricing even if the first mover has weak power. Managerial implications: Our results suggest that the first-mover disadvantage identified in the extant literature ignores the operational aspect of practical sourcing environments, and sourcing managers should recognize that advance effort investment is often beneficial in more realistic complementary sourcing environments.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"14 1","pages":"2595-2610"},"PeriodicalIF":0.0,"publicationDate":"2022-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84874317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: Condition monitoring (CM) of durable assets, whereby sensors continuously monitor the health of an asset, is heralded as a key application of the Internet of Things. However, questions about ownership of the sensor data are seen as a key barrier to adoption. We model an after-sales supply chain in which the asset manufacturer provides maintenance and repair services to a customer that operates the asset. The asset condition deteriorates in a stochastic fashion and will eventually fail if not repaired. Methodology/results: We analyze a performance-based contracting problem considering manufacturer maintenance effort (the condition at which preventive maintenance is performed) and customer operating effort (which reduces the rate of condition deterioration). With information asymmetry on the customer’s effort cost, we analyze this contracting problem in a principal-agent model with double moral hazard. In the centralized setting, we establish that the benefit of CM increases and then decreases in the asset’s deterioration rate and that CM may increase or decrease the benefit of customer effort depending on the deterioration rate. In the decentralized setting, we prove that CM always benefits the manufacturer and the supply chain, but it may hurt the customer if the asset reliability is sufficiently high. Managerial implications: These results have important implications for the effect of sensor-data ownership. The manufacturer will adopt CM if it owns the data, but the customer may block CM adoption if it owns the data. We show that this CM adoption barrier can be overcome by the manufacturer offering to pay an appropriate data access fee. However, under this arrangement, the manufacturer may not benefit from a more-effective customer operating effort. We discuss the resulting implication on the manufacturer’s product design and the business model between selling and leasing.
{"title":"After-Sales Service Contracting: Condition Monitoring and Data Ownership","authors":"Cuihong Li, Brian Tomlin","doi":"10.1287/msom.2022.1095","DOIUrl":"https://doi.org/10.1287/msom.2022.1095","url":null,"abstract":"Problem definition: Condition monitoring (CM) of durable assets, whereby sensors continuously monitor the health of an asset, is heralded as a key application of the Internet of Things. However, questions about ownership of the sensor data are seen as a key barrier to adoption. We model an after-sales supply chain in which the asset manufacturer provides maintenance and repair services to a customer that operates the asset. The asset condition deteriorates in a stochastic fashion and will eventually fail if not repaired. Methodology/results: We analyze a performance-based contracting problem considering manufacturer maintenance effort (the condition at which preventive maintenance is performed) and customer operating effort (which reduces the rate of condition deterioration). With information asymmetry on the customer’s effort cost, we analyze this contracting problem in a principal-agent model with double moral hazard. In the centralized setting, we establish that the benefit of CM increases and then decreases in the asset’s deterioration rate and that CM may increase or decrease the benefit of customer effort depending on the deterioration rate. In the decentralized setting, we prove that CM always benefits the manufacturer and the supply chain, but it may hurt the customer if the asset reliability is sufficiently high. Managerial implications: These results have important implications for the effect of sensor-data ownership. The manufacturer will adopt CM if it owns the data, but the customer may block CM adoption if it owns the data. We show that this CM adoption barrier can be overcome by the manufacturer offering to pay an appropriate data access fee. However, under this arrangement, the manufacturer may not benefit from a more-effective customer operating effort. We discuss the resulting implication on the manufacturer’s product design and the business model between selling and leasing.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"3 1","pages":"1494-1510"},"PeriodicalIF":0.0,"publicationDate":"2022-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88843467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: Diabetes is a highly prevalent and expensive chronic disease that affects millions of Americans and is associated with multiple comorbidities. Clinical research has found long-term variation in a patient’s glycated hemoglobin levels to be linked with adverse health outcomes, such as increased hospitalizations. Consequently, there is a need for innovative approaches to reduce long-term glycemic variability and efficient ways to implement them. Academic/practical relevance: Although the operations literature has extensively explored ways to manage variability across patients, relatively little attention has been paid to within-patient variability. We draw on the management and healthcare literatures to hypothesize and then show that a key operational lever—continuity of care (CoC)—can be used to reduce glycemic variability, which in turn, improves patient health. In the process, we explore the moderating role of a key demographic characteristic: patient’s marital status. We also shed light on an important mechanism through which CoC reduces variability—adherence of patients to prescribed medications—thereby advancing the compliance literature. Academically, our study adds to the understanding of the importance of managing variability (via continuity in service) in settings where customers repeatedly interact with service providers. Methodology: We use a detailed and comprehensive data set from the Veterans Health Administration, the largest integrated healthcare delivery system in the United States. This permits us to control for potential sources of heterogeneity. We analyze more than 300,000 patients—over an 11-year period—with diabetes, a chronic disease whose successful management requires managing glycemic variability. We use an empirical approach to, first, quantify the relationship between CoC and glycemic variability and second, show how this relationship differs based on patient’s marital status. Third, we estimate the mediation effect of patients’ adherence to medications. Finally, we quantify how glycemic variability mediates the relationship between CoC and three important outcomes. Our findings are validated by extensive robustness checks and sensitivity analyses. Results: We find that CoC is related to reductions in glycemic variability, more so for patients who are not married. However, this reduction is not linear in continuity; we find evidence of curvilinearity but with a sufficiently high stationary point so that benefits almost always accrue, albeit at a diminishing rate. Additionally, we find that one mechanism through which CoC may reduce variability is through patients’ adherence to medications. We also find evidence of partial mediation for glycemic variability in the CoC outcomes process chain. Our counterfactual analysis reveals the extent of improvement that enhanced continuity can bring, depending on where it is targeted. Managerial implications: Identifying the process measures through which continui
{"title":"An Operations Approach for Reducing Glycemic Variability: Evidence from a Primary Care Setting","authors":"V. Ahuja, Carlos A. Alvarez, B. Staats","doi":"10.1287/msom.2022.1089","DOIUrl":"https://doi.org/10.1287/msom.2022.1089","url":null,"abstract":"Problem definition: Diabetes is a highly prevalent and expensive chronic disease that affects millions of Americans and is associated with multiple comorbidities. Clinical research has found long-term variation in a patient’s glycated hemoglobin levels to be linked with adverse health outcomes, such as increased hospitalizations. Consequently, there is a need for innovative approaches to reduce long-term glycemic variability and efficient ways to implement them. Academic/practical relevance: Although the operations literature has extensively explored ways to manage variability across patients, relatively little attention has been paid to within-patient variability. We draw on the management and healthcare literatures to hypothesize and then show that a key operational lever—continuity of care (CoC)—can be used to reduce glycemic variability, which in turn, improves patient health. In the process, we explore the moderating role of a key demographic characteristic: patient’s marital status. We also shed light on an important mechanism through which CoC reduces variability—adherence of patients to prescribed medications—thereby advancing the compliance literature. Academically, our study adds to the understanding of the importance of managing variability (via continuity in service) in settings where customers repeatedly interact with service providers. Methodology: We use a detailed and comprehensive data set from the Veterans Health Administration, the largest integrated healthcare delivery system in the United States. This permits us to control for potential sources of heterogeneity. We analyze more than 300,000 patients—over an 11-year period—with diabetes, a chronic disease whose successful management requires managing glycemic variability. We use an empirical approach to, first, quantify the relationship between CoC and glycemic variability and second, show how this relationship differs based on patient’s marital status. Third, we estimate the mediation effect of patients’ adherence to medications. Finally, we quantify how glycemic variability mediates the relationship between CoC and three important outcomes. Our findings are validated by extensive robustness checks and sensitivity analyses. Results: We find that CoC is related to reductions in glycemic variability, more so for patients who are not married. However, this reduction is not linear in continuity; we find evidence of curvilinearity but with a sufficiently high stationary point so that benefits almost always accrue, albeit at a diminishing rate. Additionally, we find that one mechanism through which CoC may reduce variability is through patients’ adherence to medications. We also find evidence of partial mediation for glycemic variability in the CoC outcomes process chain. Our counterfactual analysis reveals the extent of improvement that enhanced continuity can bring, depending on where it is targeted. Managerial implications: Identifying the process measures through which continui","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. 
Manag.","volume":"93 1","pages":"1474-1493"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74226668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: We study the combined value of observing future demand realizations (partial demand visibility) and flexible capacity, two hedging mechanisms against demand uncertainty, when signing capacity contracts with short temporal commitment. Academic/practical relevance: With new technological innovations, short commitment contracts are found in dynamic environments like distribution, processing, and manufacturing, a trend likely to grow in the future. In contrast to classic procurement, where commitments are long, short commitments lead to new dynamics in which demand visibility allows companies to use flexible resources more efficiently by adapting to demand observations. Methodology: We incorporate flexible capacity and demand visibility simultaneously using a multiperiod newsvendor network model with two nodes that are supplied using dedicated and flexible capacity contracts with short temporal commitment. Results: The optimal commitment to capacity contracts adapts within bounds to the observed demand at each node. The ability to adapt to visible demand becomes more valuable when flexible capacity contracts are available. This allows us to show that demand visibility and flexible capacity can act as complements. Managerial implications: In contrast to conventional wisdom, when contracts have short commitment, companies can enhance the value of demand visibility if flexible capacity is also available as an option.
{"title":"The Value of Information and Flexibility with Temporal Commitments","authors":"Pol Boada-Collado, S. Chopra, K. Smilowitz","doi":"10.1287/msom.2022.1090","DOIUrl":"https://doi.org/10.1287/msom.2022.1090","url":null,"abstract":"Problem definition: We study the combined value of observing future demand realizations (partial demand visibility) and flexible capacity, two hedging mechanisms against demand uncertainty, when signing capacity contracts with short temporal commitment. Academic/practical relevance: With new technological innovations, short commitment contracts are found in dynamic environments like distribution, processing, and manufacturing, a trend likely to grow in the future. In contrast to classic procurement, where commitments are long, short commitments lead to new dynamics in which demand visibility allows companies to use flexible resources more efficiently by adapting to demand observations. Methodology: We incorporate flexible capacity and demand visibility simultaneously using a multiperiod newsvendor network model with two nodes that are supplied using dedicated and flexible capacity contracts with short temporal commitment. Results: The optimal commitment to capacity contracts adapts within bounds to the observed demand at each node. The ability to adapt to visible demand becomes more valuable when flexible capacity contracts are available. This allows us to show that demand visibility and flexible capacity can act as complements. Managerial implications: In contrast to conventional wisdom, when contracts have short commitment, companies can enhance the value of demand visibility if flexible capacity is also available as an option.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"17 1","pages":"2098-2115"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90206394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Problem definition: There is a concerted effort across multiple academic disciplines to understand the recall decision-making process. Specifically, what steps does a manufacturer take between discovering a product defect and making the product recall decision? This effort has often been limited to case studies within a particular manufacturer, largely due to the absence of consistent and comparable data across firms. Methodology/results: This data paper provides a foundation for future research on recall decisions by processing and coding textual disclosures on 2,120 recalls initiated in the United States by 27 automobile manufacturers from 2009 to 2018. For each recall, the data set provides the time the firm took to make the recall decision (by comparing the defect awareness date to the recall decision date), whether the recall was associated with a supplier, the number of events in the recall decision-making process, and the date and description of each event. Managerial implications: These data can enhance product recall research by providing key recall decision-making variables unavailable in related research. An additional indication of the value of our data set comes from the National Highway Traffic Safety Administration (NHTSA), the automobile regulator in the United States. We held discussions with a senior leader at the NHTSA’s Recall Management Division related to this data set. These discussions revealed that the NHTSA does not have these data in an analyzable form and that it might be interested in using our data set for its reports, such as the NHTSA’s biennial reports to the U.S. Congress. This signal suggests that regulators, as well as researchers, practitioners, and other safety advocates, may find our data set useful.
{"title":"The Recall Decision Exposed: Automobile Recall Timing and Process Data Set","authors":"Vivek Astvansh, George P. Ball, Matthew A. Josefy","doi":"10.1287/msom.2022.1085","DOIUrl":"https://doi.org/10.1287/msom.2022.1085","url":null,"abstract":"Problem definition: There is a concerted effort across multiple academic disciplines to understand the recall decision-making process. Specifically, what steps does a manufacturer take following a product defect discovery and resulting in the product recall decision? This effort has often been limited to case studies within a particular manufacturer, largely due to the absence of consistent and comparable data across firms. Methodology/results: This data paper provides a foundation for future research on recall decisions by processing and coding textual disclosures on 2,120 recalls initiated in the United States by 27 automobile manufacturers from 2009 to 2018. For each recall, the data set provides the time the firm took to make the recall decision by comparing the defect awareness date to the recall decision date, whether the recall was associated with a supplier, the number of events in the recall decision-making process, and the date and description of each event. Managerial implications: Not only can these data enhance product recall research by providing key recall decision-making variables unavailable in related research, but an additional indication of the value of our data set also comes from National Highway Traffic Safety Administration (NHTSA), the automobile regulator in the United States. We held discussions with a senior leader at the NHTSA’s Recall Management Division related to this data set. This discussion revealed that the NHTSA does not have these data in an analyzable form and that they might be interested in using our data set for their reports, such as the NHTSA’s biennial reports to the U.S. Congress. This signal suggests that regulators, as well as researchers, practitioners, and other safety advocates, may find our data set useful.","PeriodicalId":18108,"journal":{"name":"Manuf. Serv. Oper. Manag.","volume":"87 1","pages":"1457-1473"},"PeriodicalIF":0.0,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84877722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}