Huan Cao, Nicholas G. Hall, Guohua Wan, Wenhui Zhao
Problem definition: Intraproject learning in project scheduling involves the use of learning among similar tasks in a project to improve the overall performance of the project schedule. Under intraproject learning, knowledge gained from completing some tasks in a project is used to execute similar later tasks in the same project more efficiently. We provide the first model and solution algorithms to address this intraproject learning problem. Academic/practical relevance: Intraproject learning is possible when, for example, the difficulty of the tasks becomes better understood or the efficiency of the resources used becomes better known. Hence, it is necessary to explore the potential of intraproject learning to further improve the performance of project scheduling. Because learning consumes time, firms may underinvest in intraproject learning if they do not recognize its value. Although the project scheduling literature discusses the potential value of information obtained from learning within the same project, ours is the first work to formally model and optimize the use of intraproject learning in project scheduling. Methodology/results: We model the tradeoff between investing time in learning from completed tasks and achieving reduced durations for subsequent tasks to minimize the total project cost. We show that this problem is intractable. We develop a heuristic that finds near-optimal solutions and a strong relaxation that allows some learning from partially completed tasks. Our computational study identifies the project characteristics for which intraproject learning is most worthwhile. In doing so, it motivates project managers to understand and apply intraproject learning to improve the performance of their projects. A real case is provided by a problem of the Consumer Business Group of Huawei Corporation, for which our model and algorithm provide a greater than 20% improvement in project duration.
Managerial implications: We find consistent evidence that projects in general can benefit substantially from intraproject learning, and larger projects benefit more. Our computational studies provide the following insights. First, the benefit from learning varies with the features of the project network; projects with more complex networks possess greater potential benefit from intraproject learning and deserve more attention to learning opportunities. Second, noncritical tasks at an earlier project stage deserve more extensive learning investment. Third, tasks that are more similar (or have more similar processes) to later tasks also deserve more investment in learning, as do tasks with more successors, where the knowledge gained can be reused repeatedly. Funding: This work was supported by the National Natural Science Foundation of China [Grant 71732003 to N. G. Hall and Grants 72131010 and 72232001 to W. Zhao], the Shanghai Subject Chief Scientist Program [Grant 16XD1401700 to G. Wan], and the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning [Grant TP2022019 to W. Zhao]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0159.
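The tradeoff described in the abstract — spending time learning on an early task to shorten similar later tasks — can be sketched numerically. Everything below (the two-task serial project, the saturating learning-response function, and all parameter values) is an invented illustration, not the paper's model or algorithm.

```python
# Illustrative sketch (not the paper's model): a two-task serial project in
# which extra time invested in learning on task 1 shortens a similar task 2.

def total_cost(learning_time, base1=10.0, base2=10.0,
               similarity=0.6, unit_cost=1.0):
    """Project cost when `learning_time` extra hours are spent on task 1.

    Task 2's duration shrinks with diminishing returns, scaled by how
    similar the two tasks are (all functional forms are assumptions).
    """
    reduction = similarity * base2 * (1 - 1 / (1 + learning_time))
    duration = base1 + learning_time + (base2 - reduction)
    return unit_cost * duration

# Brute-force the learning-vs-duration tradeoff over a grid of investments.
grid = [i * 0.25 for i in range(41)]  # 0 .. 10 hours
best = min(grid, key=total_cost)
print(f"best learning time: {best:.2f} h, cost: {total_cost(best):.2f}")
```

Even in this toy, some learning investment strictly beats none, and too much investment eventually costs more than it saves — the tradeoff the paper optimizes over a full project network.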
Optimal Intraproject Learning. Manufacturing & Service Operations Management, October 10, 2023. https://doi.org/10.1287/msom.2022.0159
Ryan W. Buell, Kamalini Ramdas, Nazlı Sönmez, Kavitha Srinivasan, Rengaraj Venkatesh
Problem definition: Clients and service providers alike often consider one-on-one service delivery to be ideal, assuming, perhaps unquestioningly, that devoting individualized attention best improves client outcomes. In contrast, in shared service delivery, clients are served in batches, and the dynamics of group interaction could lead to increased client engagement, which could improve outcomes. However, the loss of privacy and personal connection might undermine engagement. The engagement dynamics in one-on-one and shared delivery models have not been rigorously studied. To the extent that shared delivery may result in comparable or better engagement than one-on-one delivery, service providers in a broad array of contexts may be able to create more value for clients by delivering service in batches. Methodology/results: We conducted a randomized controlled trial with 1,000 patients who were undergoing glaucoma treatment over a three-year period at a large eye hospital. Using verbatim and behavioral transcripts from more than 20,000 minutes of video recorded during our trial, we examine how shared medical appointments (SMAs), in which patients are served in batches, impact engagement. On average, a patient who experienced SMAs asked 33.3% more questions per minute and made 8.6% more nonquestion comments per minute. Because there were multiple patients in an SMA, this increase in engagement at the individual patient level resulted in patients hearing far more comments in the group setting. Patients in SMAs also exhibited higher levels of nonverbal engagement across a wide array of measures (attentiveness; positivity; head wobbling, or “thalai aattam” in Tamil, a South Indian gesture signaling agreement or understanding; eye contact; and end-of-appointment happiness), relative to patients who attended one-on-one appointments.
Managerial implications: These results shed light on the potential for shared service delivery models to increase client engagement and thus enhance service performance. Funding: This work was supported by the Wheeler Institute at London Business School (WIBAD Ramdas_Sonmez CFP19), the Institute of Entrepreneurship and Private Capital at London Business School (IIE_3432_2019), Aravind Eye Hospital, and Harvard Business School. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2021.0012.
Shared Service Delivery Can Increase Client Engagement: A Study of Shared Medical Appointments. Manufacturing & Service Operations Management, September 29, 2023. https://doi.org/10.1287/msom.2021.0012
Problem definition: In many resource scheduling problems for services with scheduled starting and completion times (e.g., airport gate assignment), a common approach is to maintain an appropriate buffer between successive services assigned to a common resource. With a large buffer, the chances of a “crossing” (i.e., a flight arriving later than the succeeding one at the gate) are significantly reduced. This approach is often preferred over more sophisticated stochastic mixed-integer programming methods that track the arrival of all the flights to infer the number of “conflicts” (i.e., a flight arriving at a time when the assigned gate becomes unavailable). We provide a theoretical explanation, from the perspective of robust optimization, for the good performance of the buffering approach in minimizing not only the number of crossings but also the number of conflicts in operations. Methodology/results: We show that the buffering method inherently minimizes the worst-case number of “conflicts” under both robust and distributionally robust optimization models using down-monotone uncertainty sets. Interestingly, under down-monotone properties, the worst-case number of crossings is identical to the worst-case number of conflicts. Using this equivalence, we demonstrate how feature information from flights and historical delay information can be used to enhance the effectiveness of the buffering method. Managerial implications: The paper provides the first theoretical justification for the use of the buffering method to control the number of conflicts in resource assignment problems. Funding: This work was supported by the 2019 Academic Research Fund Tier 3 of the Ministry of Education-Singapore [Grant MOE-2019-T3-1-010] and the Research Grants Council of Hong Kong SAR, China [Grant PolyU 152240/17E]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0572.
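The crossing-reduction effect of buffering can be illustrated with a small Monte Carlo simulation. The uniform delay distribution and all numbers below are assumptions for illustration only; the paper's analysis is worst-case over down-monotone uncertainty sets, not simulation.

```python
import random

def count_crossings(buffer_minutes, n_flights=20, trials=2000, seed=7):
    """Average number of 'crossings' per trial for flights scheduled
    back-to-back at one gate, `buffer_minutes` apart, with random delays
    (assumed uniform on [0, 30) minutes purely for illustration)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # actual arrival = scheduled slot start + random delay
        arrivals = [i * buffer_minutes + rng.uniform(0, 30)
                    for i in range(n_flights)]
        # a crossing: flight i arrives after the flight scheduled next
        total += sum(1 for i in range(n_flights - 1)
                     if arrivals[i] > arrivals[i + 1])
    return total / trials

print(count_crossings(5), count_crossings(30))
```

With a 30-minute buffer and delays bounded by 30 minutes, crossings become impossible; with a 5-minute buffer they are frequent — the tradeoff against schedule density is what the buffering approach manages.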
Buffer Times Between Scheduled Events in Resource Assignment Problem: A Conflict-Robust Perspective. Jinjia Huang, Chung-Piaw Teo, Fan Wang, Zhou Xu. Manufacturing & Service Operations Management, September 26, 2023. https://doi.org/10.1287/msom.2022.0572
Problem definition: Last-mile delivery is a critical component of logistics networks, accounting for approximately 30%–35% of costs. As delivery volumes have increased, truck route times have become unsustainably long. To address this issue, many logistics companies, including FedEx and UPS, have resorted to using a “driver aide” to assist with deliveries. The aide can assist the driver in two ways. As a “jumper,” the aide works with the driver in preparing and delivering packages, thus reducing the service time at a given stop. As a “helper,” the aide can independently work at a location delivering packages, and the driver can leave to deliver packages at other locations and then return. Given a set of delivery locations, travel times, service times, jumper’s savings, and helper’s service times, the goal is to determine both the delivery route and the most effective way to use the aide (e.g., sometimes as a jumper and sometimes as a helper) to minimize the total routing time. Methodology/results: We model this problem as an integer program with an exponential number of variables and an exponential number of constraints and propose a branch-cut-and-price approach for solving it. Our computational experiments are based on simulated instances built on real-world data provided by an industrial partner and a data set released by Amazon. The instances based on the Amazon data set show that this novel operation can lead to, on average, a 35.8% reduction in routing time and a 22.0% reduction in cost. More importantly, our results characterize the conditions under which this novel operation mode can lead to significant savings in terms of both the routing time and cost. Managerial implications: Our computational results show that the driver aide with both jumper and helper modes is most effective when there are denser service regions and when the truck’s speed is higher (≥10 miles per hour).
Coupled with an economic analysis, we derive rules of thumb (with close to 100% accuracy) to predict whether to use the aide and in which mode. Empirically, we find that service delivery routes with greater than 50% of the time devoted to delivery (as opposed to driving) are the ones that provide the greatest benefit. These routes are characterized by a high density of delivery locations. Supplemental Material: The e-companion is available at https://doi.org/10.1287/msom.2022.0211.
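A toy version of the jumper-vs-helper decision can be sketched as follows. Here the route is fixed, the single-aide timing constraints (the aide can only be in one place at a time) are ignored, and all numbers are hypothetical; the paper's actual method is a branch-cut-and-price that also chooses the route.

```python
from itertools import product

# Toy instance (all numbers hypothetical): a fixed route of 4 stops.
drive = [4, 3, 5, 2]          # minutes driving to each stop
service = [10, 6, 12, 8]      # solo service minutes at each stop
JUMPER_SAVING = 0.4           # jumper cuts service time by 40%
PICKUP_DETOUR = 5             # minutes to swing back for a dropped helper

def route_time(modes):
    """Total driver time for one mode choice per stop:
    'J' = aide rides along as jumper, 'H' = aide dropped off as helper."""
    t = 0.0
    for d, s, m in zip(drive, service, modes):
        t += d
        if m == "J":
            t += s * (1 - JUMPER_SAVING)  # reduced service time at the stop
        else:
            t += PICKUP_DETOUR  # helper serves the stop; driver detours back
    return t

# Enumerate all mode assignments and keep the fastest.
best = min(product("JH", repeat=len(drive)), key=route_time)
print(best, route_time(best))
```

Even this toy reproduces the qualitative insight: helper mode pays off at long-service stops, jumper mode at short ones, so mixing the two modes beats either alone.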
The Driver-Aide Problem: Coordinated Logistics for Last-Mile Delivery. S. Raghavan, Rui Zhang. Manufacturing & Service Operations Management, September 25, 2023. https://doi.org/10.1287/msom.2022.0211
Problem definition: Modern technologies have made it viable for firms to lower the costs of switching on and off production in response to market changes. In this paper, we explore how production start–stop flexibility impacts joint operating policies, financing, and investment timing decisions. Methodology/results: We develop a continuous-time, optimal stopping model in which equity holders of the firm make operational decisions regarding pausing and restarting production as well as when to default. The degree of production flexibility is measured by switching costs. On the one hand, production flexibility influences the trade-off between tax shields and default costs for the capital structure decision; on the other hand, debt levels impact the equity holders’ incentive to use flexibility by pausing and restarting operations. We find that optimal debt usage is not monotone in production flexibility. Specifically, when switching costs are in a low region, the optimal debt level decreases slowly as switching costs increase. As switching costs increase into an intermediate region, the optimal debt level decreases sharply because the firm needs to reduce its debt to ensure the equity holders maintain flexible operating policies. However, when switching costs exceed a threshold, the cost of compromising the use of debt becomes excessive, and the firm substantially increases its debt to gain the full benefit of the tax shield; in so doing, the equity holders forgo flexibility and maintain production continuously until default. This financing strategy affects the firm’s investment timing decision, which also exhibits a nonmonotone pattern. Managerial implications: When a firm optimizes the debt usage and investment timing, the incentive of utilizing flexibility embedded in the production technology by the equity holders needs to be taken into account. 
Our findings also reveal new benefits and guidance for the potential design of incentive contracts to mitigate agency costs. Funding: Q. Wu was supported by a Weatherhead School of Management Intramural Grant [Grant IG121420-05-QXW132] for this research. Supplemental Material: The online supplement is available at https://doi.org/10.1287/msom.2022.0213.
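The value of pause/restart flexibility and its sensitivity to switching costs can be illustrated with a discrete-time binomial toy, not the paper's continuous-time optimal-stopping model. All parameters (lattice moves, horizon, costs) are invented for illustration.

```python
from functools import lru_cache

def firm_value(switch_cost, p0=10.0, up=1.2, down=0.8, unit_cost=10.0,
               horizon=20, prob_up=0.5):
    """Value of a firm, starting in the 'running' state, that each period
    chooses to run (earn price - unit_cost) or pause (earn 0), paying
    `switch_cost` whenever it changes state. Price follows a binomial
    lattice; dynamic programming over (time, #up-moves, state)."""

    @lru_cache(maxsize=None)
    def value(t, n_up, running):
        if t == horizon:
            return 0.0
        price = p0 * up ** n_up * down ** (t - n_up)
        candidates = []
        for next_state in (True, False):  # run or pause this period
            flow = price - unit_cost if next_state else 0.0
            switch = switch_cost if next_state != running else 0.0
            cont = (prob_up * value(t + 1, n_up + 1, next_state)
                    + (1 - prob_up) * value(t + 1, n_up, next_state))
            candidates.append(flow - switch + cont)
        return max(candidates)

    return value(0, 0, True)

print(firm_value(0.0), firm_value(5.0), firm_value(1e9))
```

With free switching the firm captures an option value; as switching costs grow the flexibility erodes, and with prohibitive costs the firm simply runs continuously — here, for a break-even starting price, earning nothing in expectation. The paper studies how this flexibility interacts with debt and investment timing, which the toy omits.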
On the Interplay of Production Flexibility, Capital Structure, and Investment Timing. Guoming Lai, Peter Ritchken, Qi Wu. Manufacturing & Service Operations Management, September 25, 2023. https://doi.org/10.1287/msom.2022.0213
Problem definition: Revenue management in railways distinguishes itself from that in traditional sectors, such as airline, hotel, and fashion retail, in several important ways. (i) Capacity is substantially more flexible in the sense that changes to the capacity of a train can often be made throughout the sales horizon. Consequently, the joint optimization of prices and capacity assumes genuine importance. (ii) Capacity can only be added in discrete “chunks” (i.e., coaches). (iii) Passengers with unreserved tickets can travel in any of the multiple trains available during the day. Further, passengers in unreserved coaches are allowed to travel by standing, thus giving rise to the need to manage congestion. Motivated by our work with a major railway company in Japan, we analyze the problem of jointly optimizing pricing and capacity; this problem is a more general version of the canonical multiproduct dynamic-pricing problem. Methodology/results: Our analysis yields four asymptotically optimal policies. From the viewpoint of the pricing decisions, our policies can be classified into two types—static and dynamic. With respect to the timing of the capacity decisions, our policies are again of two types—fixed capacity and flexible capacity. We establish the convergence rates of these policies; when demand and supply are scaled by a factor [Formula: see text], the optimality gaps of the static policies scale proportionally to [Formula: see text], and those of the dynamic policies scale proportionally to [Formula: see text]. We illustrate the attractive performance of our policies on a test suite of instances based on real-world operations of the high-speed “Shinkansen” trains in Japan and develop associated insights. Managerial implications: Our work provides railway administrators with simple and effective policies for pricing, capacity, and congestion management.
Our policies cater to different contingencies that decision makers may face in practice: the need for static or dynamic prices and for fixed or flexible capacity. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0246.
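The joint price-and-coach decision can be sketched by brute force on a toy instance. The linear demand curve and all parameters are invented; the paper's policies additionally handle stochastic demand, unreserved tickets, and congestion, none of which appear here.

```python
# Hedged toy (numbers invented): jointly choose a ticket price and a number
# of discrete coaches, illustrating that capacity comes in chunks and is
# best optimized together with price rather than separately.

SEATS_PER_COACH = 100
COACH_COST = 900.0

def demand(price):
    # assumed deterministic linear demand curve, purely illustrative
    return max(0.0, 1000.0 - 40.0 * price)

def profit(price, coaches):
    capacity = SEATS_PER_COACH * coaches
    sold = min(demand(price), capacity)  # sales capped by coach capacity
    return price * sold - COACH_COST * coaches

# Grid search over prices (0 .. 30 in 0.5 steps) and coach counts (0 .. 10).
best = max(((p / 2, k) for p in range(0, 61) for k in range(0, 11)),
           key=lambda pk: profit(*pk))
print(best, profit(*best))
```

Note that the chunky capacity shifts the optimal price away from the unconstrained revenue-maximizing price: the firm prices to exactly fill a smaller number of coaches rather than pay for a coach that would run partly empty.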
{"title":"Dynamic Pricing and Capacity Optimization in Railways","authors":"Chandrasekhar Manchiraju, Milind Dawand, Ganesh Janakiraman, Arvind Raghunathan","doi":"10.1287/msom.2022.0246","DOIUrl":"https://doi.org/10.1287/msom.2022.0246","url":null,"abstract":"Problem definition: Revenue management in railways distinguishes itself from that in traditional sectors, such as airline, hotel, and fashion retail, in several important ways. (i) Capacity is substantially more flexible in the sense that changes to the capacity of a train can often be made throughout the sales horizon. Consequently, the joint optimization of prices and capacity assumes genuine importance. (ii) Capacity can only be added in discrete “chunks” (i.e., coaches). (iii) Passengers with unreserved tickets can travel in any of the multiple trains available during the day. Further, passengers in unreserved coaches are allowed to travel by standing, thus giving rise to the need to manage congestion. Motivated by our work with a major railway company in Japan, we analyze the problem of jointly optimizing pricing and capacity; this problem is more-general version of the canonical multiproduct dynamic-pricing problem. Methodology/results: Our analysis yields four asymptotically optimal policies. From the viewpoint of the pricing decisions, our policies can be classified into two types—static and dynamic. With respect to the timing of the capacity decisions, our policies are again of two types—fixed capacity and flexible capacity. We establish the convergence rates of these policies; when demand and supply are scaled by a factor [Formula: see text], the optimality gaps of the static policies scale proportional to [Formula: see text], and those of the dynamic policies scale proportional to [Formula: see text]. We illustrate the attractive performance of our policies on a test suite of instances based on real-world operations of the high-speed “Shinkansen” trains in Japan and develop associated insights. 
Managerial implications: Our work provides railway administrators with simple and effective policies for pricing, capacity, and congestion management. Our policies cater to different contingencies that decision makers may face in practice: the need for static or dynamic prices and for fixed or flexible capacity. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2022.0246 .","PeriodicalId":49901,"journal":{"name":"M&som-Manufacturing & Service Operations Management","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135816109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
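The joint price-and-coach decision behind such static policies can be illustrated with a toy deterministic (fluid) relaxation. The linear demand function, the parameter names, and the function below are hypothetical and far simpler than the paper's formulation; the sketch only shows how a single static price and a discrete coach count might be chosen jointly when capacity comes in chunks.

```python
def static_price_and_coaches(a, b, horizon, seats_per_coach, max_coaches, coach_cost):
    """Toy fluid-relaxation sketch (not the paper's policy): with linear
    demand rate d(p) = a - b*p, jointly choose a coach count and a single
    static price to maximize revenue minus capacity cost."""
    best = None
    for n in range(1, max_coaches + 1):
        cap = n * seats_per_coach
        p = a / (2 * b)                      # revenue maximizer ignoring capacity
        demand = (a - b * p) * horizon
        if demand > cap:                     # capacity binds: price to clear it
            p = (a - cap / horizon) / b
            demand = cap
        profit = p * demand - coach_cost * n
        if best is None or profit > best[0]:
            best = (profit, n, p)
    return best  # (profit, coaches, static price)
```

For instance, with demand rate 100 - p over a unit horizon, 30-seat coaches, and a per-coach cost of 100, the sketch attaches two coaches and prices at the unconstrained optimum rather than one coach at a congestion-clearing price.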
Problem definition: Guided delegation, in which companies provide guidelines when delegating supplier management to tier 1 firms, is common practice in managing complex supply chains. We study the benefits and risks of this approach in a three-tier supply chain setting consisting of a buying firm, a tier 1 contract manufacturer, and tier 2 suppliers, where responsibility risk stems from the tier 2 suppliers. We analyze the buyer’s profit and the supply chain’s responsibility risk under a guided-delegation model, in which the buyer specifies an authorized tier 2 supplier for the tier 1 manufacturer as a guideline and may audit the tier 1 firm for compliance. We compare this model with a full-delegation model, in which the buyer fully delegates tier 2 supplier selection to the tier 1 firm. Methodology/results: We formulate a Stackelberg game for the buyer’s contract design problem. We show that guided delegation does not always yield its expected results of lower supply chain risk and greater buyer profit. In instances where compliance auditing is financially more attractive to the buyer than paying a premium for tier 1’s responsible sourcing, guided delegation can result in higher profit but increased risk because compliance auditing cannot completely eliminate the risk. Additionally, when tier 1 anticipates buyer audits and demands higher wholesale prices to offset potential penalties, guided delegation may lead to decreased risk but lower profit for the buyer. Our analysis shows that delegating audit responsibilities to tier 1 often does not improve the buyer’s profit, and in situations where it does, it invariably raises the supply chain risk. Managerial implications: Our research identifies the potential downsides of guided delegation, offering insight for external stakeholders on where to focus efforts to avoid these pitfalls. It suggests that the intended benefits of guided delegation can only be realized when paired with compliance auditing.
Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2020.0446.
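The audit-versus-premium tension described above can be seen in a toy backward-induction sketch. Every parameter, payoff, and name here is hypothetical and much simpler than the paper's Stackelberg model; the point is only that auditing can be financially attractive to the buyer while leaving the supply chain risk higher.

```python
def compare_buyer_options(premium, audit_cost, detect_prob, penalty,
                          saving_unauth, risk_auth, risk_unauth):
    """Return stylized (buyer cost, chain risk) pairs under two buyer options."""
    # Option 1: pay a responsible-sourcing premium; tier 1 complies for sure.
    premium_outcome = (premium, risk_auth)
    # Option 2: audit for compliance; tier 1 deviates to the cheaper
    # unauthorized supplier iff its saving beats the expected penalty.
    deviates = saving_unauth > detect_prob * penalty
    audit_outcome = (audit_cost - (detect_prob * penalty if deviates else 0.0),
                     risk_unauth if deviates else risk_auth)
    return {"premium": premium_outcome, "audit": audit_outcome}
```

With, say, compare_buyer_options(10, 3, 0.5, 8, 6, 0.01, 0.2), expected penalty collections make auditing cheaper for the buyer than paying the premium, yet tier 1 still deviates, so the chain risk stays at the unauthorized-supplier level: the higher-profit-but-higher-risk outcome the abstract describes.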
“Effect of Guided Delegation and Information Proximity on Multitier Responsible Sourcing,” by Sammi Y. Tang and Jing-Sheng Song. M&SOM: Manufacturing & Service Operations Management, published September 19, 2023. https://doi.org/10.1287/msom.2020.0446
Warut Khern-am-nuai, Hyunji So, Maxime C. Cohen, Yossiri Adulyasak
Problem definition: Restaurant review platforms, such as Yelp and TripAdvisor, routinely receive large numbers of photos in their review submissions. These photos provide significant value for users who seek to compare restaurants. In this context, the choice of cover images (i.e., representative photos of the restaurants) can greatly influence the level of user engagement on the platform. Unfortunately, selecting these images can be time-consuming and often requires human intervention. At the same time, it is challenging to develop a systematic approach to assess the effectiveness of the selected images. Methodology/results: In this paper, we collaborate with a large review platform in Asia to investigate this problem. We discuss two image selection approaches, namely crowd-based and artificial intelligence (AI)-based systems. The AI-based system we use learns complex latent image features, which are further enhanced by transfer learning to overcome the scarcity of labeled data. We collaborate with the platform to deploy our AI-based system through a randomized field experiment to carefully compare both systems. We find that the AI-based system outperforms the crowd-based counterpart and boosts user engagement by 12.43%–16.05% on average. We then conduct empirical analyses on observational data to identify the underlying mechanisms that drive the superior performance of the AI-based system. Managerial implications: Finally, we infer from our findings that the AI-based system outperforms the crowd-based system for restaurants with (i) a longer tenure on the platform, (ii) a limited number of user-generated photos, (iii) a lower star rating, and (iv) lower user engagement under the crowd-based system. Funding: The authors acknowledge financial support from the Social Sciences and Humanities Research Council [Grant 430-2020-00106]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2021.0531.
“Selecting Cover Images for Restaurant Reviews: AI vs. Wisdom of the Crowd,” by Warut Khern-am-nuai, Hyunji So, Maxime C. Cohen, and Yossiri Adulyasak. M&SOM: Manufacturing & Service Operations Management, published September 7, 2023. https://doi.org/10.1287/msom.2021.0531
Problem definition: In a Markovian queueing system with strategic customers, a reward is gained from completing service, and a loss is incurred while waiting to be served. The common assumption in the queueing literature is that such loss is a function of the customer’s waiting time. This paper takes a different and novel approach in that it models the customer’s loss incurred because of negative network effects while waiting with others, which increases as the exposure to others increases. Methodology: Waiting time is complemented by two innovative measures that capture negative effects on a tagged customer joining an M/M/c queue: the total number of customers the tagged customer meets and person-time exposure to these customers while waiting to be served. Threshold joining strategies inducing M/M/c/n–type queues are studied in this context. Results: The distributions of exposure size and exposure time of a customer joining the queue at a given position are analytically derived. Equilibria under conditions of no reneging are identified as threshold strategies. If the customer’s loss function is concave (such as an exponential model for the chance of infection during a pandemic), there is an equilibrium threshold strategy under which customers do not renege from the queue, even if reneging is allowed. The price of anarchy caused by the lack of coordination among individually acting customers is identified. Unlike the equilibrium threshold built under the restrictive assumption that all potential customers have the same utility function, a novel safe threshold concept is introduced: a queue size at which a customer who joins the facility and stays until completing service has positive expected utility regardless of the actions of the other customers. Managerial implications: The implications of negative network effects caused by congestion in a queueing system are of interest to queue managers and, in particular, affect the optimal size of the waiting area.
Safe and equilibrium thresholds are contrasted with the socially optimal threshold set by a regulator, and the safe threshold is suggested as a managerial tool to design the waiting room size. Funding: This work was supported by the Israel Science Foundation [Grants ISF 1898/21 and ISF 852/22]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.1223.
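The two exposure measures can be illustrated with a small Monte Carlo sketch. This is an illustrative simulation under simplifying assumptions (the tagged customer finds k ≥ c others in the facility, service is FCFS, later arrivals join behind and never renege, and exposure is counted only until the tagged customer starts service); the paper derives these distributions analytically rather than by simulation.

```python
import random

def tagged_exposure(c, lam, mu, k, rng):
    """One sample of (customers met, person-time exposure) for a tagged
    customer joining an M/M/c queue that already holds k >= c others.
    While the tagged customer waits, all c servers stay busy, so by
    memorylessness departures occur at rate c*mu and new arrivals
    (joining behind the tagged customer) at rate lam."""
    departures_left = k - c + 1   # completions needed before tagged starts service
    others = k                    # others currently in the facility
    met = k                       # distinct others encountered so far
    exposure_time = 0.0
    while departures_left > 0:
        t_dep = rng.expovariate(c * mu)   # redraw both exponential clocks
        t_arr = rng.expovariate(lam)
        exposure_time += others * min(t_dep, t_arr)
        if t_dep <= t_arr:
            others -= 1
            departures_left -= 1
        else:
            others += 1
            met += 1
    return met, exposure_time

rng = random.Random(42)
samples = [tagged_exposure(c=2, lam=1.8, mu=1.0, k=5, rng=rng) for _ in range(20000)]
avg_met = sum(m for m, _ in samples) / len(samples)
```

Under these parameters the tagged customer's expected wait is (k - c + 1)/(c·mu) = 2, so the average exposure size should be near k + lam·2 = 8.6, which the simulation reproduces.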
“Queueing with Negative Network Effects,” by Refael Hassin, Isaac Meilijson, and Yael Perlman. M&SOM: Manufacturing & Service Operations Management, published September 1, 2023. https://doi.org/10.1287/msom.2023.1223
Problem definition: We study a dynamic finishing-stage planning problem of a pork producer who at the beginning of each week gets to see how many market-ready hogs she has available for sale and the current market prices. Then, she must decide which hogs to sell to a meatpacker and on the open market and which hogs to hold until the following week. The farmer is contracted to deliver a fixed quantity of hogs to the meatpacker each week priced according to a contractually predetermined market index. If the farmer underdelivers to the meatpacker, she pays a contractually predetermined unit penalty also linked to a market index. Biosecurity protocols prevent the farmer from buying hogs on the open market and selling them to the meatpacker. The farmer can, however, use the open market to sell hogs for prevailing market prices. Methodology/results: We treat the problem as a dynamic, multiitem, nonstationary inventory problem with multiple sources of uncertainty. The optimal policy is a threshold policy with multiple price-dependent thresholds. The computational complexity required to evaluate the thresholds is the biggest impediment to using the optimal policy as a decision-support tool. We therefore use an approximate dynamic programming approach that exploits the optimal policy structure and produces a sharp heuristic that is easy to implement. Managerial implications: Numerical experiments calibrated to a pork producer’s data reveal that the optimal policy with the heuristically estimated thresholds substantially improves on existing practice (by around 25% on average). The success of the proposed model is attributed to recognizing the value of holding underweight hogs and effectively hedging supply uncertainty and future prices—an insight missed in the planning actions of the current practice. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.1216.
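The flavor of a price-dependent threshold rule can be sketched as follows. This is a deliberately crude single-threshold simplification with hypothetical parameter names; the paper's optimal policy uses multiple price-dependent thresholds estimated via approximate dynamic programming.

```python
def weekly_selling_decision(ready_hogs, contract_qty, open_price,
                            hold_threshold, unit_penalty):
    """Stylized one-week decision: fill the meatpacker contract first,
    then sell surplus hogs on the open market only if the prevailing
    price clears a hold threshold; otherwise hold them to next week.
    Underdelivery to the meatpacker incurs the contractual penalty.
    Returns (to_meatpacker, to_open_market, held, penalty_cost)."""
    to_meatpacker = min(ready_hogs, contract_qty)
    shortfall = contract_qty - to_meatpacker
    surplus = ready_hogs - to_meatpacker
    to_open_market = surplus if open_price >= hold_threshold else 0
    held = surplus - to_open_market
    return to_meatpacker, to_open_market, held, shortfall * unit_penalty
```

For example, with 120 market-ready hogs against a 100-hog contract and an open-market price above the threshold, the rule delivers 100 and sells the 20 surplus hogs; with only 80 hogs available, it delivers all 80 and pays the per-unit underdelivery penalty on the 20-hog shortfall.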
“Managing Operations of a Hog Farm Facing Volatile Markets: Inventory and Selling Strategies,” by Panos Kouvelis, Ye Liu, Yunzhe Qiu, and Danko Turcic. M&SOM: Manufacturing & Service Operations Management, published September 1, 2023. https://doi.org/10.1287/msom.2023.1216