Vladimir Kovtun, Avi Giloni, Clifford Hurvich, Noam Shamir
In this paper, we propose a novel information‐sharing mechanism that allows a retailer to control the amount of shared information, and thus to limit information leakage, while still assisting the supplier in making better‐informed decisions and improving the overall efficiency of the supply chain. Controlling the amount of leaked information facilitates information sharing because, absent such control, a retailer may refrain from sharing information out of concern over leakage. Specifically, we analyze a supply chain in which a retailer observes Autoregressive Moving Average (ARMA) demand for a single product and all players use the myopic order‐up‐to policy to determine their orders. We introduce a new class of information‐sharing arrangements, which we coin partial‐information shock (PaIS) sharing. This class extends the previously studied mechanisms of demand sharing and full‐information shock sharing. We demonstrate that the retailer can construct a PaIS sharing arrangement that allows an intermediate level of information sharing while simultaneously controlling the amount of leakage. We characterize when one PaIS arrangement is more valuable to the supplier than another. We conclude with a numerical study showing that a better supplier forecast does not necessarily come at the cost of higher information leakage for the retailer.
{"title":"Partial information sharing in supply chains with ARMA demand","authors":"Vladimir Kovtun, Avi Giloni, Clifford Hurvich, Noam Shamir","doi":"10.1002/nav.22227","DOIUrl":"https://doi.org/10.1002/nav.22227","url":null,"abstract":"In this paper we suggest a novel mechanism for information sharing that allows a retailer to control the amount of shared information, and thus to limit information leakage, while still assisting the supplier to make better‐informed decisions and improve the overall efficiency of the supply chain. The control of the amount of leaked information facilitates information sharing because, absent such control, a retailer may refrain from sharing information due to the concern of information leakage. Specifically, we analyze a supply chain in which a retailer observes Autoregressive Moving Average (ARMA) demand for a single product where all players use the myopic order‐up‐to policy for determining their orders. We introduce a new class of information sharing arrangements, coined partial‐information shock (PaIS) sharing. This new class of information sharing agreements extends the previously studied mechanisms of demand sharing and full‐information shock sharing. We demonstrate that the retailer can construct a PaIS sharing arrangement that allows for an intermediate level of information sharing while simultaneously controlling the amount of leakage. We characterize when one PaIS arrangement will be more valuable to the supplier than another. We conclude with a numerical study that highlights that there does not necessarily need to be a tradeoff between the supplier having a better forecast and the retailer experiencing a higher level of leakage.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast and reliable remaining useful life (RUL) prediction plays a critical role in the prognostics and health management of industrial assets. Owing to advances in data‐collection techniques, RUL prediction based on degradation data has attracted considerable attention over the past decade. In the literature, the majority of studies have focused on RUL prediction using the Wiener process as the underlying degradation model. When the degradation path is monotone, however, the inverse Gaussian (IG) process is a popular alternative to the Wiener process. Despite the importance of the IG process in degradation modeling, there remains a paucity of studies on RUL prediction based on it. The principal objective of this study is therefore to provide a systematic analysis of RUL prediction based on the IG process. We first propose a series of novel online estimation algorithms so that the model parameters can be efficiently updated whenever a new collection of degradation measurements becomes available. The distribution of the RUL is then derived and can also be recursively updated. In view of possible heterogeneity among different systems, we further extend the proposed online algorithms to the IG random‐effect model. Numerical studies and asymptotic analysis show that both the parameters and the RUL can be efficiently and credibly estimated by the proposed algorithms. Finally, two real degradation datasets are used for illustration.
{"title":"Efficient online estimation and remaining useful life prediction based on the inverse Gaussian process","authors":"Ancha Xu, Jingyang Wang, Yincai Tang, Piao Chen","doi":"10.1002/nav.22226","DOIUrl":"https://doi.org/10.1002/nav.22226","url":null,"abstract":"Fast and reliable remaining useful life (RUL) prediction plays a critical role in prognostic and health management of industrial assets. Due to advances in data‐collecting techniques, RUL prediction based on the degradation data has attracted considerable attention during the past decade. In the literature, the majority of studies have focused on RUL prediction using the Wiener process as the underlying degradation model. On the other hand, when the degradation path is monotone, the inverse Gaussian (IG) process has been shown as a popular alternative to the Wiener process. Despite the importance of IG process in degradation modeling, however, there remains a paucity of studies on the RUL prediction based on the IG process. Therefore, the principal objective of this study is to provide a systematic analysis of the RUL prediction based on the IG process. We first propose a series of novel online estimation algorithms so that the model parameters can be efficiently updated whenever a new collection of degradation measurements is available. The distribution of RUL is then derived, which could also be recursively updated. In view of the possible heterogeneities among different systems, we further extend the proposed online algorithms to the IG random‐effect model. Numerical studies and asymptotic analysis show that both the parameters and the RUL can be efficiently and credibly estimated by the proposed algorithms. At last, two real degradation datasets are used for illustration.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a double‐sided queueing model with batch Markovian arrival processes (BMAPs) and finite discrete abandonment times, which arises in various stochastic systems such as perishable inventory systems and financial markets. Customers arrive at the system with a batch of orders to be matched by counterparts. While waiting to be matched, customers become impatient and may abandon the system without service. The abandonment time of a customer depends on its batch size and its position in the queue. First, we propose an approach to obtain the stationary joint distribution of the age processes via the stationary analysis of a multi‐layer Markov‐modulated fluid flow process. Second, using this stationary joint distribution, we derive a number of queueing quantities related to matching rates, fill rates, sojourn times, and queue lengths for both sides of the system. Finally, we apply our model to analyze a vaccine inventory system and gain insight into the effect of uncertainty in the supply and demand processes on the performance of the inventory system. We observe that BMAPs are better choices for modeling the supply and demand processes in systems with high uncertainty, as they yield more accurate performance measures.
{"title":"Double‐sided queues and their applications to vaccine inventory management","authors":"Haoran Wu, Qi‐Ming He, Fatih Safa Erenay","doi":"10.1002/nav.22224","DOIUrl":"https://doi.org/10.1002/nav.22224","url":null,"abstract":"We consider a double‐sided queueing model with batch Markovian arrival processes (BMAPs) and finite discrete abandonment times, which arises in various stochastic systems such as perishable inventory systems and financial markets. Customers arrive at the system with a batch of orders to be matched by counterparts. While waiting to be matched, customers become impatient and may abandon the system without service. The abandonment time of a customer depends on its batch size and its position in the queue. First, we propose an approach to obtain the stationary joint distribution of age processes via the stationary analysis of a multi‐layer Markov modulated fluid flow process. Second, using the stationary joint distribution of the age processes, we derive a number of queueing quantities related to matching rates, fill rates, sojourn times and queue length for both sides of the system. Last, we apply our model to analyze a vaccine inventory system and gain insight into the effect of uncertainty in supply and demand processes on the performance of the inventory system. It is observed that BMAPs are better choices for modeling the supply/demand process in systems with high uncertainty for more accurate performance quantities.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unexpected failures of safety‐critical systems during mission execution are undesirable because they often result in severe safety hazards and significant financial losses. Prompt mission abort based on real‐time degradation data is an effective means to prevent such failures and enhance system safety. In this study, we focus on safety‐critical systems that experience cumulative shock degradation and fail when the degradation exceeds a failure threshold. Real‐time degradation measurements are obtained via sensor monitoring and are stochastically related to hidden degradation parameters that vary across components. We formulate the optimal mission risk control problem as a sequential abort decision‐making problem that integrates adaptive parameter learning, in which a dynamic Bayesian learning approach is exploited to sequentially infer the uncertain degradation parameters from real‐time sensor data. The problem is formulated as a finite‐horizon Markov decision process that minimizes the expected costs associated with inspections, mission failures, and system failures. We derive a series of structural properties of the value function and demonstrate the existence of optimal abort thresholds. In particular, we establish that the optimal policy follows a state‐dependent control‐limit policy. Additionally, we study the existence and monotonicity of the control limits with respect to both the number of inspections and the degradation severity. We demonstrate the performance of the proposed risk management policy through comparative experiments that show substantial advantages over risk‐induced loss control.
{"title":"Optimal condition‐based parameter learning and mission abort decisions","authors":"Li Yang, Yuhan Ma, Fanping Wei, Qingan Qiu","doi":"10.1002/nav.22225","DOIUrl":"https://doi.org/10.1002/nav.22225","url":null,"abstract":"Unexpected failures of safety‐critical systems during mission execution are not desirable in that they often result in severe safety hazards and significant financial losses. Prompt mission abort based on real‐time degradation data is an effective means to prevent such failures and enhance system safety. In this study, we focus on safety‐critical systems that experience cumulative shock degradation and fails when the degradation exceeds a failure threshold. Real‐time degradation measurements are obtained via sensor monitoring, which are stochastically related to the hidden degradation parameters that vary across components. We formulate the optimal mission risk control problem as a sequential abort decision‐making problem that integrates adaptive parameter learning, following which a dynamic Bayesian learning approach is exploited to sequentially infer the uncertain degradation parameters by utilizing real‐time sensor data. The problem is constituted as a finite horizon Markov decision process to minimize the expected costs associated with inspections, mission failures and system failures. We derive a series of structural properties of the value function and demonstrate the existence of optimal abort thresholds. In particular, we establish that the optimal policy follows a state‐dependent control limit policy. Additionally, we study the existence and monotonicity of control limits associated with both the number of inspections and degradation severities. We demonstrate the performance of the proposed risk management policy through comparative experiments that show substantial superiorities over risk‐induced loss control.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yao‐Wen Sang, Jun‐Qiang Wang, Małgorzata Sterna, Jacek Błażewicz
We study a single machine scheduling problem with the total weighted late work and the total rejection cost. The late work of a job is the part of the job executed after its due date, and the rejection cost of a job is the fee incurred for rejecting to process it. We consider a Pareto scheduling problem and two restricted scheduling problems. The Pareto scheduling problem aims to find all non‐dominated values of the total weighted late work and the total rejection cost. The restricted scheduling problems minimize the total weighted late work subject to the total rejection cost not exceeding a given threshold, and the total rejection cost subject to the total weighted late work not exceeding a given threshold, respectively. For the Pareto scheduling problem, we prove that it is binary NP‐hard by providing a pseudo‐polynomial time algorithm, and give a fully polynomial time approximation scheme (FPTAS). For the restricted scheduling problems, we prove that there are no FPTASes unless P = NP, which answers an open problem. Moreover, we develop relaxed FPTASes for these two restricted scheduling problems.
{"title":"Single machine scheduling with the total weighted late work and rejection cost","authors":"Yao‐Wen Sang, Jun‐Qiang Wang, Małgorzata Sterna, Jacek Błażewicz","doi":"10.1002/nav.22222","DOIUrl":"https://doi.org/10.1002/nav.22222","url":null,"abstract":"We study a single machine scheduling problem with the total weighted late work and the total rejection cost. The late work of a job is the part of this job executed after its due date, and the rejection cost of a job is the fee of rejecting to process it. We consider a Pareto scheduling and two restricted scheduling problems. The Pareto scheduling problem aims to find all non‐dominated values of the total weighted late work and the total rejection cost. The restricted scheduling problems are dedicated to minimizing the total weighted late work with the total rejection cost not greater than a given threshold, and minimizing the total rejection cost with the total weighted late work not greater than a given threshold, respectively. For the Pareto scheduling problem, we prove that it is binary NP‐hard by providing a pseudo‐polynomial time algorithm, and give a fully polynomial time approximation scheme (FPTAS). For the restricted scheduling problems, we prove that there are no FPTASes unless , which answers an open problem. Moreover, we develop relaxed FPTASes for these two restricted scheduling problems.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tao Sun, Jun‐Qiang Wang, Guo‐Qiang Fan, Zhixin Liu
This article studies a submodular batch scheduling problem motivated by vacuum heat treatment. The batch processing time is given by a monotone nondecreasing submodular function characterized by the decreasing marginal gain property. The objective is to minimize the makespan. We show the NP‐hardness of the problem on a single machine and of finding a polynomial‐time approximation algorithm with the worst‐case performance ratio strictly less than for the problem on parallel machines. We introduce a bounded interval to model the batch processing time using two parameters, namely the total curvature and the quantization indicator. Based on the decreasing marginal gain property and these two parameters, we systematically analyze the full batch longest processing time algorithm and the longest processing time greedy algorithm, and provide instances that establish bounds in terms of the batch capacity for these two algorithms for the submodular batch scheduling problem. Moreover, we prove the submodularity of the batch processing time functions of existing batch models, including the parallel batch, serial batch, and mixed batch models. We compare the worst‐case performance ratios in the existing batch models with those deduced from our work for the submodular batch model. In most situations, the worst‐case performance ratios deduced from our work are comparable to the best‐known worst‐case performance ratios obtained through tailored analyses.
{"title":"Submodular batch scheduling on parallel machines","authors":"Tao Sun, Jun‐Qiang Wang, Guo‐Qiang Fan, Zhixin Liu","doi":"10.1002/nav.22221","DOIUrl":"https://doi.org/10.1002/nav.22221","url":null,"abstract":"This article studies a submodular batch scheduling problem motivated by the vacuum heat treatment. The batch processing time is formulated by a monotone nondecreasing submodular function characterized by decreasing marginal gain property. The objective is to minimize the makespan. We show the NP‐hardness of the problem on a single machine and of finding a polynomial‐time approximation algorithm with the worst‐case performance ratio strictly less than for the problem on parallel machines. We introduce a bounded interval to model the batch processing time using two parameters, that is, the total curvature and the quantization indicator. Based on the decreasing marginal gain property and the two parameters, we make a systematic analysis of the full batch longest processing time algorithm and the longest processing time greedy algorithm, and propose the instances with the bound of batch capacity for these two algorithms for the submodular batch scheduling problem. Moreover, we prove the submodularity of batch processing time function of the existing batch models including the parallel batch, serial batch, and mixed batch models. We compare the worst‐case performance ratios in the existing batch models with those deduced from our work in the submodular batch model. In most situations, the worst‐case performance ratios deduced from our work are comparable to the best‐known worst‐case performance ratios with tailored examinations.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142199715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Danna Chen, Ying Zhu, Xiaogang Lin, Qiang Lin, Ying‐Ju Chen
In practice, suppliers sell their products through online intermediaries who sell them to customers (reselling) or directly access customers via intermediaries by paying a proportional fee (agency selling). Unlike giant intermediaries, these suppliers have smaller scales and are more risk‐averse. Motivated by practical examples, this paper studies these intermediaries' incentives for vertical demand information sharing with their suppliers. We develop a game‐theoretic model to consider a hybrid e‐commerce supply chain with a risk‐neutral intermediary and two risk‐averse suppliers, where one supplier (agency supplier) adopts agency selling while the other supplier (reselling supplier) employs reselling. As a benchmark, we show that it is beneficial for the intermediary to share all (no) information with both risk‐neutral suppliers if the proportional fee is relatively high (low). However, we find that suppliers' risk aversion is a key factor leading to supply chain members' pricing decisions being influenced by the precision of the demand information. This influence impacts the double marginalization effect and further changes the intermediary's information‐sharing decisions. Specifically, the intermediary should disclose part rather than all of its information to both risk‐averse suppliers if the proportional fee is high (intermediate) in a weakly (highly) competitive market environment. Finally, when the reselling supplier's sensitivity to risk is sufficiently high (low) relative to the agency supplier's sensitivity to risk, we observe that the intermediary is less (more) willing to share information.
{"title":"Impact of suppliers' risk aversions on information sharing in a hybrid E‐commerce supply chain","authors":"Danna Chen, Ying Zhu, Xiaogang Lin, Qiang Lin, Ying‐Ju Chen","doi":"10.1002/nav.22216","DOIUrl":"https://doi.org/10.1002/nav.22216","url":null,"abstract":"In practice, suppliers sell their products through online intermediaries who sell them to customers (reselling) or directly access customers via intermediaries by paying a proportional fee (agency selling). Unlike giant intermediaries, these suppliers have smaller scales and are more risk‐averse. Motivated by practical examples, this paper studies these intermediaries' incentives for vertical demand information sharing with their suppliers. We develop a game‐theoretic model to consider a hybrid e‐commerce supply chain with a risk‐neutral intermediary and two risk‐averse suppliers, where one supplier (agency supplier) adopts agency selling while the other supplier (reselling supplier) employs reselling. As a benchmark, we show that it is beneficial for the intermediary to share <jats:italic>all</jats:italic> (<jats:italic>no</jats:italic>) information with both risk‐neutral suppliers if the proportional fee is relatively high (low). However, we find that suppliers' risk aversion is a key factor leading to supply chain members' pricing decisions being influenced by the precision of the demand information. This influence impacts the double marginalization effect and further changes the intermediary's information‐sharing decisions. Specifically, the intermediary should disclose <jats:italic>part</jats:italic> rather than <jats:italic>all</jats:italic> of its information to both risk‐averse suppliers if the proportional fee is high (intermediate) in a weakly (highly) competitive market environment. Finally, when the reselling supplier's sensitivity to risk is sufficiently high (low) relative to the agency supplier's sensitivity to risk, we observe that the intermediary is less (more) willing to share information.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141872171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We consider a deterministic dynamic pricing problem for a product that exhibits network effects and that is sold to a fixed heterogeneous population of customers. We begin by introducing a demand model wherein those customers are arrayed over two‐dimensional space according to a bivariate probability distribution. Each customer's location in space provides a description of that customer's intrinsic value for the product as well as the extent to which the customer is influenced by the network effect. In the pricing problem, as sales accumulate over time, the set of customers who have already purchased the product grows, while the set of customers who have not yet purchased the product shrinks. The total customer population remains fixed. Those who have not yet purchased constitute the remaining population of potential buyers of the product. As time moves forward, the mix of customers that remain as potential buyers evolves endogenously. The demand model yields a geometric interpretation of the remaining population of potential buyers, and gives rise to a dynamic program with states that are sets in two‐dimensional space. It is not practical to solve the dynamic pricing problem to optimality, so we present bounds and comparative statics results that help us identify tractable heuristics and obtain rigorous performance guarantees. In numerical experiments, we find that fixed‐price policies may perform poorly, especially when the network effect is strong or the time horizon is long. We also introduce a stochastic version of the problem that uses a spatial Poisson process to describe the customers, and we develop and analyze a heuristic approach for that formulation.
{"title":"Pricing a product with network effects for sale to a fixed population of customers","authors":"Tongqing Chen, William L. Cooper","doi":"10.1002/nav.22219","DOIUrl":"https://doi.org/10.1002/nav.22219","url":null,"abstract":"We consider a deterministic dynamic pricing problem for a product that exhibits network effects and that is sold to a fixed heterogeneous population of customers. We begin by introducing a demand model wherein those customers are arrayed over two‐dimensional space according to a bivariate probability distribution. Each customer's location in space provides a description of that customer's intrinsic value for the product as well as the extent to which the customer is influenced by the network effect. In the pricing problem, as sales accumulate over time, the set of customers who have already purchased the product grows, while the set of customers who have not yet purchased the product shrinks. The total customer population remains fixed. Those who have not yet purchased constitute the remaining population of potential buyers of the product. As time moves forward, the mix of customers that remain as potential buyers evolves endogenously. The demand model yields a geometric interpretation of the remaining population of potential buyers, and gives rise to a dynamic program with states that are sets in two‐dimensional space. It is not practical to solve the dynamic pricing problem to optimality, so we present bounds and comparative statics results that help us identify tractable heuristics and obtain rigorous performance guarantees. In numerical experiments, we find that fixed‐price policies may perform poorly, especially when the network effect is strong or the time horizon is long. We also introduce a stochastic version of the problem that uses a spatial Poisson process to describe the customers, and we develop and analyze a heuristic approach for that formulation.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141872212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advancement of online‐ordering technology, an increasing number of service providers are transforming to sell services through online channels alongside traditional offline stores. Our paper studies this emerging business model (also known as omnichannel services) in which customers can choose between online and offline ordering. We develop a queueing‐game‐theoretic model to evaluate the performance of omnichannel systems in terms of throughput and social welfare when customers strategically choose their ordering channels in the presence of service valuation uncertainty. We also contrast omnichannel services with single‐channel services, that is, online‐only and offline‐only services. Our analysis yields the following main insights. First, although customers run the risk of making suboptimal decisions and suffering from unexpected losses when they are uncertain about their service valuations, we find that all customers would be better off in expectation with an increasing level of service valuation uncertainty. Second, while social welfare improves as ordering online becomes more convenient in some cases, it (even online customer welfare) can also be worse off in others, especially when the system operates under heavy congestion, because customers' self‐interested channel choice behavior would impose significant negative congestion externalities among customers. Third, despite the fact that omnichannel services provide customers with an additional, more convenient ordering channel option in comparison with conventional offline‐only services, we find that, somewhat surprisingly, adopting omnichannel services does not necessarily guarantee improvement in social welfare. Finally, we discuss two alternative modeling assumptions to demonstrate the robustness of our main insights.
{"title":"Omnichannel services in the presence of customers' valuation uncertainty","authors":"Huan Liu, Ping Cao, Yaolei Wang","doi":"10.1002/nav.22213","DOIUrl":"https://doi.org/10.1002/nav.22213","url":null,"abstract":"With the advancement of online‐ordering technology, an increasing number of service providers are transforming to sell services through online channels alongside traditional offline stores. Our paper studies this emerging business model (also known as omnichannel services) in which customers can choose between online and offline ordering. We develop a queueing‐game‐theoretic model to evaluate the performance of omnichannel systems in terms of throughput and social welfare when customers strategically choose their ordering channels in the presence of service valuation uncertainty. We also contrast omnichannel services with single‐channel services, that is, online‐only and offline‐only services. Our analysis yields the following main insights. First, although customers run the risk of making suboptimal decisions and suffering from unexpected losses when they are uncertain about their service valuations, we find that all customers would be better off in expectation with an increasing level of service valuation uncertainty. Second, while social welfare improves as ordering online becomes more convenient in some cases, it (even online customer welfare) can also be worse off in others, especially when the system operates under heavy congestion, because customers' self‐interested channel choice behavior would impose significant negative congestion externalities among customers. Third, despite the fact that omnichannel services provide customers with an additional, more convenient ordering channel option in comparison with conventional offline‐only services, we find that, somewhat surprisingly, adopting omnichannel services does not necessarily guarantee improvement in social welfare. Finally, we discuss two alternative modeling assumptions to demonstrate the robustness of our main insights.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141770825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The extant literature has mostly blamed the setup costs and added transportation costs for the failures of relocating procurement centers overseas, whereas internal decentralization has long been ignored. In practice, issues such as the arm's length principle and information asymmetry can prevent the headquarters from fully centralizing its divisions when offshoring its procurement center overseas. To examine the impact of internal decentralization, we investigate the decision of a firm's headquarters regarding whether to set up an overseas procurement center and whether to further decentralize the procurement center. This article analyzes a stylized model and reveals that, even without setup and transportation costs, offshoring procurement centers is not always beneficial for the firm under the impact of transfer pricing unless the tax advantage is large enough. An offshoring procurement system with a decentralized procurement center can outperform one with a centralized procurement center when the tax rate disparity is large, because the headquarters' screening of procurement cost information makes it benefit more from a decentralized procurement center when the tax rate gap is large. Besides the intuitive trade‐off between the tax‐saving effect and the double marginalization effect caused by internal decentralization, some indirect effects, namely the cost‐saving effect of the procurement effort and the tax‐paying asymmetry effect (i.e., the procurement center still pays a positive tax even if the headquarters is not profitable), are found to have a significant impact on the headquarters' choice. Screening the procurement cost information amplifies the advantage of the procurement independent accounting system when the tax rate disparity is large but decreases it otherwise.
{"title":"Offshoring procurement systems with internal decentralization","authors":"Zhiqiao Wu, Xueping Zhen, Gangshu (George) Cai, Jiafu Tang","doi":"10.1002/nav.22215","DOIUrl":"https://doi.org/10.1002/nav.22215","url":null,"abstract":"The extant literature has mostly blamed the setup costs and added transportation costs for the failures of relocating procurement centers overseas, whereas internal decentralization has long been ignored. In practice, issues such as the arm's length principle and information asymmetry can prevent the headquarters from fully centralizing its divisions when offshoring its procurement center overseas. To examine the impact of internal decentralization, we investigate the decision of a firm's headquarters regarding whether to set up an overseas procurement center and whether to further decentralize the procurement center. This article investigates a stylized model and reveals that, even without the setup and transportation costs, it is not always beneficial for the firm to offshore procurement centers under the impact of transfer pricing unless the tax advantage is big enough. An offshoring procurement system with a decentralized procurement center can outperform one with a centralized procurement center when the tax rate disparity is large, because the headquarters' procurement cost information screening makes it benefit more from a decentralized procurement center when the tax rate gap is big. Besides the intuitive trade‐off between the tax‐saving effect and the double marginalization effect caused by the internal decentralization, some indirect effects—cost‐saving effect of the procurement effort and the tax‐paying asymmetry effect (i.e., the procurement center still pays a positive tax even if the headquarters is not profitable)—are found to have a significant impact on the headquarters' choice. Screening the procurement cost information amplifies the advantage of the procurement independent accounting system when the tax rate disparity is big but decreases it otherwise.","PeriodicalId":49772,"journal":{"name":"Naval Research Logistics","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2024-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141746277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}