A robust data-driven maximum experts consensus modeling approach considering fairness concerns under uncertain contexts
Pub Date: 2026-01-12 | DOI: 10.1016/j.ejor.2026.01.009
Jinpeng Wei, Xuanhua Xu, Qiuhan Wang, Zongrun Wang, Weiwei Guo, Francisco Javier
Due to the uncertainty of information, decision-makers within a group often compare themselves with others to determine whether they are being treated fairly, which introduces significant instability into consensus management. To provide a reliable solution, this study aims to achieve fair consensus in uncertain environments. First, fairness concerns are incorporated into the maximum experts consensus model to measure decision-makers' fairness utility levels and to reveal the relationship between their opinion-adjustment behavior and fair consensus. Additionally, to characterize the uncertainty of consensus parameters more accurately and objectively, we use a kernel estimation method based on historical decision data to capture the uncertain features of costs and opinions separately, and analyze their impact on fair consensus. Robust optimization methods are then employed to mitigate the decision risks associated with these uncertainties, and several robust data-driven consensus models are constructed. These models not only eliminate the decision risks arising from uncertainty but also, to some extent, address the overly conservative consensus often produced by traditional experience-driven robust optimization. We also develop an improved particle swarm optimization algorithm to solve the robust models. Finally, extensive numerical analysis demonstrates that our approach produces more stable and reliable decision outcomes.
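The data-driven characterization of uncertain parameters described above can be illustrated with a short sketch. The snippet below is a minimal, hypothetical example rather than the authors' implementation: it fits a Gaussian kernel density estimate to historical unit adjustment costs and derives a box uncertainty interval from KDE quantiles, which a robust consensus model could then use; the function name, bandwidth rule, and quantile level are all assumptions.

```python
import numpy as np

def kde_cost_interval(historical_costs, alpha=0.05, grid_size=512):
    """Fit a Gaussian KDE to historical unit adjustment costs and return an
    (alpha/2, 1 - alpha/2) quantile interval usable as a box uncertainty set.
    Silverman's rule sets the bandwidth; this is an illustrative choice."""
    x = np.asarray(historical_costs, dtype=float)
    n = x.size
    h = 1.06 * x.std(ddof=1) * n ** (-1 / 5)          # Silverman bandwidth
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, grid_size)
    # Gaussian KDE evaluated on the grid
    dens = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2).sum(axis=1)
    dens /= n * h * np.sqrt(2 * np.pi)
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    lo = grid[np.searchsorted(cdf, alpha / 2)]
    hi = grid[np.searchsorted(cdf, 1 - alpha / 2)]
    return lo, hi

# Example: a robust model would treat each expert's unit cost as lying in [lo, hi].
lo, hi = kde_cost_interval(np.random.default_rng(0).gamma(2.0, 1.5, size=200))
```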
{"title":"A robust data-driven maximum experts consensus modeling approach considering fairness concerns under uncertain contexts","authors":"jinpeng wei, xuanhua xu, qiuhan wang, zongrun wang, weiwei guo, francisco Javier","doi":"10.1016/j.ejor.2026.01.009","DOIUrl":"https://doi.org/10.1016/j.ejor.2026.01.009","url":null,"abstract":"Due to the uncertainty of information, decision-makers within a group often seek to compare themselves with others to determine whether they are being treated fairly, which introduces significant instability into consensus management. To provide a reliable solution, this study aims to achieve fair consensus in uncertain environments. First, fairness concerns are incorporated into the maximum experts consensus model, measuring decision-makers’ fairness utility levels and revealing the relationship between their opinion adjustment behavior and fair consensus. Additionally, to more accurately and objectively characterize the uncertainty of consensus parameters, we use a kernel estimation method based on historical decision data to capture the uncertain features of both costs and opinions separately, thereby analyzing their impact on fair consensus. Robust optimization methods are then employed to mitigate the decision risks associated with these uncertainties, and various robust data-driven consensus models are constructed. These models not only eliminates the decision risks arising from uncertainty, but also addresses the issue of conservative consensus often encountered in traditional experience-driven robust optimization to some extent. We also developed an improved particle swarm optimization algorithm to solve the robust models. Finally, extensive numerical analysis results demonstrate that our approach produces more stable and reliable decision outcomes.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"57 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Periodic review inventory control for an omnichannel retailer with partial lost-sales
Pub Date: 2026-01-12 | DOI: 10.1016/j.ejor.2026.01.012
Ben Lowery, Anna-Lena Sachs, Idris A. Eckley, Louise Lloyd
We investigate the management of stock for a business with integrated online and offline store-fronts selling products that face demand uncertainty. The integration of channels includes an opportunity for customers to have items sent directly to their home in case of a store stockout. We model a two-echelon divergent, periodic-review inventory system with partial lost sales at the store level and an online demand channel. The problem is formulated as a stochastic dynamic program minimising inventory costs. For the zero lead-time case, we prove desirable structural properties and derive ordering decisions based on the optimality of a base-stock policy. For positive lead times, we highlight the effectiveness of adding order caps to reduce system costs. In an extensive numerical study, we improve on the costs of standard heuristic methods from the literature by up to 19%. Further, we apply our methods to real-life data from a large mobile phone retailer, Tesco Mobile, and outperform the internal benchmark method. We show how the company's target service level can be reached with a reduction of inventory between 75% and 99% at the store level. By focusing on effective yet interpretable policies, we suggest methods that can aid a decision maker in a practical context.
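As a concrete illustration of the capped base-stock idea discussed above, the following minimal sketch (function name and single-location framing are assumptions, not the paper's code) computes a period's order quantity from an order-up-to level subject to an order cap.

```python
def capped_base_stock_order(inventory_position: float,
                            order_up_to_level: float,
                            order_cap: float) -> float:
    """Order up to the base-stock level, but never more than the cap.
    A plain base-stock policy is recovered by setting order_cap to infinity."""
    shortfall = max(order_up_to_level - inventory_position, 0.0)
    return min(shortfall, order_cap)

# Example: position is 12 units below the target, cap of 8 => order 8 this period.
assert capped_base_stock_order(38.0, 50.0, 8.0) == 8.0
```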
{"title":"Periodic review inventory control for an omnichannel retailer with partial lost-sales","authors":"Ben Lowery, Anna-Lena Sachs, Idris A. Eckley, Louise Lloyd","doi":"10.1016/j.ejor.2026.01.012","DOIUrl":"https://doi.org/10.1016/j.ejor.2026.01.012","url":null,"abstract":"We investigate the management of stock for a business with integrated online and offline store-fronts selling products facing uncertainty in demand. The integration of channels includes an opportunity for customers to have items sent directly to their home in case of a store stockout. We model a two-echelon divergent, periodic-review inventory model, with partial lost-sales at the store level and an online demand channel. The problem is developed as a Stochastic Dynamic Program minimising inventory costs. For the zero lead-time case, we prove desirable properties and develop ordering decisions based on optimality of a base-stock policy. For positive lead-time, we highlight the effectiveness of adding order caps to reduce system costs. In an extensive numerical study, we improve standard heuristic methods in the literature on costs by up to 19%. Further, we apply methods to real life data for a large mobile phone retailer, Tesco Mobile, with our methods outperforming the internal benchmark method. We show how the company’s target service level can be reached, with a reduction of inventory between 75% and 99% at the store level. By focusing on effective yet interpretable policies, we suggest methods that can be used to aid a decision maker in a practical context.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"1 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dice and slice simulation optimization for high-dimensional discrete problems
Pub Date: 2026-01-10 | DOI: 10.1016/j.ejor.2026.01.005 | EJOR 330(3): 850–863
Harun Avci , Barry L. Nelson , Eunhye Song , Andreas Wächter
Although much progress has been made in simulation optimization, problems involving computationally expensive simulations having high-dimensional, discrete decision-variable spaces have been stubbornly resistant to solution. For this class of problems we propose Dice and Slice Simulation Optimization (DASSO). DASSO is a form of Bayesian optimization that represents the prior on the objective function implied by the simulation as a sum of low-dimensional Gaussian Markov random fields. This prior is consistent with the full-dimensional objective function, rather than assuming that it is actually separable. By working iteratively between posteriors on these low-dimensional “dice” and a full-dimensional “slice” of the decision-variable space, DASSO makes rapid progress with little algorithm overhead even on problems with more than a trillion feasible solutions. We achieve further computational savings by showing that we can find the best solution to simulate on each iteration without having to assess the potential of all solutions—as is traditionally done in Bayesian optimization—by identifying a small set of Pareto-optimal solutions in subsets of the dimensions. We prove that DASSO is asymptotically convergent to the optimal solution, while emphasizing that its most important feature is the ability to find good solutions quickly in problems beyond the capability of other methods.
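The Pareto-screening step mentioned in the abstract (finding promising candidates without scoring all of them) can be sketched generically as below. This is not the authors' DASSO code: it is a plain non-dominated filter over two assumed acquisition criteria, a low posterior mean (for minimisation) and a high posterior standard deviation.

```python
import numpy as np

def pareto_front(mean: np.ndarray, std: np.ndarray) -> np.ndarray:
    """Return indices of candidates not dominated in (low predicted mean,
    high posterior std); only these need further acquisition evaluation."""
    order = np.argsort(mean)            # ascending predicted objective
    best_std = -np.inf
    keep = []
    for i in order:
        if std[i] > best_std:           # strictly better exploration value so far
            keep.append(i)
            best_std = std[i]
    return np.array(keep)

# Example: thousands of candidates are reduced to a handful of Pareto-optimal ones.
rng = np.random.default_rng(1)
idx = pareto_front(rng.normal(size=10_000), rng.uniform(size=10_000))
```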
{"title":"Dice and slice simulation optimization for high-dimensional discrete problems","authors":"Harun Avci , Barry L. Nelson , Eunhye Song , Andreas Wächter","doi":"10.1016/j.ejor.2026.01.005","DOIUrl":"10.1016/j.ejor.2026.01.005","url":null,"abstract":"<div><div>Although much progress has been made in simulation optimization, problems involving computationally expensive simulations having high-dimensional, <em>discrete</em> decision-variable spaces have been stubbornly resistant to solution. For this class of problems we propose Dice and Slice Simulation Optimization (DASSO). DASSO is a form of Bayesian optimization that represents the prior on the objective function implied by the simulation as a sum of low-dimensional Gaussian Markov random fields. This prior is consistent with the full-dimensional objective function, rather than assuming that it is actually separable. By working iteratively between posteriors on these low-dimensional “dice” and a full-dimensional “slice” of the decision-variable space, DASSO makes rapid progress with little algorithm overhead even on problems with more than a trillion feasible solutions. We achieve further computational savings by showing that we can find the best solution to simulate on each iteration without having to assess the potential of all solutions—as is traditionally done in Bayesian optimization—by identifying a small set of Pareto-optimal solutions in subsets of the dimensions. We prove that DASSO is asymptotically convergent to the optimal solution, while emphasizing that its most important feature is the ability to find good solutions quickly in problems beyond the capability of other methods.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"330 3","pages":"Pages 850-863"},"PeriodicalIF":6.0,"publicationDate":"2026-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stochastic dynamic lot-sizing with supplier-driven substitution and service level constraints
Pub Date: 2026-01-09 | DOI: 10.1016/j.ejor.2026.01.007 | EJOR 330(3): 864–884
Narges Sereshti , Merve Bodur , James R. Luedtke
We consider a multi-stage stochastic multi-product lot-sizing problem with service level constraints and supplier-driven product substitution. A firm has the option to meet demand from substitutable products at a cost. Considering the uncertainty in future demands, the firm wishes to make ordering decisions in every period such that the probability that all demands can be met in the next period meets or exceeds a minimum service level. We propose a rolling-horizon policy in which a two-stage joint chance-constrained stochastic program is solved to make decisions in each time period. We demonstrate how to effectively solve this formulation. In addition, we propose two policies based on deterministic approximations. On test problems with a downward substitution structure, we show that the proposed chance-constraint policy can achieve the service levels more reliably and at a lower cost. We also explore the value of product substitution in this model, demonstrating that the substitution option allows achieving service levels while reducing costs by 7% to 25% in our experiments, and that the majority of the benefit can be obtained with limited levels of substitution allowed.
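To make the joint chance constraint concrete, the sketch below is a hypothetical scenario-based check, not the paper's two-stage formulation: it estimates, over sampled demand scenarios, the probability that all next-period demands can be covered when leftover higher-grade stock may substitute downward; the cascading-substitution rule and all names are assumptions.

```python
import numpy as np

def joint_service_ok(stock, demand_scenarios, target=0.95):
    """stock: per-product starting inventory, indexed from highest grade (0) down.
    demand_scenarios: S x P array of sampled next-period demands.
    A scenario succeeds if every demand is met, with surplus higher-grade units
    allowed to cover lower-grade shortfalls (downward substitution).
    Returns True if the empirical success probability reaches the target level."""
    successes = 0
    for d in np.atleast_2d(demand_scenarios):
        carry = 0.0                      # spare higher-grade units cascading down
        ok = True
        for p, s in enumerate(stock):
            available = float(s) + carry
            if available + 1e-9 < d[p]:
                ok = False
                break
            carry = available - d[p]
        successes += ok
    return successes / len(demand_scenarios) >= target
```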
{"title":"Stochastic dynamic lot-sizing with supplier-driven substitution and service level constraints","authors":"Narges Sereshti , Merve Bodur , James R. Luedtke","doi":"10.1016/j.ejor.2026.01.007","DOIUrl":"10.1016/j.ejor.2026.01.007","url":null,"abstract":"<div><div>We consider a multi-stage stochastic multi-product lot-sizing problem with service level constraints and supplier-driven product substitution. A firm has the option to meet demand from substitutable products at a cost. Considering the uncertainty in future demands, the firm wishes to make ordering decisions in every period such that the probability that all demands can be met in the next period meets or exceeds a minimum service level. We propose a rolling-horizon policy in which a two-stage joint chance-constrained stochastic program is solved to make decisions in each time period. We demonstrate how to effectively solve this formulation. In addition, we propose two policies based on deterministic approximations. On test problems with a downward substitution structure, we show that the proposed chance-constraint policy can achieve the service levels more reliably and at a lower cost. We also explore the value of product substitution in this model, demonstrating that the substitution option allows achieving service levels while reducing costs by 7% to 25% in our experiments, and that the majority of the benefit can be obtained with limited levels of substitution allowed.</div></div>","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"330 3","pages":"Pages 864-884"},"PeriodicalIF":6.0,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded
Pub Date: 2026-01-09 | DOI: 10.1016/j.ejor.2025.12.046
Márton Benedek, Péter Biró, Gergely Csáji, Matthew Johnson, Daniël Paulusma, Xin Ye
In kidney exchange programmes, patients with incompatible donors obtain kidneys via cycles of transplants. Countries may merge their national patient-donor pools to form international programmes. To ensure fairness, a credit-based system is used: a cooperative game-theoretic solution concept prescribes a “fair” initial allocation, which is adjusted with accumulated credits to form a target allocation. The objective is to maximize the number of transplants while staying close to the target allocation. When only 2-cycles are permitted, a solution that lexicographically minimizes deviations from the target can be found in polynomial time. However, even the problem of maximizing the number of transplants is NP-hard for larger upper bounds on cycle length. This latter problem is tractable when cycle lengths are not bounded. We formalize this setting via a new class of cooperative games called partitioned permutation games, and prove that computing an optimal solution that is lexicographically closest to the target allocation is NP-hard. We give a randomized XP-time algorithm to solve this problem exactly. We present an experimental study, simulating programmes with up to 10 countries. Allowing unbounded cycle lengths increases the number of transplants by up to 46% compared to 2-cycles. Using credits and selecting lexicographically closest solutions yields low total relative deviation (below 2% for all fairness notions). Among the seven fairness notions tested, a modified Banzhaf value performs best in balancing fairness and efficiency, achieving average deviations below 0.65%. Lexicographic minimization from the target allocation leads to significantly (36–56%) smaller average deviations than minimizing the maximum difference only.
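A minimal sketch of the lexicographic criterion described above (illustrative only; the paper's country-allocation bookkeeping is omitted): deviations from the target allocation are sorted in non-increasing order and compared componentwise, so the solution whose worst deviation, then second-worst, and so on, is smaller wins.

```python
def lex_key(allocation, target):
    """Key for lexicographic minimisation of deviations from the target:
    absolute per-country deviations sorted from largest to smallest."""
    return sorted((abs(a - t) for a, t in zip(allocation, target)), reverse=True)

# Example: [11, 7, 6] has worst deviation 1 versus 2 for [12, 8, 6],
# so it is lexicographically closer to the target allocation.
target = [10, 8, 6]
closest = min([[12, 8, 6], [11, 7, 6]], key=lambda x: lex_key(x, target))
```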
{"title":"Computing Balanced Solutions for Large International Kidney Exchange Schemes When Cycle Length Is Unbounded","authors":"Márton Benedek, Péter Biró, Gergely Csáji, Matthew Johnson, Daniël Paulusma, Xin Ye","doi":"10.1016/j.ejor.2025.12.046","DOIUrl":"https://doi.org/10.1016/j.ejor.2025.12.046","url":null,"abstract":"In kidney exchange programmes, patients with incompatible donors obtain kidneys via cycles of transplants. Countries may merge their national patient-donor pools to form international programmes. To ensure fairness, a credit-based system is used: a cooperative game-theoretic solution concept prescribes a “fair” initial allocation, which is adjusted with accumulated credits to form a target allocation. The objective is to maximize the number of transplants while staying close to the target allocation. When only 2-cycles are permitted, a solution that lexicographically minimizes deviations from the target can be found in polynomial time. However, even the problem of maximizing the number of transplants is <ce:sans-serif>NP</ce:sans-serif>-hard for larger upper bounds on cycle length. This latter problem is tractable when cycle lengths are not bounded. We formalize this setting via a new class of cooperative games called <ce:italic>partitioned permutation games</ce:italic>, and prove that computing an optimal solution that is lexicographically closest to the target allocation is <ce:sans-serif>NP</ce:sans-serif>-hard. We give a randomized XP-time algorithm for solve this problem exactly. We present an experimental study, simulating programmes with up to 10 countries. Allowing unbounded cycle lengths increases the number of transplants by up to 46% compared to 2-cycles. Using credits and selecting lexicographically closest solutions yields low total relative deviation (below 2% for all fairness notions). Among the seven fairness notions tested, a modified Banzhaf value performs best in balancing fairness and efficiency, achieving average deviations below 0.65%. Lexicographic minimization from the target allocation leads to significantly (<mml:math altimg=\"si32.svg\"><mml:mrow><mml:mn>36</mml:mn><mml:mspace width=\"0.16em\"></mml:mspace><mml:mo linebreak=\"goodbreak\">−</mml:mo><mml:mspace width=\"0.16em\"></mml:mspace><mml:mn>56</mml:mn></mml:mrow></mml:math>%) smaller average deviations than minimizing maximum difference only.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"95 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Lagrangian Relaxation-Based Heuristic Algorithm for Multiple Agile Earth Observation Satellite Scheduling with Time-Dependent Constraint
Pub Date: 2026-01-09 | DOI: 10.1016/j.ejor.2025.12.047
Feiran Wang, Yingwu Chen, Lei He, Jiawei Chen, Shilong Xu, Haiwu Huang
The Multi-Agile Earth Observation Satellite Scheduling Problem (MAEOSSP) is a complex NP-hard optimization problem, characterized by resource constraints and highly nonlinear, time-dependent constraints. To address this challenge, we propose a Lagrangian Relaxation-based Heuristic (LRD-H) algorithm, a hybrid approach that integrates mathematical decomposition with tailored heuristics. The framework first employs Lagrangian relaxation to decompose the MAEOSSP into independent single-satellite subproblems, which are solved by an efficient heuristic. Subsequently, it leverages dual information to construct high-quality feasible solutions, which are then enhanced by an iterative improvement procedure. Additionally, we provide a theoretical analysis demonstrating that the expected quality of our algorithm’s solutions monotonically improves with the computational effort allocated to the subproblem solver. Finally, extensive computational experiments show that LRD-H provides strong dual values for quality estimation and achieves significantly better solution quality than state-of-the-art benchmarks, especially on large-scale scenarios. A detailed ablation study empirically validates the critical role of our dual-information-guided solution construction and priority-aware improvement heuristics.
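The decomposition principle named above can be illustrated with a generic subgradient update of the Lagrange multipliers attached to the coupling (shared-resource) constraints; the function name and step-size rule below are illustrative assumptions, not the LRD-H implementation.

```python
import numpy as np

def subgradient_step(multipliers, resource_usage, capacity, step):
    """One dual update for relaxed coupling constraints usage <= capacity.
    The subgradient is the constraint violation of the subproblem solutions;
    multipliers stay non-negative (projection onto the dual feasible set)."""
    violation = resource_usage - capacity
    return np.maximum(multipliers + step * violation, 0.0)

# Typical loop: solve one scheduling subproblem per satellite with the current
# multipliers, aggregate their resource usage, then update the duals.
lam = np.zeros(3)
lam = subgradient_step(lam, resource_usage=np.array([5.0, 2.0, 7.0]),
                       capacity=np.array([4.0, 4.0, 4.0]), step=0.1)
```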
{"title":"A Lagrangian Relaxation-Based Heuristic Algorithm for Multiple Agile Earth Observation Satellite Scheduling with Time-Dependent Constraint","authors":"Feiran Wang, Yingwu Chen, Lei He, Jiawei Chen, Shilong Xu, Haiwu Huang","doi":"10.1016/j.ejor.2025.12.047","DOIUrl":"https://doi.org/10.1016/j.ejor.2025.12.047","url":null,"abstract":"The Multi-Agile Earth Observation Satellite Scheduling Problem (MAEOSSP) is a complex NP-hard optimization problem, characterized by resource constraints and highly nonlinear, time-dependent constraints. To address this challenge, we propose a Lagrangian Relaxation-based Heuristic (LRD-H) algorithm, a hybrid approach that integrates mathematical decomposition with tailored heuristics. The framework first employs Lagrangian Relaxation to decompose the MAEOSSP into independent single-satellite subproblems, which are solved by an efficient heuristic. Subsequently, it leverages dual information to construct high-quality feasible solutions, which are then enhanced by an iterative improvement procedure. Additionally, we provide a theoretical analysis demonstrating that the expected quality of our algorithm’s solutions monotonically improves with the computational effort allocated to the subproblem solver. Finally, extensive computational experiments show that LRD-H provides strong dual values for quality estimation and achieves significantly better solution quality compared to state-of-the-art benchmarks, especially on large-scale scenarios. Detailed ablation study empirically validates the critical role of our dual-information-guided solution construction and priority-aware improvement heuristics.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"390 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysing the interactions between demand side and supply side investment decisions in an oligopolistic electricity market using a stochastic equilibrium model
Pub Date: 2026-01-08 | DOI: 10.1016/j.ejor.2026.01.006
Mel T. Devine, Valentin Bertsch
Electricity consumers worldwide are investing in self-sufficiency technologies like solar photovoltaics and battery storage, often in markets dominated by oligopolistic generating firms that also consider generation investments. Previous models in the literature have not considered investment decisions on both the demand and the supply sides, nor the interactions between them. In this work, we study the interactions between investment decisions on both sides, and we investigate how price-making behaviour on the supply side affects these interactions. We introduce a novel stochastic equilibrium problem to model several players in an oligopolistic electricity market. On the supply side, we consider generating firms that make operational and investment decisions. On the demand side, we consider both industrial and residential consumers. This model enables us to examine how market power, feed-in premiums, and consumer prosumption influence self-sufficiency investments, consumer costs, and generation portfolios. It also allows us to explore how the interactions among these factors affect outcomes such as wholesale prices and carbon emissions. We apply the model to a case study of a stylised Irish electricity system in 2030. Our results demonstrate that price-making on the supply side increases investment in self-sufficiency on the demand side, which in turn reduces carbon emissions and lessens the increase in prices resulting from the presence of market power. We also find that both market power and self-sufficiency alter the investment decisions made by generation firms. Counter-intuitively, we also observe that the absence of a feed-in premium increases investment in solar generation on the demand side.
{"title":"Analysing the interactions between demand side and supply side investment decisions in an oligopolistic electricity market using a stochastic equilibrium model","authors":"Mel T. Devine, Valentin Bertsch","doi":"10.1016/j.ejor.2026.01.006","DOIUrl":"https://doi.org/10.1016/j.ejor.2026.01.006","url":null,"abstract":"Electricity consumers worldwide are investing in self-sufficiency technologies like solar photovoltaics and battery storage, often in markets dominated by oligopolistic generating firms that also consider generation investments. Previous models in the literature have not considered investment decisions on both the demand and the supply sides, nor the interactions between them. In this work, we study the interactions between investment decisions on both sides, and we investigate how price-making behaviour on the supply side affects these interactions. We introduce a novel stochastic equilibrium problem to model several players in an oligopolistic electricity market. On the supply side, we consider generating firms that make operational and investment decisions. On the demand side, we consider both industrial and residential consumers. This model enables us to examine how market power, feed-in premiums, and consumer prosumption influence self-sufficiency investments, consumer costs, and generation portfolios. It also allows us to explore how the interactions among these factors affect outcomes such as wholesale prices and carbon emissions. We apply the model to a case study of a stylised Irish electricity system in 2030. Our results demonstrate that price-making on the supply side increases investment in self-sufficiency on the demand side, which in turn reduces carbon emissions and lessens the increase in prices resulting from the presence of market power. We also find that both market power and self-sufficiency alter the investment decisions made by generation firms. Counter-intuitively, we also observe that the absence of a feed-in premium increases investment in solar generation on the demand side.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"46 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy Concerns and Data Rights Regulation in Digital Markets
Pub Date: 2026-01-08 | DOI: 10.1016/j.ejor.2026.01.008
Yashuang Wei, Guofang Nan, Hubert Pun
In response to rising privacy concerns from potential data misuse fueled by digital development, policymakers have implemented various privacy regulation policies. These regulations are progressively enhancing consumers’ control over their personal data, making it commonplace for them to make informed decisions about data sharing. Using an analytical framework, we examine how consumers’ data control rights shape consumer-firm interactions and decisions. Interestingly, we find that the data rights regulation consistently motivates firms to set higher product prices. Moreover, we show that this regulation for consumers can confer benefits onto firms in both monopoly and duopoly settings. In a duopoly market, data rights regulation may counter the Matthew effect by redistributing competitive advantages from superior to inferior firms, reducing monopolization risks. Unfortunately, our findings indicate that granting consumers data control rights can reduce their surplus, as they may have to pay higher prices for the privacy security these rights provide.
{"title":"Privacy Concerns and Data Rights Regulation in Digital Markets","authors":"Yashuang Wei, Guofang Nan, Hubert Pun","doi":"10.1016/j.ejor.2026.01.008","DOIUrl":"https://doi.org/10.1016/j.ejor.2026.01.008","url":null,"abstract":"In response to rising privacy concerns from potential data misuse fueled by digital development, policymakers have implemented various privacy regulation policies. These regulations are progressively enhancing consumers’ control over their personal data, making it commonplace for them to make informed decisions about data sharing. Using an analytical framework, we examine how consumers’ data control rights shape consumer-firm interactions and decisions. Interestingly, we find that the data rights regulation consistently motivates firms to set higher product prices. Moreover, we show that this regulation for consumers can confer benefits onto firms in both monopoly and duopoly settings. In a duopoly market, data rights regulation may counter the Matthew effect by redistributing competitive advantages from superior to inferior firms, reducing monopolization risks. Unfortunately, our findings indicate that granting consumers data control rights can reduce their surplus, as they may have to pay higher prices for the privacy security these rights provide.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"7 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Soft decision trees for survival analysis
Pub Date: 2026-01-08 | DOI: 10.1016/j.ejor.2026.01.004
Antonio Consolo, Edoardo Amaldi, Emilio Carrizosa
Decision trees are popular in survival analysis for their interpretability and ability to model complex relationships. Survival trees, which predict the timing of singular events using censored historical data, are typically built through heuristic approaches. Recently, there has been growing interest in globally optimized trees, where the overall tree is trained by minimizing the error function over all its parameters. We propose a new soft survival tree model (SST), with a soft splitting rule at each branch node, trained via a nonlinear optimization formulation amenable to decomposition. Since SSTs provide, for every input vector, a specific survival function associated with a single leaf node, they satisfy the conditional computation property and inherit the related benefits. SST and the training formulation combine flexibility with interpretability: any smooth survival function (parametric, semiparametric, or nonparametric) estimated through maximum likelihood can be used, and each leaf node of an SST yields a cluster of distinct survival functions associated with the data points routed to it. Numerical experiments on 15 well-known datasets show that SSTs, with parametric and spline-based semiparametric survival functions, trained using an adaptation of the node-based decomposition algorithm proposed by Consolo et al. (2024) for soft regression trees, outperform three benchmark survival trees in terms of four widely-used discrimination and calibration measures. SSTs can also be extended to consider group fairness.
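The soft splitting rule at each branch node can be sketched as below (a hypothetical fragment, not the SST training code): an input is routed left at each branch with a logistic probability of a splitting hyperplane, and the leaf reached with the highest routing probability supplies the predicted survival function.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def route_probabilities(x, weights, biases):
    """Probability of reaching each leaf of a depth-2 soft tree.
    weights/biases parameterise the three branch nodes: root, left, right."""
    p_root_left = sigmoid(weights[0] @ x + biases[0])
    p_left_left = sigmoid(weights[1] @ x + biases[1])
    p_right_left = sigmoid(weights[2] @ x + biases[2])
    return np.array([
        p_root_left * p_left_left,               # leaf 1
        p_root_left * (1 - p_left_left),         # leaf 2
        (1 - p_root_left) * p_right_left,        # leaf 3
        (1 - p_root_left) * (1 - p_right_left),  # leaf 4
    ])

# The input is assigned to its single most probable leaf, whose estimated
# survival function S_leaf(t) is returned as the prediction.
probs = route_probabilities(np.array([0.2, -1.0]),
                            weights=np.ones((3, 2)), biases=np.zeros(3))
leaf = int(np.argmax(probs))
```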
{"title":"Soft decision trees for survival analysis","authors":"Antonio Consolo, Edoardo Amaldi, Emilio Carrizosa","doi":"10.1016/j.ejor.2026.01.004","DOIUrl":"https://doi.org/10.1016/j.ejor.2026.01.004","url":null,"abstract":"Decision trees are popular in survival analysis for their interpretability and ability to model complex relationships. Survival trees, which predict the timing of singular events using censored historical data, are typically built through heuristic approaches. Recently, there has been growing interest in globally optimized trees, where the overall tree is trained by minimizing the error function over all its parameters. We propose a new soft survival tree model (SST), with a soft splitting rule at each branch node, trained via a nonlinear optimization formulation amenable to decomposition. Since SSTs provide for every input vector a specific survival function associated to a single leaf node, they satisfy the conditional computation property and inherit the related benefits. SST and the training formulation combine flexibility with interpretability: any smooth survival function (parametric, semiparametric, or nonparametric) estimated through maximum likelihood can be used, and each leaf node of an SST yields a cluster of distinct survival functions which are associated to the data points routed to it. Numerical experiments on 15 well-known datasets show that SSTs, with parametric and spline-based semiparametric survival functions, trained using an adaptation of the node-based decomposition algorithm proposed by Consolo et al. (2024) for soft regression trees, outperform three benchmark survival trees in terms of four widely-used discrimination and calibration measures. SSTs can also be extended to consider group fairness.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"1 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ordering policies for multi-item inventory systems with correlated demands
Pub Date: 2026-01-07 | DOI: 10.1016/j.ejor.2025.12.042
Zhaleh Rahimi, Douglas G. Down, Na Li, Donald M. Arnold
We investigate optimal ordering policies for a multi-item periodic-review inventory system, considering demand correlations and historical data for the products involved. We extend inventory models by transitioning from an autoregressive moving average (ARMA) demand process to a vector autoregressive moving average (VARMA) framework, explicitly characterizing optimal ordering policies when there is both autocorrelation and cross-correlation among multiple items. Through experimental studies, we evaluate inventory costs and cost improvements compared to multi-item ordering policies where demands are assumed to be independent under different degrees of correlation, noise levels, and training data window sizes. The results show that the framework effectively reduces inventory costs, particularly for products with moderate to high dependence. Cost reductions can reach up to 25% for moderate and up to 65% for strong dependence. We also apply our findings to real-world data to optimize inventory policies for immunoglobulin sub-products, intravenous (IVIg) and subcutaneous (SCIg), demonstrating cost improvements using the proposed policy. Furthermore, an empirical study analyzing a large sales dataset reinforces the applicability of our approach.
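A minimal sketch of how cross-correlated demand forecasts could feed a multi-item order-up-to decision: this is an illustrative VAR(1) simplification of the VARMA setting described above, with assumed names and a normal-theory safety stock, not the authors' policy.

```python
import numpy as np
from scipy.stats import norm

def var1_order_up_to(last_demand, mu, A, sigma_resid, service_level=0.95):
    """One-step VAR(1) forecast d_{t+1} = mu + A (d_t - mu) + e_{t+1}, plus a
    per-item normal safety stock at the requested cycle service level."""
    forecast = mu + A @ (last_demand - mu)
    z = norm.ppf(service_level)
    return forecast + z * sigma_resid      # vector of order-up-to levels

# Example with two cross-correlated items (e.g., IVIg and SCIg).
levels = var1_order_up_to(last_demand=np.array([120.0, 40.0]),
                          mu=np.array([100.0, 50.0]),
                          A=np.array([[0.5, 0.2], [0.1, 0.4]]),
                          sigma_resid=np.array([10.0, 5.0]))
```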
{"title":"Ordering policies for multi-item inventory systems with correlated demands","authors":"Zhaleh Rahimi, Douglas G. Down, Na Li, Donald M. Arnold","doi":"10.1016/j.ejor.2025.12.042","DOIUrl":"https://doi.org/10.1016/j.ejor.2025.12.042","url":null,"abstract":"We investigate optimal ordering policies for a multi-item periodic-review inventory system, considering demand correlations and historical data for the products involved. We extend inventory models by transitioning from an autoregressive moving average (ARMA) demand process to a vector autoregressive moving average (VARMA) framework, explicitly characterizing optimal ordering policies when there is both autocorrelation and cross-correlation among multiple items. Through experimental studies, we evaluate inventory costs and cost improvements compared to multi-item ordering policies where demands are assumed to be independent under different degrees of correlation, noise levels, and training data window sizes. The results show that the framework effectively reduces inventory costs, particularly for products with moderate to high dependence. Cost reductions can reach up to 25% for moderate and up to 65% for strong dependence. We also apply our findings to real-world data to optimize inventory policies for immunoglobulin sub-products, intravenous (IVIg) and subcutaneous (SCIg), demonstrating cost improvements using the proposed policy. Furthermore, an empirical study analyzing a large sales dataset reinforces the applicability of our approach.","PeriodicalId":55161,"journal":{"name":"European Journal of Operational Research","volume":"95 1","pages":""},"PeriodicalIF":6.4,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145956863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}