Let C be convex, compact, with nonempty interior, and let h be Legendre with domain C, continuous on C. We prove that h is Bregman if and only if it is strictly convex on C and C is a polytope. This provides insight into the sequential convergence of many Bregman-divergence-based algorithms: abstract compatibility conditions between the Bregman and Euclidean topologies may equivalently be replaced by explicit conditions on h and C. It also emphasizes that a general convergence theory for these methods (beyond polyhedral domains) would require refinements beyond Bregman's conditions.
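For context (standard notation, not quoted from the abstract), the Bregman divergence generated by a function h that is differentiable on the interior of its domain is the object underlying these algorithms:

```latex
% Standard definition of the Bregman divergence generated by h, assumed
% differentiable on the interior of C: the gap between h(x) and the
% linearization of h at y.
\[
  D_h(x, y) \;=\; h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle ,
  \qquad x \in C,\ y \in \operatorname{int} C .
\]
```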
A phase-type distributed random variable represents the time to absorption of a Markov chain with an absorbing state. In this letter, we show that the alias method can be modified to efficiently generate phase-type distributed random variables. Both initialisation and generation are fast, at the cost of larger memory requirements. Numerical experiments show that the proposed method significantly reduces the computation time compared to direct simulation.
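As background, here is a minimal sketch of the classical alias method (Vose's variant) for O(1) sampling from a discrete distribution, which is the building block the letter adapts; this is the generic method, not the authors' phase-type-specific modification, and the function names are illustrative only.

```python
import random

def build_alias_table(probs):
    """Vose's alias method: O(n) preprocessing for O(1) discrete sampling.

    probs: list of probabilities summing to 1.
    Returns (prob, alias) tables of the same length as probs.
    """
    n = len(probs)
    scaled = [p * n for p in probs]
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l          # s keeps part of its mass, rest aliased to l
        scaled[l] = (scaled[l] + scaled[s]) - 1.0  # remaining mass of l
        (small if scaled[l] < 1.0 else large).append(l)
    for i in large + small:                        # leftovers are numerically ~1
        prob[i] = 1.0
    return prob, alias

def sample_alias(prob, alias):
    """Draw one index in O(1): pick a column uniformly, then flip a biased coin."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]
```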
This paper investigates non-cooperative behavior in single-machine scheduling and aims to implement the equal gain splitting (EGS) rule via a suitably designed bargaining game. In this game, randomly chosen proposers make offers that coalition members can accept or reject, with the goal of reaching subgame perfect equilibria that align with the EGS rule. Additionally, we extend the game to implement the weighted gain splitting (WGS) rule, which generalizes the EGS rule by taking players' weights into account.
Smoothing accelerated gradient methods achieve faster convergence rates than the subgradient method for some nonsmooth convex optimization problems. However, Nesterov's extrapolation may require gradients at infeasible points, so these methods cannot be applied to some structural optimization problems. We introduce a variant of smoothing accelerated projected gradient methods in which every iterate is feasible. The convergence rate is obtained via a Lyapunov function argument. We conduct a numerical experiment on the robust compliance optimization of a truss structure.
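Below is a minimal sketch of one possible smoothing accelerated projected gradient loop in which the extrapolated point is projected back onto the feasible set, so the gradient oracle is never queried at an infeasible point. The step-size and smoothing schedules are assumptions for illustration, not necessarily the update rule proposed in the paper.

```python
import numpy as np

def smoothing_apg(grad_smooth, project, x0, mu0=1.0, L0=1.0, iters=200):
    """Illustrative smoothing accelerated projected gradient sketch.

    grad_smooth(z, mu): gradient of the smooth approximation f_mu at z
    project(z):         Euclidean projection onto the feasible set
    Every point passed to grad_smooth is kept feasible by projecting the
    extrapolated point as well.
    """
    x_prev = x = project(np.asarray(x0, dtype=float))
    for k in range(1, iters + 1):
        mu = mu0 / k                 # shrink the smoothing parameter
        L = L0 / mu                  # typical Lipschitz scaling of grad f_mu
        beta = (k - 1) / (k + 2)     # Nesterov-style extrapolation weight
        y = project(x + beta * (x - x_prev))          # feasible extrapolated point
        x_prev, x = x, project(y - grad_smooth(y, mu) / L)
    return x
```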
To reduce inefficient truck movements, this research investigates the potential of freight consolidation through carrier collaboration. Considering the financial benefits of consolidation as well as the additional costs arising from collaboration, we introduce a cooperative game in which several carriers can collaborate by pooling transportation capacities. Although the core of this game can be empty, we provide three conditions under which core non-emptiness is guaranteed. Numerical experiments moreover indicate that the core is often non-empty even outside these conditions.
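To make the core-non-emptiness question concrete, here is a hedged sketch that checks non-emptiness of the core of a small transferable-utility game by solving a single linear feasibility problem; the game values below are made up and the function core_is_nonempty is purely illustrative, not the carrier-collaboration game from the paper.

```python
from itertools import combinations
from scipy.optimize import linprog

def core_is_nonempty(n, v):
    """Check core non-emptiness of an n-player TU game via one LP.

    v maps frozenset coalitions to their value; the core is
    { x : sum(x) = v(N) and sum_{i in S} x_i >= v(S) for every S }.
    Generic illustration, not the carrier-collaboration game itself.
    """
    players = range(n)
    A_ub, b_ub = [], []                      # encode x(S) >= v(S) as -x(S) <= -v(S)
    for r in range(1, n):
        for S in combinations(players, r):
            A_ub.append([-1.0 if i in S else 0.0 for i in players])
            b_ub.append(-v[frozenset(S)])
    A_eq, b_eq = [[1.0] * n], [v[frozenset(players)]]   # efficiency: x(N) = v(N)
    res = linprog(c=[0.0] * n, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] * n)
    return res.status == 0                   # LP feasible  <=>  core non-empty

# Three-player majority game (made-up values): every two-player coalition and
# the grand coalition are worth 1, singletons are worth 0 -> empty core.
v = {frozenset(S): w for S, w in [((0,), 0), ((1,), 0), ((2,), 0),
                                  ((0, 1), 1), ((0, 2), 1), ((1, 2), 1),
                                  ((0, 1, 2), 1)]}
print(core_is_nonempty(3, v))   # False
```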
We consider cost allocation for set covering problems. We allocate as much cost to the elements (players) as possible without violating the group rationality condition, and such that the excess vector is lexicographically maximized. The resulting allocation, the happy nucleolus, has several nice properties. In particular, we show that it can be computed by considering only a small subset of “simple” coalitions. While computing the nucleolus for set covering is NP-hard, our results imply that the happy nucleolus can be computed in polynomial time.
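For orientation, under one common cost-game convention (an assumption about notation, not quoted from the paper), group rationality and the excess of a coalition S at an allocation x read as follows:

```latex
% One common convention for cost games: c(S) is the cost associated with
% coalition S and x is the vector of cost shares; signs may differ from
% the paper's convention.
\[
  \sum_{i \in S} x_i \;\le\; c(S) \ \text{ for all coalitions } S
  \quad\text{(group rationality)},
  \qquad
  e(S, x) \;=\; c(S) - \sum_{i \in S} x_i
  \quad\text{(excess of } S \text{ at } x\text{)}.
\]
```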
The change-making problem (CMP), introduced in 1970, is a classic problem in combinatorial optimisation. It was proven to be NP-hard in 1975, but it can be solved in pseudo-polynomial time by dynamic programming. In 1999, Heipcke presented a variant of the CMP which, at first glance, looks harder than the standard version. We show that, in fact, her variant can be solved in polynomial time.
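For reference, here is a sketch of the textbook pseudo-polynomial dynamic program for the standard change-making problem mentioned above (minimum number of coins reaching a target amount); Heipcke's variant is not reproduced here.

```python
def min_coins(denominations, target):
    """Textbook pseudo-polynomial DP for the standard change-making problem:
    fewest coins (unbounded supply) summing exactly to `target`, or None if
    no combination exists.  Runs in O(len(denominations) * target).
    """
    INF = float("inf")
    best = [0] + [INF] * target              # best[a] = fewest coins making amount a
    for a in range(1, target + 1):
        for d in denominations:
            if d <= a and best[a - d] + 1 < best[a]:
                best[a] = best[a - d] + 1
    return None if best[target] == INF else best[target]

print(min_coins([1, 5, 12], 16))   # 4 coins (5+5+5+1); greedy 12+1+1+1+1 would use 5
```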
We consider a lift-and-project approach for the cardinality-constrained Boolean quadric polytope. We derive upper bounds on the distance between the polytope and its linear approximation. Unsurprisingly, the distance converges to 0 as the number of variables becomes sufficiently large.
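For reference, the Boolean quadric polytope in standard notation is recalled below; the cardinality restriction is sketched as a bound on the number of ones, which may differ from the exact formulation studied in the paper.

```latex
% Standard Boolean quadric polytope; the cardinality restriction shown
% (sum_i x_i <= k) is an assumed form of the constraint.
\[
  \mathrm{BQP}_n \;=\; \operatorname{conv}\!\left\{ (x, y) \in \{0,1\}^{n} \times \{0,1\}^{\binom{n}{2}}
    \;:\; y_{ij} = x_i x_j,\ 1 \le i < j \le n \right\},
  \qquad
  \text{cardinality-constrained case: add } \textstyle\sum_{i=1}^{n} x_i \le k .
\]
```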
The principle of optimality is a fundamental aspect of dynamic programming: it states that the optimal solution to a dynamic optimization problem can be found by combining the optimal solutions to its sub-problems. While the principle is broadly applicable, it is often taught only for problems with finite or countable state spaces in order to sidestep measure-theoretic complexities. As a consequence, the version that is taught cannot be applied to classic models, such as inventory management and dynamic pricing models, that have continuous state spaces, and students may not be aware of the challenges involved in studying dynamic programming models with general state spaces. To address this, we provide conditions, together with a self-contained and simple proof, that establish when the principle of optimality for discounted dynamic programming is valid. These conditions shed light on the difficulties that may arise in the general state space case. We provide examples from the literature, including the relatively involved case of universally measurable dynamic programming and the simple case of finite dynamic programming, in which our main result can be applied to show that the principle of optimality holds.
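For concreteness, in standard notation (not necessarily the paper's), the principle of optimality for discounted dynamic programming is usually expressed through the Bellman optimality equation; measurability of the value function appearing on the right-hand side is exactly where general state spaces become delicate.

```latex
% Discounted Bellman optimality equation: state s in S, admissible actions A(s),
% one-stage reward r, discount factor 0 < beta < 1, transition kernel P.
\[
  V^{*}(s) \;=\; \sup_{a \in A(s)} \left\{ r(s, a)
    \;+\; \beta \int_{S} V^{*}(s') \, P(\mathrm{d}s' \mid s, a) \right\} .
\]
```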