
Given its wide spectrum of applications, the classical problem of all-terminal network reliability evaluation remains highly relevant in network design. The associated optimization problem—to find a network with the best possible reliability under multiple constraints—presents an even more complex challenge, which has been addressed in the scientific literature but usually under strong assumptions on failure probabilities and/or the network topology. In this work, we propose a novel reliability optimization framework for network design with failure probabilities that are independent but not necessarily identical. We leverage the linear-time evaluation procedure for network reliability in series-parallel graphs of Satyanarayana and Wood (1985) to formulate the reliability optimization problem as a mixed-integer nonlinear optimization problem. To solve this nonconvex problem, we use classical convex envelopes of bilinear functions, introduce custom cutting planes, and propose a new family of convex envelopes for expressions that appear in the evaluation of network reliability. Furthermore, we exploit the refinements produced by spatial branch-and-bound to locally strengthen our convex relaxations. Our experiments show that, using our framework, one can efficiently obtain optimal solutions to challenging instances of this problem.
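As a minimal illustration of the evaluation primitive the abstract builds on: in a series-parallel graph with independent, heterogeneous edge operating probabilities, reliability can be computed by repeatedly collapsing series and parallel pairs, in the spirit of the Satyanarayana–Wood reduction scheme. The sketch below (function names `series`/`parallel` are illustrative, not from the paper) shows only the two composition rules, not the full reduction algorithm.

```python
def series(p1, p2):
    # Both edges must operate for the connection to operate.
    return p1 * p2

def parallel(p1, p2):
    # The connection fails only if both edges fail.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# Example: two parallel paths between the terminals, each a series of two
# edges with non-identical operating probabilities.
path_a = series(0.9, 0.8)    # 0.72
path_b = series(0.95, 0.7)   # 0.665
reliability = parallel(path_a, path_b)  # 1 - 0.28 * 0.335 = 0.9062
```

Each reduction eliminates one edge, which is what yields the linear-time evaluation for series-parallel topologies.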

Convexification based on convex envelopes is ubiquitous in the nonlinear optimization literature. Thanks to decades of effort by the optimization community, we can compute the convex envelopes of a considerable number of functions that appear in practice, and thus obtain tight and tractable approximations to challenging problems. We contribute to this line of work by considering a family of functions that, to the best of our knowledge, has not been considered before in the literature. We call this family *ray-concave* functions. We give sufficient conditions that allow us to easily compute closed-form expressions for the convex envelope of ray-concave functions over arbitrary polytopes. With these tools, we provide new perspectives on previously known convex envelopes and derive a previously unknown convex envelope for a function that arises in probability contexts.
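For context, the classical baseline both abstracts refer to is the McCormick envelope of a bilinear term w = x·y over a box: the convex envelope is the pointwise maximum of two affine under-estimators, and the concave envelope the pointwise minimum of two affine over-estimators. The sketch below only illustrates this standard construction; the ray-concave envelopes of the paper are a different, new family.

```python
def mccormick_under(x, y, xL, xU, yL, yU):
    # Convex envelope of x*y over [xL, xU] x [yL, yU]:
    # max of the two classical McCormick under-estimators.
    return max(xL * y + yL * x - xL * yL,
               xU * y + yU * x - xU * yU)

def mccormick_over(x, y, xL, xU, yL, yU):
    # Concave envelope: min of the two McCormick over-estimators.
    return min(xU * y + yL * x - xU * yL,
               xL * y + yU * x - xL * yU)

# Sanity check on a grid: the envelopes bracket x*y over the whole box.
xL, xU, yL, yU = 0.0, 2.0, -1.0, 3.0
for i in range(5):
    for j in range(5):
        x = xL + (xU - xL) * i / 4
        y = yL + (yU - yL) * j / 4
        assert mccormick_under(x, y, xL, xU, yL, yU) <= x * y + 1e-12
        assert x * y <= mccormick_over(x, y, xL, xU, yL, yU) + 1e-12
```

Both estimators are tight at the corners of the box, which is why the relaxation quality degrades as the box grows—and why refinement via spatial branch-and-bound (as in the abstract above) tightens it.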

Benders decomposition is one of the most widely applied methods for solving two-stage stochastic problems (TSSPs) with a large number of scenarios. The main idea behind Benders decomposition is to solve a large problem by replacing the values of the second-stage subproblems with individual variables, and progressively forcing those variables to reach the optimal values of the subproblems by dynamically inserting additional valid constraints, known as Benders cuts. Most traditional implementations add a cut for each scenario (multi-cut) or a single cut that includes all scenarios. In this paper we present a novel Benders adaptive-cuts method, in which the Benders cuts are aggregated according to a partition of the scenarios that is dynamically refined using the LP-dual information of the subproblems. This scenario aggregation/disaggregation is based on the Generalized Adaptive Partitioning Method (GAPM), which has been successfully applied to TSSPs. We formalize this hybridization of Benders decomposition and the GAPM by providing sufficient conditions under which an optimal solution of the deterministic equivalent can be obtained in a finite number of iterations. Our new method can be interpreted as a compromise between the Benders single-cut and multi-cut methods, drawing on the advantages of both: it renders the initial iterations faster (as in single-cut Benders) while ensuring faster overall convergence (as in multi-cut Benders). Computational experiments on two stochastic network flow problems validate these claims, showing that the new method outperforms other implementations of the Benders method, as well as other standard methods for solving TSSPs, particularly when the number of scenarios is very large.
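The cut-aggregation idea can be sketched in a few lines. Per-scenario optimality cuts of the form theta_s >= a_s·x + b_s (coefficients derived from the LP duals of the subproblems) can be kept separate (multi-cut), merged into one probability-weighted cut (single-cut), or merged per cell of a scenario partition, splitting a cell when its scenarios' dual coefficients disagree—the refinement signal of GAPM-style methods. All names below are illustrative, not the paper's implementation.

```python
def aggregate(cuts, probs, cell):
    """Probability-weighted aggregation of (a_s, b_s) cuts over one cell."""
    w = sum(probs[s] for s in cell)
    a = sum(probs[s] * cuts[s][0] for s in cell) / w
    b = sum(probs[s] * cuts[s][1] for s in cell) / w
    return a, b

def refine(cuts, cell, tol=1e-6):
    """Split a cell by (rounded) dual slope: scenarios with equal duals
    stay together, so the aggregated cut loses nothing for them."""
    groups = {}
    for s in cell:
        key = round(cuts[s][0] / tol)
        groups.setdefault(key, []).append(s)
    return list(groups.values())

# Four equiprobable scenarios; scenarios 0,1 share duals, as do 2,3.
cuts = {0: (-1.0, 4.0), 1: (-1.0, 4.0), 2: (-3.0, 9.0), 3: (-3.0, 9.0)}
probs = {s: 0.25 for s in cuts}

single = aggregate(cuts, probs, [0, 1, 2, 3])   # one coarse cut: (-2.0, 6.5)
cells = refine(cuts, [0, 1, 2, 3])              # [[0, 1], [2, 3]]
adaptive = [aggregate(cuts, probs, c) for c in cells]
```

Here two aggregated cuts recover exactly the information of the four multi-cuts, illustrating why the adaptive scheme can match multi-cut strength with far fewer constraints.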

Resource-constrained project scheduling problems (RCPSP) are at the heart of many production planning problems across a plethora of applications. Although the problem has been studied since the early 1960s, most developments and test instances are limited to problems with fewer than 300 jobs, far from the thousands present in real-life scenarios. Furthermore, the RCPSP with discounted cost (RCPSP-DC) is critical in many of these settings, which require decision makers to evaluate the net present value of the finished tasks, but the nonlinear cost function makes the problem harder to solve and analyze.

In this work, we propose a novel approximation algorithm for the RCPSP-DC. Our main contribution is that, through the use of geometrically increasing time intervals, we construct an approximation algorithm that keeps track of precedence constraints, usage of multiple resources, and time requirements. To our knowledge, this is the first approximation algorithm for this problem. Finally, through experimental analysis on real instances, we report the empirical performance of our approach, showing that our technique allows us to solve sizeable underground mining problems within reasonable time frames and with gaps much smaller than the theoretically computed ones.
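The core geometric-interval device can be illustrated concisely: with endpoints growing as (1+eps)^k, rounding a completion time up to the next endpoint inflates it by at most a (1+eps) factor (for times at least 1), so the discounted value v·beta^t of a finished task is only mildly perturbed. This is a hedged sketch of the general device, not the paper's algorithm; names (`grid`, `round_up`) are illustrative.

```python
def grid(horizon, eps):
    """Endpoints 0, 1, (1+eps), (1+eps)^2, ... covering [0, horizon]."""
    points, t = [0.0, 1.0], 1.0
    while t < horizon:
        t *= 1.0 + eps
        points.append(t)
    return points

def round_up(t, points):
    """Round a completion time up to the next grid endpoint."""
    return min(p for p in points if p >= t)

pts = grid(horizon=100.0, eps=0.25)
t = 7.0
t_rounded = round_up(t, pts)
assert t <= t_rounded <= (1.0 + 0.25) * t  # multiplicative rounding error

# Effect on the net present value of a task worth 10 finishing at time t:
beta = 0.95
npv_exact = 10.0 * beta ** t
npv_rounded = 10.0 * beta ** t_rounded  # off by at most a beta**(eps * t) factor
```

The grid has only O(log(horizon)/eps) endpoints, which is what makes time-indexed formulations over it tractable at the scale of thousands of jobs.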

Natural hazards cause major power outages as a result of spatially correlated failures of network components. However, these correlations between failures of individual elements are often ignored in probabilistic planning models for optimal network design. We use different types of planning models to demonstrate the impact of ignoring correlations between component failures and the value of flexible transmission assets when power systems are exposed to natural hazards. We consider a network that is hypothetically located in northern Chile, a region that is prone to earthquakes. Using a simulation model, we compute the probabilities of spatially correlated outages of transmission lines and substations based on information about historical earthquakes in the area. We determine optimal network designs using a deterministic reliability criterion and probabilistic models that either consider or disregard correlations among component failures. Our results show that the probability of a simultaneous failure of two transmission elements exposed to an earthquake can be up to 15 times higher than the probability of simultaneous failure of the same two elements when we only consider independent component failures. Disregarding correlations of component failures changes the optimal network design significantly and increases the expected levels of curtailed demand in scenarios with spatially correlated failures. We also find that, in some cases, it becomes optimal to invest in HVDC instead of AC transmission lines because the former gives the system operator the flexibility to control power flows in meshed transmission networks. This feature is particularly valuable to systems exposed to natural hazards, where network topologies in post-contingency operating conditions might differ significantly from pre-contingency ones.
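A toy common-shock model makes the underestimation mechanism concrete: if two components share a hazard (an earthquake) and fail independently only *given* the shock state, their joint failure probability can vastly exceed the product of their marginals. This is an illustrative sketch with made-up numbers, not the paper's simulation model.

```python
def joint_vs_independent(q, f, g):
    """q: P(earthquake); f: each component's failure probability given a
    quake; g: background failure probability otherwise. Components fail
    independently conditional on the shock state."""
    marginal = q * f + (1.0 - q) * g
    joint = q * f * f + (1.0 - q) * g * g   # correlated (common-shock) model
    independent = marginal * marginal       # independence assumption
    return joint, independent

joint, indep = joint_vs_independent(q=0.01, f=0.5, g=0.001)
ratio = joint / indep  # the correlated joint probability is far larger
```

With these illustrative values the ratio is roughly 70x, showing how an independence assumption can understate simultaneous-outage risk by more than an order of magnitude, consistent in spirit with the "up to 15 times" finding above.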

Dataset available at https://github.com/borelian/j.ejor.2017.09.023


Transportation Research Part C: Emerging Technologies, 71: 86–107, 2016

Publication year: 2016

We consider the problem of separating maximally violated inequalities for the precedence constrained knapsack problem. Though we consider maximally violated constraints in a very general way, special emphasis is placed on induced cover inequalities and induced clique inequalities. Our contributions include a new partial characterization of maximally violated inequalities, a new safe shrinking technique, and new insights on strengthening and lifting. This work builds on the work of Boyd (1993), Park and Park (1997), van de Leensel et al. (1999) and Boland et al. (2011). Computational experiments show that our new techniques and insights can be used to significantly improve the performance of cutting plane algorithms for this problem.
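For readers unfamiliar with cover separation, here is the plain (non-induced) baseline: given a knapsack row sum(a_j x_j) <= b and a fractional point x*, a cover C (a set whose total weight exceeds b) yields the inequality sum_{j in C} x_j <= |C| - 1, which is violated iff sum_{j in C} (1 - x*_j) < 1. The greedy heuristic below only illustrates this baseline; the induced variants studied in the paper additionally exploit the precedence digraph.

```python
def greedy_cover(a, b, xstar):
    """Heuristic separation of a violated cover inequality: add items in
    ascending 'cost' (1 - x*_j), i.e. most-fractional first, until the
    knapsack capacity is exceeded."""
    order = sorted(range(len(a)), key=lambda j: 1.0 - xstar[j])
    cover, weight = [], 0.0
    for j in order:
        cover.append(j)
        weight += a[j]
        if weight > b:
            cost = sum(1.0 - xstar[j] for j in cover)
            return cover, cost < 1.0  # violated iff total cost below 1
    return None, False  # item weights cannot exceed b: no cover exists

a = [4.0, 3.0, 3.0, 2.0]
b = 8.0
xstar = [0.9, 0.9, 0.8, 0.1]
cover, violated = greedy_cover(a, b, xstar)
# cover = [0, 1, 2]: x0 + x1 + x2 <= 2 is violated since 0.9+0.9+0.8 = 2.6
```

Exact separation instead solves a small knapsack-type subproblem minimizing the total cost over all covers; the heuristic trades optimality for speed inside a cutting plane loop.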