Main Content

Mixed-Integer Linear Programming (MILP) Algorithms

Mixed-Integer Linear Programming Definition

A mixed-integer linear program (MILP) is a problem with

  • Linear objective function, fTx, where f is a column vector of constants, and x is the column vector of unknowns

  • Bounds and linear constraints, but no nonlinear constraints (for definitions, see Write Constraints)

  • Restrictions on some components of x to have integer values

In mathematical terms, given vectors f, lb, and ub, matrices A and Aeq, corresponding vectors b and beq, and a set of indices intcon, find a vector x to solve

min fᵀx over x, subject to:

  x(intcon) are integers
  A·x ≤ b
  Aeq·x = beq
  lb ≤ x ≤ ub.
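For illustration, the following is a minimal MATLAB sketch of this form. The data values (f, intcon, A, b, lb, and ub) are invented for this example and are not part of the documentation.

% Minimize f'*x subject to A*x <= b and lb <= x <= ub, with x(1) integer
f = [-3; -2];          % objective coefficients (column vector)
intcon = 1;            % indices of the integer-constrained components of x
A = [1 1; 4 2];        % linear inequality constraints A*x <= b
b = [9; 25];
lb = [0; 0];           % lower bounds
ub = [10; 10];         % upper bounds
x = intlinprog(f,intcon,A,b,[],[],lb,ub)   % no equality constraints Aeq, beq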

Legacy intlinprog Algorithm

Algorithm Overview

The 'legacy' intlinprog algorithm uses this basic strategy to solve mixed-integer linear programs. intlinprog can solve the problem in any of the stages. If it solves the problem in a stage, intlinprog does not execute the later stages.

  1. Reduce the problem size using Linear Program Preprocessing.

  2. Solve an initial relaxed (noninteger) problem using Linear Programming.

  3. Perform Mixed-Integer Program Preprocessing to tighten the LP relaxation of the mixed-integer problem.

  4. Try Cut Generation to further tighten the LP relaxation of the mixed-integer problem.

  5. Try to find integer-feasible solutions using heuristics.

  6. Use a Branch and Bound algorithm to search systematically for the optimal solution. This algorithm solves LP relaxations with restricted ranges of possible values of the integer variables. It attempts to generate a sequence of updated bounds on the optimal objective function value.

Linear Program Preprocessing

According to the Mixed-Integer Linear Programming Definition, there are matrices A and Aeq and corresponding vectors b and beq that encode a set of linear inequalities and linear equalities

A·x ≤ b, Aeq·x = beq.

These linear constraints restrict the solution x.

Usually, it is possible to reduce the number of variables in the problem (the number of components of x) and to reduce the number of linear constraints. While performing these reductions can take time for the solver, they usually lower the overall time to solution and can make larger problems solvable. These reductions can also make the solution process more numerically stable. Furthermore, preprocessing can sometimes detect an infeasible problem.

Preprocessing steps aim to eliminate redundant variables and constraints, improve the scaling of the model and sparsity of the constraint matrix, strengthen the bounds on variables, and detect the primal and dual infeasibility of the model.

For details, see Andersen and Andersen [3] and Mészáros and Suhl [10].

Linear Programming

The initial relaxed problem is the linear programming problem with the same objective and constraints as Mixed-Integer Linear Programming Definition, but no integer constraints. Call x_LP the solution to the relaxed problem, and x the solution to the original problem with integer constraints. Clearly,

fᵀx_LP ≤ fᵀx,

because x_LP minimizes the same function but with fewer restrictions.

This initial relaxed LP (root node LP) and all generated LP relaxations during the branch-and-bound algorithm are solved using linear programming solution techniques.
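As a hedged illustration of this bound, the following sketch reuses the invented example data from the definition section and solves the relaxation with linprog by dropping the integer restrictions.

% Relaxed (root node) LP: same data, integer restrictions dropped
xLP = linprog(f,A,b,[],[],lb,ub);
% Original MILP
xMILP = intlinprog(f,intcon,A,b,[],[],lb,ub);
% The relaxation provides a lower bound: f'*xLP <= f'*xMILP
fprintf("Relaxed LP objective: %g   MILP objective: %g\n", f'*xLP, f'*xMILP);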

Mixed-Integer Program Preprocessing

During mixed-integer program preprocessing, intlinprog analyzes the linear inequalities A*x ≤ b along with integrality restrictions to determine whether:

  • The problem is infeasible.

  • Some bounds can be tightened, and therefore some variables can be fixed.

  • Some inequalities are redundant, so can be ignored or removed.

  • Some inequalities can be strengthened.

The IntegerPreprocess option lets you choose whether intlinprog takes several steps, takes all of them, or takes almost none of them. If you include an x0 argument, intlinprog uses that value in preprocessing.

The main goal of mixed-integer program preprocessing is to simplify ensuing branch-and-bound calculations. Preprocessing involves quickly preexamining and eliminating some of the futile subproblem candidates that branch-and-bound would otherwise analyze.
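For example, a hedged sketch of selecting the preprocessing level through the IntegerPreprocess option (reusing the invented example data from earlier sketches):

% Choose how much integer preprocessing intlinprog performs
opts = optimoptions("intlinprog",IntegerPreprocess="advanced");  % other values include "basic" and "none"
x = intlinprog(f,intcon,A,b,[],[],lb,ub,opts);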

For details about integer preprocessing, see Achterberg et al. [1].

Cut Generation

Cuts are additional linear inequality constraints that intlinprog adds to the problem. These inequalities attempt to restrict the feasible region of the LP relaxations so that their solutions are closer to integers. You control the type of cuts that intlinprog uses with the CutGeneration option.

'basic' cuts include:

  • Mixed-integer rounding cuts

  • Gomory cuts

  • Clique cuts

  • Cover cuts

  • Flow cover cuts

Furthermore, if the problem is purely integer (all variables are integer-valued), then intlinprog also uses the following cuts:

  • Strong Chvatal-Gomory cuts

  • Zero-half cuts

'intermediate' cuts include all 'basic' cuts, plus:

  • Simple lift-and-project cuts

  • Simple pivot-and-reduce cuts

  • Reduce-and-split cuts

'advanced' cuts include all 'intermediate' cuts except reduce-and-split cuts, plus:

  • Strong Chvatal-Gomory cuts

  • Zero-half cuts

For purely integer problems, 'intermediate' uses the most cut types, because it uses reduce-and-split cuts, while 'advanced' does not.

Another option, CutMaxIterations, specifies an upper bound on the number of times intlinprog iterates to generate cuts.
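For example, a hedged sketch of setting these two options (reusing the invented example data from earlier sketches; the numeric value is illustrative):

% Select the cut families and bound the number of cut-generation passes
opts = optimoptions("intlinprog", ...
    CutGeneration="intermediate", ...   % "basic", "intermediate", "advanced", or "none"
    CutMaxIterations=25);               % upper bound on cut-generation iterations
x = intlinprog(f,intcon,A,b,[],[],lb,ub,opts);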

For details about cut generation algorithms (also called cutting plane methods), see Cornuéjols [6] and, for clique cuts, Atamtürk, Nemhauser, and Savelsbergh [4].

Heuristics for Finding Feasible Solutions

To get an upper bound on the objective function, the branch-and-bound procedure must find feasible points. A solution to an LP relaxation during branch-and-bound can be integer feasible, which can provide an improved upper bound to the original MILP. Certain techniques find feasible points faster before or during branch-and-bound. intlinprog uses these techniques at the root node and during some branch-and-bound iterations. These techniques are heuristic, meaning they are algorithms that can succeed but can also fail.

Heuristics can be start heuristics, which help the solver find an initial or new integer-feasible solution. Or heuristics can be improvement heuristics, which start at an integer-feasible point and attempt to find a better integer-feasible point, meaning one with lower objective function value. The intlinprog improvement heuristics are 'rins', 'rss', 1-opt, 2-opt, and guided diving.

Set the intlinprog heuristics using the 'Heuristics' option. The options are:

'basic' (default)

The solver runs rounding heuristics twice with different parameters, runs diving heuristics twice with different parameters, then runs 'rss'. The solver does not run later heuristics when earlier heuristics lead to a sufficiently good integer-feasible solution.

'intermediate'

The solver runs rounding heuristics twice with different parameters, then runs diving heuristics twice with different parameters. If there is an integer-feasible solution, the solver then runs 'rins' followed by 'rss'. If 'rss' finds a new solution, the solver runs 'rins' again. The solver does not run later heuristics when earlier heuristics lead to a sufficiently good integer-feasible solution.

'advanced'

The solver runs rounding heuristics twice with different parameters, then runs diving heuristics twice with different parameters. If there is an integer-feasible solution, the solver then runs 'rins' followed by 'rss'. If 'rss' finds a new solution, the solver runs 'rins' again. The solver does not run later heuristics when earlier heuristics lead to a sufficiently good integer-feasible solution.

'rins' or the equivalent 'rins-diving'

intlinprog searches the neighborhood of the current, best integer-feasible solution point (if available) to find a new and better solution. See Danna, Rothberg, and Le Pape [8]. When you select 'rins', the solver runs rounding heuristics twice with different parameters, runs diving heuristics twice with different parameters, then runs 'rins'.

'rss' or the equivalent 'rss-diving'

intlinprog applies a hybrid procedure combining ideas from 'rins' and local branching to search for integer-feasible solutions. When you select 'rss', the solver runs rounding heuristics twice with different parameters, runs diving heuristics twice with different parameters, then runs 'rss'. The solver does not run later heuristics when earlier heuristics lead to a sufficiently good integer-feasible solution. These settings perform the same heuristics as 'basic'.

'round'

intlinprog takes the LP solution to the relaxed problem at a node, and rounds the integer components in a way that attempts to maintain feasibility. When you select 'round', the solver, at the root node, runs rounding heuristics twice with different parameters, then runs diving heuristics twice with different parameters. Thereafter, the solver runs only rounding heuristics at some branch-and-bound nodes.

'round-diving'

The solver works in a similar way to 'round', but also runs diving heuristics (in addition to rounding heuristics) at some branch-and-bound nodes, not just the root node.

'diving'

intlinprog uses heuristics that are similar to branch-and-bound steps, but follow just one branch of the tree down, without creating the other branches. This single branch leads to a fast “dive” down the tree fragment, thus the name “diving.” Currently, intlinprog uses six diving heuristics in this order:

  • Vector length diving

  • Coefficient diving

  • Fractional diving

  • Pseudo cost diving

  • Line search diving

  • Guided diving (applies when the solver already found at least one integer-feasible point)

Diving heuristics generally select one variable that should be integer-valued, for which the current solution is fractional. The heuristics then introduce a bound that forces the variable to be integer-valued, and solve the associated relaxed LP again. The method of choosing the variable to bound is the main difference between the diving heuristics. See Berthold [5], Section 3.1.

'none'

intlinprog does not search for a feasible point. The solver simply takes any feasible point it encounters in its branch-and-bound search.

The main difference between 'intermediate' and 'advanced' is that 'advanced' runs heuristics more frequently during branch-and-bound iterations.

In addition to the previous table, the following heuristics run when the Heuristics option is 'basic', 'intermediate', or 'advanced'.

  • ZI round — This heuristic runs whenever an algorithm solves a relaxed LP. The heuristic goes through each fractional integer variable to attempt to shift it to a neighboring integer without affecting the feasibility with respect to other constraints. For details, see Hendel [9].

  • 1-opt — This heuristic runs whenever an algorithm finds a new integer-feasible solution. The heuristic goes through each integer variable to attempt to shift it to a neighboring integer without affecting the feasibility with respect to other constraints, while lowering the objective function value.

  • 2-opt — This heuristic runs whenever an algorithm finds a new integer-feasible solution. 2-opt finds all pairs of integer variables that affect the same constraint, meaning they have nonzero entries in the same row of an A or Aeq constraint matrix. For each pair, 2-opt takes an integer-feasible solution and moves the values of the variable pairs up or down using all four possible moves (up-up, up-down, down-up, and down-down), looking for a feasible neighboring solution that has a better objective function value. The algorithm tests each integer variable pair by calculating the largest size (same magnitude) of shifts for each variable in the pair that satisfies the constraints and also improves the objective function value.

At the beginning of the heuristics phase, intlinprog runs the trivial heuristic unless Heuristics is 'none' or you provide an initial integer-feasible point in the x0 argument. The trivial heuristic checks the following points for feasibility:

  • All zeros

  • Upper bound

  • Lower bound (if nonzero)

  • "Lock" point

The "lock" point is defined only for problems with finite upper and lower bounds for all variables. The "lock" point for each variable is its upper or lower bound, chosen as follows. For each variable j, count the number of corresponding positive entries in the linear constraint matrix A(:,j) and subtract the number corresponding negative entries. If the result is positive, use the lower bound for that variable, lb(j). Otherwise, use the upper bound for that variable, ub(j). The "lock" point attempts to satisfy the largest number of linear inequality constraints for each variable, but is not necessarily feasible.

After each heuristic completes with a feasible solution, intlinprog calls output functions and plot functions. See intlinprog Output Function and Plot Function Syntax.

If you include an x0 argument, intlinprog uses that value in the 'rins' and guided diving heuristics until it finds a better integer-feasible point. So when you provide x0, you can obtain good results by setting the 'Heuristics' option to 'rins-diving' or another setting that uses 'rins'.
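For example, a hedged sketch of choosing a heuristics setting and supplying a start point (reusing the invented example data from earlier sketches; the x0 value is only a placeholder, and a known integer-feasible point works best):

x0 = zeros(size(f));                                   % placeholder start point
opts = optimoptions("intlinprog",Heuristics="rins-diving");
x = intlinprog(f,intcon,A,b,[],[],lb,ub,x0,opts);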

Branch and Bound

The branch-and-bound method constructs a sequence of subproblems that attempt to converge to a solution of the MILP. The subproblems give a sequence of upper and lower bounds on the solution fᵀx. The first upper bound is any feasible solution, and the first lower bound is the solution to the relaxed problem. For a discussion of the upper bound, see Heuristics for Finding Feasible Solutions.

As explained in Linear Programming, any solution to the linear programming relaxed problem has a lower objective function value than the solution to the MILP. Also, any feasible point x_feas satisfies

fᵀx_feas ≥ fᵀx,

because fᵀx is the minimum among all feasible points.

In this context, a node is an LP with the same objective function, bounds, and linear constraints as the original problem, but without integer constraints, and with particular changes to the linear constraints or bounds. The root node is the original problem with no integer constraints and no changes to the linear constraints or bounds, meaning the root node is the initial relaxed LP.

From the starting bounds, the branch-and-bound method constructs new subproblems by branching from the root node. The branching step is taken heuristically, according to one of several rules. Each rule is based on the idea of splitting a problem by restricting one variable to be less than or equal to an integer J, or greater than or equal to J+1. These two subproblems arise when an entry in x_LP, corresponding to an integer specified in intcon, is not an integer. Here, x_LP is the solution to a relaxed problem. Take J as the floor of the variable (rounded down), and J+1 as the ceiling (rounded up). The resulting two problems have optimal objective values that are greater than or equal to fᵀx_LP, because they have more restrictions. Therefore, this procedure potentially raises the lower bound.
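As a hedged sketch of a single branching step, the following reuses xLP and the invented example data from earlier sketches. It is illustrative only, not the solver's internal code, and assumes at least one integer-constrained component of xLP is fractional.

% Pick a fractional integer variable and form the two child problems
idx = find(abs(xLP(intcon) - round(xLP(intcon))) > 1e-6, 1);
i = intcon(idx);                     % variable to branch on
J = floor(xLP(i));
ubDown = ub;  ubDown(i) = J;         % "down" child: x(i) <= J
lbUp   = lb;  lbUp(i)   = J + 1;     % "up" child:   x(i) >= J + 1
% Each child is the same LP relaxation solved with tightened bounds
xDown = linprog(f,A,b,[],[],lb,ubDown);
xUp   = linprog(f,A,b,[],[],lbUp,ub);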

The performance of the branch-and-bound method depends on the rule for choosing which variable to split (the branching rule). The algorithm uses these rules, which you can set in the BranchRule option:

  • 'maxpscost' — Choose the fractional variable with maximal pseudocost.

  • 'strongpscost' — Similar to 'maxpscost', but instead of the pseudocost being initialized to 1 for each variable, the solver attempts to branch on a variable only after the pseudocost has a more reliable estimate. To obtain a more reliable estimate, the solver does the following (see Achterberg, Koch, and Martin [2]).

    • Order all potential branching variables (those that are currently fractional but should be integer) by their current pseudocost-based scores.

    • Run the two relaxed linear programs based on the current branching variable, starting from the variable with the highest score (if the variable has not yet been used for a branching calculation). The solver uses these two solutions to update the pseudocosts for the current branching variable. The solver can halt this process early to save time in choosing the branch.

    • Continue choosing variables in the list until the current highest pseudocost-based score does not change for k consecutive variables, where k is an internally chosen value, usually between 5 and 10.

    • Branch on the variable with the highest pseudocost-based score. The solver might have already computed the relaxed linear programs based on this variable during an earlier pseudocost estimation procedure.

    Because of the extra linear program solutions, each iteration of 'strongpscost' branching takes longer than the default 'maxpscost'. However, the number of branch-and-bound iterations typically decreases, so the 'strongpscost' method can save time overall.

  • 'reliability' — Similar to 'strongpscost', but instead of running the relaxed linear programs only for uninitialized pseudocost branches, 'reliability' runs the programs up to k2 times for each variable, where k2 is a small integer such as 4 or 8. Therefore, 'reliability' has even slower branching, but potentially fewer branch-and-bound iterations, compared to 'strongpscost'.

  • 'mostfractional' — Choose the variable with fractional part closest to 1/2.

  • 'maxfun' — Choose the variable with maximal corresponding absolute value in the objective vector f.

After the algorithm branches, there are two new nodes to explore. The algorithm chooses which node to explore among all that are available using one of these rules:

  • 'minobj' — Choose the node that has the lowest objective function value.

  • 'mininfeas' — Choose the node with the minimal sum of integer infeasibilities. This means for every integer-infeasible component x(i) in the node, add up the smaller of p_i⁻ and p_i⁺, where

    p_i⁻ = x(i) – ⌊x(i)⌋
    p_i⁺ = 1 – p_i⁻.

  • 'simplebestproj' — Choose the node with the best projection.

intlinprog skips the analysis of some subproblems by considering information from the original problem such as the objective function’s greatest common divisor (GCD).

The branch-and-bound procedure continues, systematically generating subproblems to analyze and discarding the ones that won’t improve an upper or lower bound on the objective, until one of these stopping criteria is met:

  • The algorithm exceeds the MaxTime option.

  • The difference between the lower and upper bounds on the objective function is less than the AbsoluteGapTolerance or RelativeGapTolerance tolerances.

  • The number of explored nodes exceeds the MaxNodes option.

  • The number of integer feasible points exceeds the MaxFeasiblePoints option.

For details about the branch-and-bound procedure, see Nemhauser and Wolsey [11] and Wolsey [13].
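For example, a hedged sketch of setting the branching rule, the node-selection rule, and several stopping options discussed in this section (reusing the invented example data from earlier sketches; the node-selection rule is assumed to be exposed through the NodeSelection option, and the numeric values are illustrative):

opts = optimoptions("intlinprog", ...
    BranchRule="strongpscost", ...        % or "maxpscost" (default), "reliability", "mostfractional", "maxfun"
    NodeSelection="minobj", ...           % or "mininfeas", "simplebestproj"
    RelativeGapTolerance=1e-3, ...
    MaxNodes=1e5, ...
    MaxTime=600);
x = intlinprog(f,intcon,A,b,[],[],lb,ub,opts);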

HiGHS MILP Algorithm

Overview of HiGHS

The intlinprog "highs" algorithm is based on the HiGHS open-source software. intlinprog converts MATLAB®-formatted inputs and options into equivalent HiGHS arguments, and converts the returned solution into standard MATLAB format as well.

Algorithm Outline

The "highs" algorithm performs these steps.

  1. Presolve

  2. Evaluate Root Node

  3. Get a branch/bound node (if none then stop)

  4. Repeat until a stopping condition:

    1. Search until a stopping condition:

    2. Propagate domain

    3. Prune nodes, update bounds, exit if infeasibility detected or ObjectiveCutOff option value is reached

    4. If restart conditions are met, return to step 2

    5. Install the next node:

      1. Choose and evaluate a node

      2. If evaluation prunes this node, return to step 5

      3. Generate cuts for the node

      4. If domain is infeasible, cut off the node, and bring open nodes into the node queue

      5. Update the basis

      6. Go to step 4

Presolve

Usually, it is possible to reduce the number of variables in the problem (the number of components of x) and to reduce the number of linear constraints. While performing these reductions can take time for the solver, they usually lower the overall time to solution and can make larger problems solvable. These reductions can also make the solution process more numerically stable. Furthermore, presolve can sometimes detect an infeasible problem.

Presolve steps aim to eliminate redundant variables and constraints, improve the scaling of the model and sparsity of the constraint matrix, strengthen the bounds on variables, and detect the primal and dual infeasibility of the model. For background, see Andersen and Andersen [3] and Mészáros and Suhl [10].

During mixed-integer program preprocessing, intlinprog analyzes the linear inequalities A*x ≤ b along with integrality restrictions to determine whether:

  • The problem is infeasible.

  • Some bounds can be tightened.

  • Some inequalities are redundant, so can be ignored or removed.

  • Some inequalities can be strengthened.

  • Some integer variables can be fixed.

For background about integer preprocessing, see Achterberg et al. [1].

Evaluate Root Node

To evaluate the root node, the algorithm performs the following steps.

  1. Detect symmetry and simplify problem.

  2. Evaluate root LP.

  3. Generate and add LP cuts (see Cut Generation).

  4. Apply randomized rounding.

  5. Generate and add more LP cuts.

  6. Perform cut generation and heuristics in a loop.

  7. Check for restart conditions, restart at step 2 if warranted.

After the root node evaluation completes, the algorithm proceeds with Branch-and-Bound.

Cut Generation

Cuts are additional linear inequality constraints that intlinprog adds to the problem. These inequalities attempt to restrict the feasible region of the LP relaxations so that their solutions are closer to integers. For background about cut generation algorithms (also called cutting plane methods), see Cornuéjols [6] and Atamtürk, Nemhauser, and Savelsbergh [4].

Plunge/Diving

To find integer-feasible points, intlinprog uses heuristics that are similar to branch-and-bound steps, but follow just one branch of the tree down, without creating the other branches. This single branch leads to a fast “dive” down the tree fragment, thus the name “diving.”

Diving heuristics generally select one variable that should be integer-valued, for which the current solution is fractional. The heuristics then introduce a bound that forces the variable to be integer-valued, and solve the associated relaxed LP again. The method of choosing the variable to bound is the main difference between the diving heuristics. See Berthold [5], Section 3.1.

Randomized Rounding, RINS, and RENS

To find new integer-feasible points, intlinprog searches the neighborhood of the current, best integer-feasible solution point (if available) to find a new and better solution. See Danna, Rothberg, and Le Pape [8]. Similarly, to find new integer-feasible points, intlinprog takes the LP solution to the relaxed problem at a node, and rounds the integer components in a way that attempts to maintain feasibility. By taking randomized rounding steps, intlinprog can sometimes find a new feasible point. RENS, which stands for Relaxation Enforced Neighborhood Search, is another search technique for finding integer-feasible points. See Berthold [5].

Branch-and-Bound

The branch-and-bound method constructs a sequence of subproblems that attempt to converge to a solution of the MILP. The subproblems give a sequence of upper and lower bounds on the solution fᵀx. These bounds are called the primal and dual bounds. For a minimization problem, the first upper bound (primal) is any feasible solution, and the first lower bound (dual) is the solution to the relaxed problem. For a maximization problem, the primal bound is the lower bound and the dual bound is the upper bound. As explained in Linear Programming, for a minimization problem, any solution to the linear programming relaxed problem has a lower objective function value than the solution to the MILP. Also, any feasible point x_feas satisfies

fᵀx_feas ≥ fᵀx,    (1)

because fᵀx is the minimum among all feasible points.

In this context, a node is an LP with the same objective function, bounds, and linear constraints as the original problem, but without integer constraints, and with particular changes to the linear constraints or bounds. The root node is the original problem with no integer constraints and no changes to the linear constraints or bounds, meaning the root node is the initial relaxed LP.

From the starting bounds, the branch-and-bound method constructs new subproblems by branching from the root node. The branching step is taken heuristically, according to one of several rules. Each rule is based on the idea of splitting a problem by restricting one variable to be less than or equal to an integer J, or greater than or equal to J+1. These two subproblems arise when an entry in x_LP, corresponding to an integer specified in intcon, is not an integer. Here, x_LP is the solution to a relaxed problem. Take J as the floor of the variable (rounded down), and J+1 as the ceiling (rounded up). The resulting two problems have optimal objective values that are greater than or equal to fᵀx_LP, because they have more restrictions. Therefore, for a minimization problem, this procedure potentially raises the lower bound.

After the algorithm branches, there are two new nodes to explore. intlinprog skips the analysis of some subproblems by comparing their objective function values with the current global bounds.

The branch-and-bound procedure continues, systematically generating subproblems to analyze and discarding the ones that will not improve an upper or lower bound on the objective, until one of these stopping criteria is met:

  • The problem is proved to be infeasible.

  • The objective function value reaches the ObjectiveCutOff limit.

  • The algorithm exceeds the MaxTime option.

  • The difference between the lower and upper bounds on the objective function is less than the AbsoluteGapTolerance or RelativeGapTolerance tolerances.

  • The number of explored nodes exceeds the MaxNodes option.

For background about the branch-and-bound procedure, see Nemhauser and Wolsey [11] and Wolsey [13].
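For example, a hedged sketch of setting the stopping options listed above (reusing the invented example data from earlier sketches; the Algorithm option is an assumption, and the numeric values are illustrative):

opts = optimoptions("intlinprog", ...
    Algorithm="highs", ...               % assumed option for selecting the HiGHS algorithm
    MaxTime=3600, ...                    % stop after one hour
    RelativeGapTolerance=1e-4, ...
    MaxNodes=1e6, ...
    ObjectiveCutOff=100);                % cut off branches at this objective value
x = intlinprog(f,intcon,A,b,[],[],lb,ub,opts);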

Iterative Display

When you select iterative display by setting the Display option to the default "iter", the solver displays some of its steps. HiGHS iterative display is more extensive and complicated than the iterative display of other solvers. Furthermore, the HiGHS algorithm can restart its branch-and-bound search, leading to an iterative display that also restarts.

To select iterative display:

options = optimoptions("intlinprog",Display="iter");
[x,fval,exitflag,output] = intlinprog(f,intcon,A,b,Aeq,beq,...
    lb,ub,options)

Preamble.  The iterative display begins with the results of presolve. The presolve algorithm reduces the complexity of the original problem by identifying and removing redundant rows and columns in the linear constraint matrices and performing related simplifications of the problem. For example:

Presolving model
18018 rows, 26027 cols, 248579 nonzeros
15092 rows, 24343 cols, 217277 nonzeros
Objective function is integral with scale 1

Root Node Evaluation.  The root node is the linear programming solution of the problem, not considering any integer constraints. For a minimization problem, the objective function value of the root node is a lower bound on the objective function value of the solution to the problem including integer constraints. The upper bound (if any) comes from a point that is feasible with respect to all constraints; if no feasible point has been found yet, the upper bound is Inf.

The iterative display shows the size of the problem after presolve.

Solving MIP model with:
   15092 rows
   24343 cols (24343 binary, 0 integer, 0 implied int., 0 continuous)
   217277 nonzeros

  • binary is the number of binary variables.

  • integer is the number of integer variables.

  • implied int is the number of variables that are implied to be integer. For example, if x(1) is integer and x(1) + x(2) = 5, then x(2) is implied to be integer.

  • continuous is the number of continuous variables.

The total number of variables is the number of columns in the model.

Dynamic Constraint Creation.  The software starts by creating "dynamic constraints," which have three headers in the iterative display:

  • Cuts — Number of active cuts

  • InLp — Number of non-model rows in the LP matrix

  • Confl. — Number of conflicts

The software can extend these constraints by restarting dynamic constraint creation from a new state, which can produce additional constraints.

In the last step before beginning the branch-and-bound process, the software reports the results of symmetry detection and the number of generators and orbitopes found. For example, this is the portion of the iterative display that appears in the preamble and dynamic constraint creation phase:

        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work      
     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
 
         0       0         0   0.00%   0               inf                  inf        0      0      0         0     1.5s
         0       0         0   0.00%   0               inf                  inf        0      0      5      4934     3.6s
…
         0       0         0   0.00%   0               inf                  inf     4630    553    289    140296   159.6s
 
0.0% inactive integer columns, restarting
Model after restart has 15090 rows, 24338 cols (24338 bin., 0 int., 0 impl., 0 cont.), and 217075 nonzeros
 
         0       0         0   0.00%   0               inf                  inf      550      0      0    140296   160.5s
…
         0       0         0   0.00%   0               inf                  inf     5602    524    260    277323   318.0s
 
6.2% inactive integer columns, restarting
Model after restart has 14185 rows, 22816 cols (22816 bin., 0 int., 0 impl., 0 cont.), and 200423 nonzeros
 
         0       0         0   0.00%   0               inf                  inf      524      0      0    277323   318.6s
…
         0       0         0   0.00%   0               inf                  inf     4683    535    525    408393   505.8s
 
Symmetry detection completed in 3.1s
Found 215 generators and 12 full orbitope(s) acting on 730 columns

For information about symmetry detection and orbitopes, see Hojny, Pfetsch, and Schmitt [7] and Pfetsch and Rehn [12]. The software continues to add dynamic constraints as it proceeds with branch-and-bound steps, described next.

Branch-and-Bound.  The branch-and-bound method constructs a sequence of subproblems that attempt to converge to a solution of the MILP. The subproblems give a sequence of upper and lower bounds on the solution fᵀx. For a minimization problem, the first upper bound is any feasible solution, and the first lower bound is the solution to the relaxed problem.

During the branch-and-bound procedure, intlinprog gives the following iterative display.

        Nodes      |    B&B Tree     |            Objective Bounds              |  Dynamic Constraints |       Work      
     Proc. InQueue |  Leaves   Expl. | BestBound       BestSol              Gap |   Cuts   InLp Confl. | LpIters     Time
        72       0         2   1.56%   0               inf                  inf     4695    535    738    667009   686.9s
…
 T     271     107        51   1.56%   0               449              100.00%     6051    271   1767    792786   776.8s
 T     279     107        53   1.56%   0               439              100.00%     6053    271   1794    793241   777.5s
…
 L    1223     538       295   1.98%   0               434              100.00%     6984    243   7628     1689k  1580.6s
…
      1321     539       333   1.98%   0               434              100.00%     7029    243   8628     1898k  1650.6s
 
Restarting search from the root node
Model after restart has 13947 rows, 22426 cols (22426 bin., 0 int., 0 impl., 0 cont.), and 194633 nonzeros
 
      1323       0         0   0.00%   0               434              100.00%      243      0      0     1902k  1653.5s
      1323       0         0   0.00%   0               434              100.00%      243     76     10     1905k  1655.7s
…
      1694     173        52   0.00%   0               434              100.00%     9411    318   2220     3205k  2584.3s
 B    1710     167        55   0.00%   0               433              100.00%     9415    318   2263     3207k  2586.2s
      1726     224        56   0.00%   0               433              100.00%     9999    378   2307     3237k  2608.7s

The leftmost column shows a code indicating how the new feasible point was found for that row of the display:

    • L — While solving a sub-MIP problem during primal heuristics

    • T — During tree search, while evaluating a node

    • B — During branching

    • H — By heuristics

    • P — During startup, before solving MIP

    • C — By central rounding

    • R — By randomized rounding

    • S — While solving an LP

    • F — By Feasibility Pump

    • U — From an unbounded LP

Finish.  The reason that the algorithm stopped and a summary of results appear at the end of the iterative display.

Solving report
  Status            Time limit reached
  Primal bound      14
  Dual bound        0
  Gap               100% (tolerance: 0.01%)
  Solution status   feasible
                    14 (objective)
                    0 (bound viol.)
                    1.7763568394e-15 (int. viol.)
                    0 (row viol.)
  Timing            7200.02 (total)
                    3.08 (presolve)
                    0.00 (postsolve)
  Nodes             5830
  LP iterations     9963775 (total)
                    2657448 (strong br.)
                    324908 (separation)
                    2140011 (heuristics)
 
Solver stopped prematurely. Integer feasible point found.
 
Intlinprog stopped because it exceeded the time limit, options.MaxTime = 7200 . The intcon variables are integer within tolerance,
options.ConstraintTolerance = 1e-06.

The fields in the solving report have these meanings:

  • Status — Reason the iterations stopped

  • Primal bound — Upper bound on objective function value for a minimization problem, lower bound for a maximization problem

  • Dual bound — Lower bound on objective function value for a minimization problem, upper bound for a maximization problem

  • Gap — Relative gap between primal and dual bounds

  • Solution status

    • Status of the solution

    • Objective function value, labeled (objective)

    • Maximum violation of the solution with respect to variable bounds, labeled (bound viol.)

    • Maximum violation of the integer-type variables from integer values, labeled (int. viol.)

    • Maximum violation of the solution with respect to the linear constraints, labeled (row viol.)

  • Timing — Timing of various solution phases in seconds

  • Nodes — Number of nodes explored

  • LP iterations — Number of linear program iterations by phase and in total

References

[1] Achterberg, T., Bixby, R., Gu, Z., Rothberg, E., and Weninger, D. Presolve Reductions in Mixed Integer Programming. INFORMS J. Computing 32(2), 2019, pp. 473–506. Available at https://doi.org/10.1287/ijoc.2018.0857.

[2] Achterberg, T., T. Koch and A. Martin. Branching rules revisited. Operations Research Letters 33, 2005, pp. 42–54. Available at https://opus4.kobv.de/opus4-zib/files/788/ZR-04-13.pdf.

[3] Andersen, E. D., and Andersen, K. D. Presolving in linear programming. Mathematical Programming 71, pp. 221–245, 1995.

[4] Atamtürk, A., G. L. Nemhauser, M. W. P. Savelsbergh. Conflict graphs in solving integer programming problems. European Journal of Operational Research 121, 2000, pp. 40–55.

[5] Berthold, T. Primal Heuristics for Mixed Integer Programs. Technischen Universität Berlin, September 2006. Available at https://opus4.kobv.de/opus4-zib/files/1029/Berthold_Primal_Heuristics_For_Mixed_Integer_Programs.pdf.

[6] Cornuéjols, G. Valid inequalities for mixed integer linear programs. Mathematical Programming B, Vol. 112, pp. 3–44, 2008.

[7] Hojny, C., Pfetsch, M. E., Schmitt, A. Extended formulations for column-constrained orbitopes. TU Darmstadt, Department of Mathematics, Research Group Optimization, Dolivostr. 15, 64293 Darmstadt, June 2017. Available at https://optimization-online.org/2017/06/6092/.

[8] Danna, E., Rothberg, E., Le Pape, C. Exploring relaxation induced neighborhoods to improve MIP solutions. Mathematical Programming, Vol. 102, issue 1, pp. 71–90, 2005.

[9] Hendel, G. New Rounding and Propagation Heuristics for Mixed Integer Programming. Bachelor's thesis at Technische Universität Berlin, 2011. PDF available at https://opus4.kobv.de/opus4-zib/files/1332/bachelor_thesis_main.pdf.

[10] Mészáros C., and Suhl, U. H. Advanced preprocessing techniques for linear and quadratic programming. OR Spectrum, 25(4), pp. 575–595, 2003.

[11] Nemhauser, G. L. and Wolsey, L. A. Integer and Combinatorial Optimization. Wiley-Interscience, New York, 1999.

[12] Pfetsch, M. E. and Rehn, T. A computational comparison of symmetry handling methods for mixed integer programs. Mathematical Programming Computation (2019) 11:37–93. Available at https://mpc.zib.de/archive/2019/1/Pfetsch-Rehn2019_Article_AComputationalComparisonOfSymm.pdf.

[13] Wolsey, L. A. Integer Programming. Wiley-Interscience, New York, 1998.