The University of Melbourne

School of Mathematics and Statistics

MAST 20018 - Discrete Maths and Operations Research

A/Prof. Alysson M. Costa

School of Mathematics and Statistics

Office 149 - Peter Hall Building

Last updated: October 18, 2022

Contents

1 Introduction to Operations Research

2 Modelling in Linear Programming

3 The geometry of Linear Programming

4 The algebra of Linear Programming

5 Solving Linear Programs

6 The simplex tableau

7 Duality theory

8 Sensitivity analysis


Chapter 1

Introduction to Operations Research


Operations Research

“In a nutshell, operations research (O.R.) is the discipline of applying advanced analytical methods to help make better decisions.” (INFORMS, What O.R. Is)

Figure 1.1: The Operations Research approach [4]


Applications of Operations Research

Many applications can be found in areas such as:

Figure 1.2: Applications - Slide by Gurobi ©


Jobs in Operations Research

Given the multi-disciplinary nature of Operations Research, jobs in the area cover a number of different profiles. Overall, a

very employable student will have good coding, communication and mathematical skills. I have former students who have

worked at (focusing on Australian Businesses):

• Accenture

• AECOM

• Argon & Co

• Australian Department of Defence

• Biarri

• Cardinal Operations

• CSIRO

• INESC TEC - Institute for Systems and Computer Engineering, Technology and Science

• Intelligent Decision Support - Opturion Pty

• Neighbourlytics

• Scada Data Analytics

• Singapore-MIT Alliance for Research & Technology

• Transport Safety Victoria

• Victorian Centre for Data Insights

• Victoria Police

• Workforce analytics

• YPC Technologies

For further opportunities, see Careers in Mathematics and Statistics.


Branches of Operations Research

• Mathematical Programming,

• Dynamic Programming,

• Simulation,

• Heuristics and metaheuristics

• Stochastic Processes / Queuing Theory,

• Inventory Theory,

• Graph Theory,

• etc.

Figure 1.3: Holistic illustration of the disciplines and problems related to operations research. Image by Alex Elkjaer

Vasegaard


Mathematical Programming

Modelling and solving a problem using mathematics.

Figure 1.4: Subfields of Mathematical Optimisation: Retrieved from Wikipedia


Choosing the right model

Decision-making models are mostly useful when they can be solved. Expressing a model in a standard modelling framework allows it to be solved with generic algorithms developed for that framework.

The choice of the modelling framework is often guided by the capacity of solution algorithms to solve realistic instances of

a problem.

Figure 1.5: Algorithms shed light on models


Linear Programming

Linear programming is a modelling framework in which the constraints and the objective function are linear expressions in continuous decision variables.

z = min cᵀx
s.t.
Ax ≥ b,
x ≥ 0

Figure 1.6: Pictorial representation of a 2D Linear Program. Figure from Hartman-Baker et al.


Find the cheapest diet that supplies the daily requirements of 2000 calories, 55 g of protein and 800 mg of calcium.

Food             Serving size   Energy (Cal)   Protein (g)   Calcium (mg)   Price per serving ($)
Oatmeal          28 g           110            4             2              0.30
Chicken          100 g          205            32            12             2.40
Eggs             2 large        160            13            54             1.30
Whole milk       250 ml         160            8             285            0.90
Cherry pie       170 g          420            4             22             0.20
Pork and beans   260 g          260            14            80             1.90

Example: The diet problem

Data:

• F: set of foods,
  c_f: cost per serving of food f ∈ F,

• N: set of nutrients,

• l_n: minimum required level of nutrient n ∈ N,

• u_n: maximum allowable level of nutrient n ∈ N,

• a_nf: amount of nutrient n ∈ N in food f ∈ F.

Model:

z = min ∑_{f∈F} c_f x_f

s.t.

∑_{f∈F} a_nf x_f ≥ l_n,  n ∈ N,

∑_{f∈F} a_nf x_f ≤ u_n,  n ∈ N,

x_f ≥ 0,  f ∈ F.

Solving the problem

import gurobipy as gp
from gurobipy import GRB

# DATA
Food, cal, protein, calcium, price = gp.multidict({
    'oatmeal': [110, 4, 2, .30],
    'chicken': [205, 32, 12, 2.40],
    'egg':     [160, 13, 54, 1.30],
    'milk':    [160, 8, 285, .90],
    'pie':     [420, 4, 22, .20],
    'pork':    [260, 14, 80, 1.90]})

r = {
    'cal': 2000,
    'protein': 55,
    'calcium': 800}

# MODEL
m = gp.Model("diet")
x = m.addVars(Food, name='x')
calCons = m.addConstr(gp.quicksum(cal[f]*x[f] for f in Food) >= r['cal'])
protCons = m.addConstr(gp.quicksum(protein[f]*x[f] for f in Food) >= r['protein'])
calcCons = m.addConstr(gp.quicksum(calcium[f]*x[f] for f in Food) >= r['calcium'])
m.setObjective(gp.quicksum(price[f]*x[f] for f in Food), GRB.MINIMIZE)

# Analysis
m.optimize()

# Display solution
print('SOLUTION:')
for v in m.getVars():
    if v.x > 1e-6:
        print("  ", v.varName, v.x)

# Display optimal solution value
print('  Cost: ', m.objVal)

# Requirements provided
print("  Calories: ", sum(x[f].x*cal[f] for f in Food))
print("  Protein : ", sum(x[f].x*protein[f] for f in Food))
print("  Calcium : ", sum(x[f].x*calcium[f] for f in Food))

Implementation


Set parameter Username

Academic license - for non-commercial use only - expires 2022-08-28

Gurobi Optimizer version 9.5.0 build v9.5.0rc5 (mac64[x86])

Thread count: 4 physical cores, 8 logical processors, using up to 8 threads

Optimize a model with 3 rows, 6 columns and 18 nonzeros

Model fingerprint: 0x1ed81c8f

Coefficient statistics:

Matrix range [2e+00, 4e+02]

Objective range [2e-01, 2e+00]

Bounds range [0e+00, 0e+00]

RHS range [6e+01, 2e+03]

Presolve removed 0 rows and 4 columns

Presolve time: 0.00s

Presolved: 3 rows, 2 columns, 6 nonzeros

Iteration Objective Primal Inf. Dual Inf. Time

0 0.0000000e+00 3.950000e+02 0.000000e+00 0s

2 3.7821577e+00 0.000000e+00 0.000000e+00 0s

Solved in 2 iterations and 0.00 seconds (0.00 work units)

Optimal objective 3.782157676e+00

SOLUTION:

x[milk] 2.0643153526970957

x[pie] 9.62136929460581

Cost: 3.7821576763485485

Calories: 4371.265560165975

Protein : 55.0

Calcium : 800.0

[Finished in 0.5s]


MAST 20018 - Unimelb Handbook

Learning outcomes:

• L1 Comprehend the essential features of problems encountered in Operations Research investigations.

• L2 Develop basic skills required to construct formal mathematical models for practical optimisation problems, and

those required to analyse settings from real-world applications;

• L3 Appreciate the extent and limitations of a number of Operations Research techniques for solving real-world

problems.

Generic skills:

• GS1 Problem-solving skills: the ability to engage with unfamiliar problems and identify relevant solution strategies;

• GS2 Analytical skills: the ability to construct and express logical arguments and to work in abstract or general terms

to increase the clarity and efficiency of analysis;

• GS3 Collaborative skills: the ability to work in a team; and

• GS4 Time-management skills: the ability to meet regular deadlines while balancing competing commitments.


MAST 20018 - Syllabus (Operations Research)

1. Mathematical modelling.

2. Linear Programming.

• Algebra of linear programming.

• The simplex method.

• Duality theory.


Chapter 2

Modelling in Linear Programming


The right level of simplification

Finding the right level of simplification is key in modelling.

—————–

On Exactitude in Science - Jorge Luis Borges, Collected Fictions, translated by Andrew Hurley [1].

...In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a

City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the

Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point

with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that

that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and

Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in

all the Land there is no other Relic of the Disciplines of Geography.

Suárez Miranda, Viajes de varones prudentes, Libro IV, Cap. XLV, Lérida, 1658.

—


Ceci n’est pas une pipe

We model nature and society to understand how they work. Modelling is the art of interpreting and representing reality.

Figure 2.1: René Magritte - The Treachery of Images


Modelling frameworks

Nonlinear Programming

Both constraints and objective function can be nonlinear expressions on the decision variables.

z = min f(x)
s.t.
g(x) ≤ 0,
h(x) = 0,
x ∈ Rⁿ.

Figure 2.2: f(x, y) = sin(x) · cos(y)


Linear Programming

The constraints and objective function are linear expressions in continuous decision variables.

z = min cᵀx
s.t.
Ax ≥ b,
x ≥ 0

Figure 2.3: Vertices and simplex path [2]


Mixed-Integer Programming

The constraints and objective function are linear expressions in continuous and/or discrete decision variables.

z = min cᵀx + dᵀy
s.t.
Ax + By ≥ b,
x ≥ 0, x ∈ Rⁿ,
y ≥ 0, y ∈ Zᵖ

If all variables are binary (y ∈ {0, 1}ᵖ), we have Binary Programming.

If all variables are integer (y ∈ Zᵖ), we have Integer Programming.

Figure 2.4: IP polytope with LP relaxation. Reprinted from Wikimedia Commons, 2010


Modelling and solving problems computationally

There are many linear programming solvers available both commercially and free for academic use.

Examples of commercial solvers (also with free academic licences):

• CPLEX,

• Gurobi,

• FICO Xpress

Examples of open source solvers:

• Cbc,

• GLPK.

Models are often implemented with mathematical programming languages or packages such as AMPL, Julia/JuMP, or gurobipy. Common mathematical software such as MATLAB also has packages for solving linear programs. Microsoft Excel's Solver add-in also solves linear (integer) and nonlinear optimisation problems.


Modelling in Linear Programming


The company Dovetail produces two kinds of matches: long and short ones. The company makes a profit of 3 (x$1,000)

for every 100,000 boxes of long matches, and 2 (x$1,000) for every 100,000 boxes of short matches. The company has

one machine that can produce both long and short matches, with a total of at most 9 (x100,000) boxes per year. For

the production of matches the company needs wood and boxes: three cubic meters of wood are needed for 100,000

boxes of long matches, and one cubic meter of wood is needed for 100,000 boxes of short matches. The company has

18 cubic meters of wood available for the next year. Moreover, Dovetail has 7 (x100,000) boxes for long matches, and

6 (x100,000) for short matches available at its production site. The company wants to maximize its profit in the next

year. It is assumed that Dovetail can sell any amount it produces.

Example: The Dovetail problem [3]

Variables:

x_1 ≥ 0: number of 100,000 boxes of long matches produced,
x_2 ≥ 0: number of 100,000 boxes of short matches produced.

Model:

Let z be the maximum profit the company can obtain given such technological and resource constraints.

z = max 3x_1 + 2x_2   (2.1)
s.t.
x_1 + x_2 ≤ 9,   (2.2)
3x_1 + x_2 ≤ 18,   (2.3)
x_1 ≤ 7,   (2.4)
x_2 ≤ 6,   (2.5)
x_1, x_2 ≥ 0.
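The Dovetail model is small enough to check without a solver: every candidate optimum lies at the intersection of two constraint boundaries, so we can enumerate those intersections, discard the infeasible ones, and take the best. A minimal pure-Python sketch, with the constraint data transcribed from (2.2)-(2.5) above:

```python
from itertools import combinations

# Dovetail constraints written as a*x1 + b*x2 <= r
# (non-negativity included as -x1 <= 0 and -x2 <= 0).
cons = [
    (1, 1, 9),    # machine capacity (2.2)
    (3, 1, 18),   # wood (2.3)
    (1, 0, 7),    # boxes for long matches (2.4)
    (0, 1, 6),    # boxes for short matches (2.5)
    (-1, 0, 0),   # x1 >= 0
    (0, -1, 0),   # x2 >= 0
]

def boundary_intersection(c1, c2):
    """Solve the 2x2 system given by two boundaries (None if parallel)."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    return all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in cons)

vertices = [p for c1, c2 in combinations(cons, 2)
            if (p := boundary_intersection(c1, c2)) is not None and feasible(p)]

best = max(vertices, key=lambda p: 3 * p[0] + 2 * p[1])
print(best, 3 * best[0] + 2 * best[1])  # (4.5, 4.5) 22.5
```

The optimum z = 22.5 at (x_1, x_2) = (4.5, 4.5) sits where the machine and wood constraints intersect, a preview of the extreme-point results developed in the geometry chapter.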


Adelaide has two water catchment storage facilities. Storage 1 can store up to 400 megalitres per day. Storage 2 can

store up to 500 megalitres per day. Three secondary dams are supplied from these two facilities: Barossa needs at least

300 megalitres per day, Happy Valley needs at least 200 megalitres per day and Kangaroo Creek needs at least 400

megalitres per day.

The distances between the storage facilities and the secondary dams are (in kilometres):

                      Dam 1   Dam 2   Dam 3
Storage Facility 1    75      40      85
Storage Facility 2    20      55      75

Formulate a linear program that minimises the total transportation distance required to meet the demands of the secondary dams without violating the capacities of the storage facilities.

A transportation problem

• x_sd - quantity transported from storage s to dam d.

min 75x_11 + 40x_12 + 85x_13 + 20x_21 + 55x_22 + 75x_23

such that

x_11 + x_12 + x_13 ≤ 400
x_21 + x_22 + x_23 ≤ 500
x_11 + x_21 ≥ 300
x_12 + x_22 ≥ 200
x_13 + x_23 ≥ 400

x_11, x_12, x_13, x_21, x_22, x_23 ≥ 0.
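Before handing the model to a solver, it is easy to sanity-check a candidate shipping plan by hand. The plan below is hand-picked for illustration only (the solver implementation that follows finds the best one); the sketch verifies feasibility and computes the plan's cost:

```python
# Data from the statement: distances (km), capacities and demands (ML/day).
dist = {(1, 1): 75, (1, 2): 40, (1, 3): 85,
        (2, 1): 20, (2, 2): 55, (2, 3): 75}
cap = {1: 400, 2: 500}
dem = {1: 300, 2: 200, 3: 400}

# A hand-picked candidate plan x[storage, dam] (not claimed optimal here).
x = {(1, 2): 200, (1, 3): 200, (2, 1): 300, (2, 3): 200}

for s in cap:    # each storage ships within its capacity
    assert sum(q for (s2, d), q in x.items() if s2 == s) <= cap[s]
for d in dem:    # each dam receives at least its demand
    assert sum(q for (s2, d2), q in x.items() if d2 == d) >= dem[d]

cost = sum(dist[k] * q for k, q in x.items())
print(cost)  # 46000
```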


import gurobipy as gp
from gurobipy import GRB

# DATA
Storages = ["Storage 1", "Storage 2"]
Damns = ["Baroosa", "Kangaroo Creek", "Happy Valley"]

Cap = {
    "Storage 1": 400,
    "Storage 2": 500}

Dem = {
    "Baroosa": 300,
    "Kangaroo Creek": 400,
    "Happy Valley": 200}

dist = {
    ("Storage 1", "Baroosa"): 75,
    ("Storage 1", "Happy Valley"): 40,
    ("Storage 1", "Kangaroo Creek"): 85,
    ("Storage 2", "Baroosa"): 20,
    ("Storage 2", "Happy Valley"): 55,
    ("Storage 2", "Kangaroo Creek"): 75,
}

# MODEL
m = gp.Model("transp")
x = m.addVars(Storages, Damns, name='x')
demCons = m.addConstrs(gp.quicksum(x[s, d] for s in Storages) >= Dem[d] for d in Damns)
supCons = m.addConstrs(gp.quicksum(x[s, d] for d in Damns) <= Cap[s] for s in Storages)
m.setObjective(gp.quicksum(dist[s, d]*x[s, d] for s in Storages for d in Damns))

# Analysis
m.optimize()
print('SOLUTION:')
print('  Cost: ', m.objVal)
for v in m.getVars():
    if v.x > 1e-6:
        print("  ", v.varName, v.x)

Implementation


The classical transportation problem

Data:

• S - set of origins,
  cap_s - capacity of origin s ∈ S,

• D - set of destinations,
  dem_d - demand of destination d ∈ D,

• dist_sd - distance between origin s ∈ S and destination d ∈ D.

Variables:

• x_sd - quantity transported between s ∈ S and d ∈ D.

Model:

min ∑_{s∈S} ∑_{d∈D} dist_sd x_sd,

such that

∑_{s∈S} x_sd ≥ dem_d,  d ∈ D,

∑_{d∈D} x_sd ≤ cap_s,  s ∈ S,

x ∈ R^{S×D}_+.


import gurobipy as gp
from gurobipy import GRB
import random

random.seed(1)

# DATA
ns = 10  # number of 'storages'
nd = 50  # number of 'damns'
Storages = range(ns)
Damns = range(nd)

cap = {}
for s in Storages:
    cap[s] = random.randint(100, 1000)

dem = {}
for d in Damns:
    dem[d] = random.randint(10, 100)

dist = {}
for s in Storages:
    for d in Damns:
        dist[s, d] = random.randint(10, 100)

# MODEL
m = gp.Model("transp")
x = m.addVars(Storages, Damns, name='x')
demCons = m.addConstrs(gp.quicksum(x[s, d] for s in Storages) >= dem[d] for d in Damns)
supCons = m.addConstrs(gp.quicksum(x[s, d] for d in Damns) <= cap[s] for s in Storages)
m.setObjective(gp.quicksum(dist[s, d]*x[s, d] for s in Storages for d in Damns))

m.optimize()
print('SOLUTION:')
print('  Cost: ', m.objVal)

Implementation


A steel company must decide how to allocate next week’s time on a rolling mill, which is a machine that takes slabs

of steel and can produce either bands (at 200 tonnes/hour) or coils (at 140 tonnes/hour). Bands and coils can be sold

for $25/tonne and $30/tonne respectively. Based on currently booked orders, the company must produce at least 6000

tonnes of bands and 4000 tonnes of coils. Given that there are 40 hours of production time this week, how many tonnes

of bands and coils should be produced to yield the greatest revenue?

Production planning:

• x_p - tonnes of product p produced, p ∈ {bands, coils}.

max 25x_bands + 30x_coils

such that

x_bands/200 + x_coils/140 ≤ 40,
x_bands ≥ 6000,
x_coils ≥ 4000.


import gurobipy as gp
from gurobipy import GRB

# DATA
sellprice = {
    "bands": 25,
    "coils": 30,
}

# MODEL
m = gp.Model("prod")
b = m.addVar(name='b')
c = m.addVar(name='c')
m.addConstr(b/200 + c/140 <= 80)
m.addConstr(b >= 6000)
m.addConstr(c >= 4000)
m.setObjective(sellprice['bands']*b + sellprice['coils']*c, GRB.MAXIMIZE)

# Analysis
m.optimize()
print('SOLUTION:')
print('  Profit: ', m.objVal)
for v in m.getVars():
    if v.x > 1e-6:
        print("  ", v.varName, v.x)

# Checking
print("Capacity constraint lhs: ", b.x/200 + c.x/140)

Implementation


A generic production problem

Data:

I - set of items,
d_i - minimum demand of item i ∈ I,
p_i - profit per unit sold of item i ∈ I,
J - set of resources,
c_j - capacity of resource j ∈ J,
a_ij - amount of resource j ∈ J used in the production of item i ∈ I.

Variables:

x_i - amount produced of item i ∈ I.

Model:

max z = ∑_{i∈I} p_i x_i

subject to

∑_{i∈I} a_ij x_i ≤ c_j,  j ∈ J,

x_i ≥ d_i,  i ∈ I.


import gurobipy as gp
from gurobipy import GRB
import random

random.seed(1)

# DATA
n = 100  # number of 'products'
nr = 10  # number of 'resources'
I = range(n)
J = range(nr)

cap = {}
for j in J:
    cap[j] = random.randint(1000, 2000)

dem = {}
for i in I:
    dem[i] = random.randint(1, 5)

a = {}
p = {}
for i in I:
    p[i] = random.randint(5, 10)
    for j in J:
        a[i, j] = random.randint(1, 3)

# MODEL
m = gp.Model("production")
x = m.addVars(I, name='x', lb=dem)
capCons = m.addConstrs(gp.quicksum(a[i, j]*x[i] for i in I) <= cap[j] for j in J)
m.setObjective(gp.quicksum(p[i]*x[i] for i in I), GRB.MAXIMIZE)

m.optimize()
print('  Profit: ', m.objVal)
for v in m.getVars():
    if v.x > 1e-6:
        print("  ", v.varName, v.x)

# Checking
for j in J:
    print(f"Capacity constraints {j}: {sum(a[i, j]*x[i].x for i in I)} <= {cap[j]}")

Implementation

Binaries, integers...

Mixed-integer programming is built on the shoulders of linear programming: all the more reason to understand linear programming well.

z = min cᵀx + dᵀy
s.t.
Ax + By ≥ b,
x ≥ 0, x ∈ Rⁿ,
y ≥ 0, y ∈ Zᵖ


Chapter 3

The geometry of Linear Programming

Definitions and 2D visualisation

In this section, we will define some basic concepts while using the 2D space to gain insight on their meaning before moving

to higher dimensions.

For an LP problem with n variables, a vector x ∈ Rⁿ is called a feasible solution if it satisfies all constraints of the problem.

Feasible solution

The set of all feasible solutions of an LP problem is called the feasible region.

Feasible region


Consider the problem

z* := max z = 4x_1 + 3x_2

subject to

2x_1 + x_2 ≤ 40 (production machine time)
x_1 + x_2 ≤ 30 (production packaging time)
x_1 ≤ 15 (market demand)
x_1 ≥ 0
x_2 ≥ 0.

What is the feasible region?

Example: Drawing the feasible region in 2D


The geometry in higher dimensions

We are interested in studying problems that often have thousands (if not millions) of variables, so a 2D visualisation cannot take us very far. We need to develop algebraic structures that allow us to 'see' in higher dimensions.

Let x^(1), ..., x^(k) be a collection of points in Rⁿ.

A point x such that

x = λ_1 x^(1) + λ_2 x^(2) + ... + λ_k x^(k)

is a linear combination of x^(1), ..., x^(k).

Linear combination

A point x such that

x = λ_1 x^(1) + λ_2 x^(2) + ... + λ_k x^(k),

where 0 ≤ λ_i ≤ 1 and ∑_i λ_i = 1,

is a convex combination of x^(1), ..., x^(k).

Definition: Convex combination


The line segment joining two points a and b in Rⁿ is given by all points in the convex combination of a and b.

Example: Line segment

A point x in the line segment between a and b is given by:

x = λa + (1 − λ)b

for some 0 ≤ λ ≤ 1.

This follows the definition above with λ_1 = λ, λ_2 = (1 − λ).

Figure 3.1: A line segment connecting points a and b in the plane.
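The parameterisation above is easy to compute directly. A small sketch, with two points in R² chosen arbitrarily for illustration:

```python
# Two points in R^2, chosen arbitrarily for illustration.
a, b = (1.0, 2.0), (5.0, 4.0)

def convex_comb(lam, p, q):
    """Return lam*p + (1 - lam)*q componentwise, for 0 <= lam <= 1."""
    return tuple(lam * pi + (1 - lam) * qi for pi, qi in zip(p, q))

print(convex_comb(1.0, a, b))  # (1.0, 2.0): lambda = 1 gives a
print(convex_comb(0.0, a, b))  # (5.0, 4.0): lambda = 0 gives b
print(convex_comb(0.5, a, b))  # (3.0, 3.0): the midpoint
```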


A set H ⊆ Rⁿ is a hyperplane if H = {x ∈ Rⁿ | aᵀx = b} for some nonzero a ∈ Rⁿ and some b ∈ R.

Hyperplanes

A set H′ ⊆ Rⁿ is a (closed) halfspace if H′ = {x ∈ Rⁿ | aᵀx ≥ b} for some nonzero a ∈ Rⁿ and some b ∈ R.

Halfspaces

2x_1 + 3x_2 = 5 is a hyperplane in R²;
2x_1 + 3x_2 ≤ 5 is a halfspace in R².

Example:

A subset C of Rⁿ is convex if, for any two points x, y ∈ C, any point in the convex combination of x and y is also in C.

Definition: Convex set


Hyperplanes and their half-spaces are convex sets.

Theorem: Hyperplanes and halfspaces define convex sets

Proof:


Let C_1, . . . , C_k be a collection of convex sets. Then

⋂_{i=1}^{k} C_i

is a convex set.

That is, the intersection of any finite number of convex sets is a convex set.

Theorem: The intersection of convex sets is convex

Proof:
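As a numerical sanity check (not a proof), we can sample points in the intersection of a few halfspaces and confirm that convex combinations of them stay inside. The halfspaces below are an arbitrary bounded example:

```python
import random

random.seed(0)

# Four halfspaces a*x + b*y <= r whose intersection is a bounded convex
# region (an arbitrary example).
halfspaces = [(1, 1, 9), (3, 1, 18), (-1, 0, 0), (0, -1, 0)]

def inside(p):
    return all(a * p[0] + b * p[1] <= r + 1e-9 for a, b, r in halfspaces)

# Rejection-sample 50 points of the intersection.
pts = []
while len(pts) < 50:
    p = (random.uniform(0, 9), random.uniform(0, 9))
    if inside(p):
        pts.append(p)

# Every convex combination of two sampled points must stay inside.
ok = all(inside(((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1]))
         for p in pts for q in pts for t in (0.25, 0.5, 0.75))
print(ok)  # True
```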


A polytope is a set that can be expressed as the intersection of a finite number of closed half-spaces.

Definition: Polytopes

Show that polytopes are convex sets.

Example:


The graphical method for 2D LPs

We can use a graphical method to solve LP problems in two dimensions. Different level curves of the objective function can

be obtaibed by = , for fixed values of .

Figure 3.2: Level curves

The graphical method looks for the point in the feasible region that touches the level curve with maximum value of .


Solve the problem with the graphical method.

max z = 3x + 2y
s.t.
2x + y ≤ 10
x + y ≤ 8
x, y ≥ 0

Example: Graphical Solution method

• Profit z = constant, i.e. 3x + 2y = k, defines a line, called an iso-profit line.

• Changing the constant moves the line, but all iso-profit lines are parallel.

• Increasing the constant moves the line in the direction of improving profit.

• Increase the constant until the iso-profit line no longer intersects the feasible region. The highest constant value for

which the iso-profit line intersects the feasible region must be the optimal value, and points in the intersection are

optimal solutions.
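The iso-profit sweep can be mimicked numerically by brute force: scan a fine grid of the feasible region and keep the largest value of z = 3x + 2y reached. The grid step is an arbitrary choice; a rough sketch:

```python
# Mimic the iso-profit sweep by brute force: scan a fine grid of the
# box [0, 10] x [0, 10], keep feasible points, and track the largest
# z = 3x + 2y reached (the grid step 0.025 is an arbitrary choice).
best_k, best_pt = float('-inf'), None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x, y = 10 * i / steps, 10 * j / steps
        if 2 * x + y <= 10 and x + y <= 8:
            k = 3 * x + 2 * y
            if k > best_k:
                best_k, best_pt = k, (x, y)
print(best_pt, best_k)  # (2.0, 6.0) 18.0
```

The grid point pushed furthest along the iso-profit direction, (2, 6) with z = 18, matches the graphical answer at the intersection of the two constraint lines.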


For any LP, one of the following three statements is true:

1. The LP is infeasible.

2. The optimal value of the LP is ∞ (i.e., the LP does not have a bounded optimum).

3. A basic feasible solution exists that achieves the optimal value.

Consider the problem

z* := max z = 4x_1 + 3x_2

2x_1 + x_2 ≤ 40
x_1 + x_2 ≤ 30
x_1 ≤ 15
4x_1 + 3x_2 ≥ 150
x_1 ≥ 0
x_2 ≥ 0.

Infeasible LP

Figure 3.3: Infeasible region.

The constraints have an empty intersection: no feasible solution exists.

Consider the problem

z* := max z = 4x_1 + 3x_2

4x_1 + 3x_2 ≥ 101
x_1 ≥ 0
x_2 ≥ 0.

Unbounded and optimal LPs

In this case the feasible region is unbounded. The objective function increases as we ‘head further into the unbounded region’; we say that the problem is unbounded and no optimal solution exists.

Depending on the objective function, an optimisation problem over an unbounded region can still have an optimal solution. For example, consider max z = −4x_1 − 3x_2.


Figure 3.4: Multiple optima.


Chapter 4

The algebra of Linear Programming

max z = ∑_{j=1}^{n} c_j x_j

a_11 x_1 + a_12 x_2 + ... + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n ≤ b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ≤ b_m

x_j ≥ 0, j = 1, ..., n

with b_1, b_2, ..., b_m ≥ 0.

Standard form of a LP

max z = ∑_{j=1}^{n} c_j x_j

a_11 x_1 + a_12 x_2 + ... + a_1n x_n + x_{n+1} = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n + x_{n+2} = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n + x_{n+m} = b_m

x_j ≥ 0, j = 1, ..., n + m

with b_1, b_2, ..., b_m ≥ 0.

Canonical form of a LP


Write the model below in canonical form:

z = max 3x_1 + 2x_2
s.t.
x_1 + x_2 ≤ 9,
3x_1 + x_2 ≤ 18,
x_1 ≤ 7,
x_2 ≤ 6,
x_1, x_2 ≥ 0.

Example


General procedure for transforming any LP problem to canonical form

1. For a minimisation problem, convert it to a maximisation problem by multiplying the objective function by −1.

2. For each negative RHS coefficient b_i < 0, multiply the corresponding functional constraint by −1 (and change the direction of the inequality).

3. For each variable x_j which is unrestricted in sign, introduce new variables x_j^(1), x_j^(2) ≥ 0 and replace x_j by x_j^(1) − x_j^(2).

4. For each “≥" functional constraint, introduce a surplus variable so as to obtain a “=" constraint.

5. For each “=" functional constraint, introduce an artificial variable (we will learn how to deal with these intruders later).

6. For each “≤" functional constraint, introduce a slack variable.

7. Now the problem is in canonical form, whose basic variables are the slack and artificial variables.
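Steps 1, 2 and 4-6 are mechanical enough to script. The sketch below uses an ad hoc data layout chosen purely for illustration (sense, objective coefficients, and rows as (coeffs, relation, rhs)); it adds one extra column per constraint (surplus −1, slack or artificial +1) and omits the free-variable split of step 3 and the additional artificials that “≥" rows need:

```python
# Ad hoc LP container: sense is 'min' or 'max', c holds objective
# coefficients, rows is a list of (coeffs, rel, rhs) with rel one of
# '<=', '>=', '='.  The layout and names are illustrative assumptions.

def to_canonical(sense, c, rows):
    c = list(c)
    if sense == 'min':                    # step 1: maximise -z instead
        c = [-v for v in c]
    fixed = []
    for a, rel, b in rows:
        a = list(a)
        if b < 0:                         # step 2: make the RHS nonnegative
            a = [-v for v in a]
            b = -b
            rel = {'<=': '>=', '>=': '<=', '=': '='}[rel]
        fixed.append((a, rel, b))
    m = len(fixed)
    eqs = []
    for i, (a, rel, b) in enumerate(fixed):
        extra = [0.0] * m                 # steps 4-6: one new column per row
        extra[i] = -1.0 if rel == '>=' else 1.0
        eqs.append((a + extra, b))
    return c + [0.0] * m, eqs

# min z = x1 - x2 subject to x1 + x2 <= -5 (negative RHS on purpose)
c, eqs = to_canonical('min', [1, -1], [([1, 1], '<=', -5)])
print(c)    # [-1, 1, 0.0]
print(eqs)  # [([-1, -1, -1.0], 5)]
```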


Convert the problem to canonical form.

min z = x_1 − x_2

x_1 + x_2 ≤ −5
x_1 ≥ −10
x_2 = 3

x_1 ≥ 0, x_2 ∈ R

Example:


Matrix representation

z = max cᵀx
s.t.
Ax ≤ b,
x ≥ 0,

with

A =
[ 1 1 ]
[ 3 1 ]
[ 1 0 ]
[ 0 1 ]

x = (x_1, x_2)ᵀ,  b = (9, 18, 7, 6)ᵀ,  c = (3, 2)ᵀ.

After adding slack variables x_3, ..., x_6, the canonical form is

z = max cᵀx
s.t.
[A, I] x = b,
x ≥ 0,

where now x = (x_1, x_2, x_3, x_4, x_5, x_6)ᵀ and

[A, I] =
[ 1 1 1 0 0 0 ]
[ 3 1 0 1 0 0 ]
[ 1 0 0 0 1 0 ]
[ 0 1 0 0 0 1 ]

b = (9, 18, 7, 6)ᵀ,  c = (3, 2, 0, 0, 0, 0)ᵀ.


General matrix representations of SF and CF

A =
[ a_11 a_12 ... a_1n ]
[ a_21 a_22 ... a_2n ]
[  ...               ]
[ a_m1 a_m2 ... a_mn ]

b = (b_1, b_2, ..., b_m)ᵀ,  c = (c_1, c_2, ..., c_n)ᵀ.

Standard form:

max z = cᵀx
Ax ≤ b
x ≥ 0

Canonical form:

max z = cᵀx
Ax + s = b,
x, s ≥ 0.


Solutions in the standard and in the canonical form have a one-to-one correspondence. To any solution x ∈ Rⁿ of

max z = cᵀx
Ax ≤ b
x ≥ 0

there corresponds an equivalent solution (x, s), with s = b − Ax, of

max z = cᵀx
Ax + s = b,
x, s ≥ 0.


We will often extend the definition of a model in canonical form to any problem with equality constraints in which an identity matrix can be found among the columns of the constraint matrix.

Canonical form of a LP (Revisited)

Show that the LP problem

max z = 2x_1 − 3x_2 + 4x_3 + x_5

2x_1 + 0x_2 + 1x_3 + 3x_4 + 0x_5 = 6
−1x_1 + 1x_2 + 0x_3 + 0x_4 + 0x_5 = 2
2x_1 + 0x_2 + 0x_3 + 6x_4 + 1x_5 = 2

x_1, x_2, x_3, x_4, x_5 ≥ 0

is in canonical form.

Example:

The LP is in canonical form because the 3rd, 2nd and 5th columns of the coefficient matrix are, respectively,

(1, 0, 0)ᵀ, (0, 1, 0)ᵀ, (0, 0, 1)ᵀ.

Note that (0, 2, 6, 0, 2) is a feasible solution to this LP problem.
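Both claims are quick to verify numerically:

```python
# Coefficient matrix, RHS and candidate solution from the example above.
A = [[2, 0, 1, 3, 0],
     [-1, 1, 0, 0, 0],
     [2, 0, 0, 6, 1]]
b = [6, 2, 2]
x = [0, 2, 6, 0, 2]

col = lambda j: [row[j] for row in A]
# Columns 3, 2 and 5 (1-indexed) are the identity columns.
print(col(2), col(1), col(4))  # [1, 0, 0] [0, 1, 0] [0, 0, 1]

# Feasibility: A x = b and x >= 0.
Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
print(Ax == b and min(x) >= 0)  # True
```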


The space defined by linear constraints Ax = b is convex.

What if we had ≥ constraints? What if we need variables to assume negative values? What if our objective function is a minimisation one?

Convexity of {x : Ax = b}

Proof: Assume x_1 and x_2 both satisfy the system of linear equations.

Consider a point x_3 in the convex combination of x_1 and x_2:

x_3 = λx_1 + (1 − λ)x_2, with 0 ≤ λ ≤ 1.

Therefore,

Ax_3 = A(λx_1 + (1 − λ)x_2)
     = λAx_1 + (1 − λ)Ax_2
     = λb + (1 − λ)b
     = b,

and therefore x_3 also respects the equalities, showing that a set of equality constraints defines a convex set.
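The algebra above can be spot-checked numerically; with two solutions of a single equation (A = [1 2], b = 4, chosen for illustration), every convex combination satisfies it exactly:

```python
# Two solutions of the single equation x1 + 2*x2 = 4, i.e. A = [1 2], b = 4.
p, q = (4.0, 0.0), (0.0, 2.0)

lhs = lambda v: v[0] + 2 * v[1]   # computes A x

vals = []
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    x3 = tuple(lam * u + (1 - lam) * w for u, w in zip(p, q))
    vals.append(lhs(x3))
print(vals)  # [4.0, 4.0, 4.0, 4.0, 4.0]
```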


A point x of a convex set C is said to be an extreme point or vertex of C if, whenever we write

x = λy + (1 − λ)z,

with y, z ∈ C and 0 < λ < 1, then x = y = z.

Geometrically, this means that there are no points y and z in C (different from x) with x lying on the line segment connecting y and z.

Extreme points

Figure 4.1: Feasible extreme points for a region

Three ways to define an extreme point [3]:

• The vector x0 ∈ F is called a vertex of F if and only if there are n independent hyperplanes in the collection {H_1, . . . , H_{n+m}} that intersect at x0.

• The vector x0 ∈ F is called a vertex of F if and only if there is a hyperplane H (not necessarily one of H_1, . . . , H_{n+m}) with corresponding halfspace H+ such that F ⊆ H+ and F ∩ H = {x0}.

• The vector x0 ∈ F is called a vertex of F if and only if there are no two distinct x′, x′′ ∈ F such that x0 = λx′ + (1 − λ)x′′ for some λ ∈ (0, 1).


If a linear programming problem has exactly one optimal solution, then this solution must be an extreme point of the feasible region.

Theorem: Extreme points and optimality

Proof: We shall prove this theorem by contradiction.

Assume, to the contrary, that the problem has exactly one optimal solution, call it x, that is not an extreme point of the feasible region. Since x is not an extreme point, there are two distinct feasible solutions, say u and v, which are not equal to x, and a scalar λ, 0 < λ < 1, such that

x = λu + (1 − λ)v.

If we rewrite the objective function in terms of u and v rather than x, we obtain

z(x) = z(λu + (1 − λ)v).

Hence

z(x) = ∑_{j=1}^{n} c_j x_j = ∑_{j=1}^{n} c_j [λu_j + (1 − λ)v_j] = λ ∑_{j=1}^{n} c_j u_j + (1 − λ) ∑_{j=1}^{n} c_j v_j = λz(u) + (1 − λ)z(v).

Now, because 0 < λ < 1, there are only three possibilities:

1. z(u) < z(x) < z(v)

2. z(v) < z(x) < z(u)

3. z(u) = z(x) = z(v)

Since x is an optimal solution, the first two possibilities cannot occur (why?). Thus, the third case must hold.

But this contradicts the assertion that x is the only optimal solution. Our initial assumption that x is not an extreme point must be wrong.

Prove that if a linear programming problem has more than one optimal solution, it must have infinitely many optimal

solutions. Furthermore, the set of optimal solutions is convex.

Multiple optima


Consider a problem in canonical form:

max z = ∑_{j=1}^{n} c_j x_j

a_11 x_1 + a_12 x_2 + ... + a_1n x_n + x_{n+1} = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n + x_{n+2} = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n + x_{n+m} = b_m

x_j ≥ 0, j = 1, ..., n + m

A basic feasible solution is a solution x_1, . . . , x_{n+m} that respects all constraints and has at least n components set at zero.

Basic Solutions


The LP

max 3x_1 + 4x_2

2x_1 + x_2 ≤ 4
x_1 + 2x_2 ≤ 3
x_1, x_2 ≥ 0,

is in standard form.

The corresponding canonical form is

max 3x_1 + 4x_2

2x_1 + x_2 + x_3 = 4
x_1 + 2x_2 + x_4 = 3
x_1, x_2, x_3, x_4 ≥ 0

The ‘trivial basic solution’, derived by setting the original variables to zero, is x = (0, 0, 4, 3). (What are the conditions for this trivial basic solution to be feasible?)

Find another basic solution.

Example: basic feasible solutions

Suppose we select x_2 and x_3 to be basic. Then, setting x_1 = x_4 = 0, the reduced system is

x_2 + x_3 = 4
2x_2 = 3.

This yields the basic feasible solution

x = (0, 3/2, 5/2, 0).


A collection of vectors x^(1), ..., x^(k) in Rⁿ is said to be linearly independent if

λ_1 x^(1) + λ_2 x^(2) + ... + λ_k x^(k) = 0

implies that

λ_1 = λ_2 = ... = λ_k = 0.

Linear independence

The rank of a matrix refers to the number of linearly independent rows or columns in the matrix.

Rank of a matrix

Consider a system with n variables and m constraints:

Ax = b
x ≥ 0

We often partition the system in the following format:

B_{m×m} (x_B)_{m×1} + N_{m×(n−m)} (x_N)_{(n−m)×1} = b_{m×1}

We choose m linearly independent columns of A to form B. This makes B a square matrix of full rank (invertible), and therefore the variables x_B can be written in terms of the variables x_N as:

x_B = B⁻¹b − B⁻¹N x_N.   (General solution of the system)

Basic and Non-basic matrices
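The general-solution formula can be exercised on the earlier example (basis {x_2, x_3} of the system 2x_1 + x_2 + x_3 = 4, x_1 + 2x_2 + x_4 = 3), using a hand-rolled 2x2 solve rather than a linear-algebra library:

```python
# Basis {x2, x3} of  2x1 + x2 + x3 = 4,  x1 + 2x2 + x4 = 3:
# B holds the columns of x2 and x3, N those of x1 and x4.
B = [[1, 1], [2, 0]]
N = [[2, 0], [1, 1]]
b = [4, 3]

def solve2(M, r):
    """Solve the 2x2 system M y = r via the explicit inverse."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * r[0] - M[0][1] * r[1]) / det,
            (M[0][0] * r[1] - M[1][0] * r[0]) / det]

xN = [0, 0]   # non-basic variables fixed at zero
rhs = [b[i] - sum(N[i][k] * xN[k] for k in range(2)) for i in range(2)]
xB = solve2(B, rhs)
print(xB)  # [1.5, 2.5], i.e. x = (0, 3/2, 5/2, 0)
```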


For the system Ax = b represented with basic and non-basic matrices B and N:

B_{m×m} (x_B)_{m×1} + N_{m×(n−m)} (x_N)_{(n−m)×1} = b_{m×1}

A solution is basic if (for a full-rank B) we set all x_N variables at zero. Moreover, if all components of x_B = B⁻¹b are ≥ 0, the solution is a basic feasible solution.

Basic solutions in matrix format

The problem:

max 3x_1 + 4x_2

2x_1 + x_2 + x_3 = 4
x_1 + 2x_2 + x_4 = 3
x_1, x_2, x_3, x_4 ≥ 0

can be represented as:

Ax = b,  x ≥ 0

for:

A =
[ 2 1 1 0 ]
[ 1 2 0 1 ]

x = (x_1, x_2, x_3, x_4)ᵀ,  b = (4, 3)ᵀ,  c = (3, 4, 0, 0)ᵀ.

Figure: the feasible region in the (x_1, x_2) plane, bounded by the axes x_1 = 0 and x_2 = 0 and the lines x_3 = 0 (that is, 2x_1 + x_2 = 4) and x_4 = 0 (that is, x_1 + 2x_2 = 3).


Trying all possible matrices B, with I_B = indices of the variables in B and I_N = indices of the variables in N (matrices written row by row, [row1; row2]):

• I_B = {1, 2}, I_N = {3, 4}: B = [2 1; 1 2], N = [1 0; 0 1]; for x_N = 0, x_B = B⁻¹b = (5/3, 2/3) (feasible basic).

• I_B = {1, 3}, I_N = {2, 4}: B = [2 1; 1 0], N = [1 0; 2 1]; for x_N = 0, x_B = B⁻¹b = (3, −2) (infeasible basic).

• I_B = {1, 4}, I_N = {2, 3}: B = [2 0; 1 1], N = [1 1; 2 0]; for x_N = 0, x_B = B⁻¹b = (2, 1) (feasible basic).

• I_B = {2, 3}, I_N = {1, 4}: B = [1 1; 2 0], N = [2 0; 1 1]; for x_N = 0, x_B = B⁻¹b = (3/2, 5/2) (feasible basic).

• I_B = {2, 4}, I_N = {1, 3}: B = [1 0; 2 1], N = [2 1; 1 0]; for x_N = 0, x_B = B⁻¹b = (4, −5) (infeasible basic).

• I_B = {3, 4}, I_N = {1, 2}: B = [1 0; 0 1], N = [2 1; 1 2]; for x_N = 0, x_B = B⁻¹b = (4, 3) (feasible basic).
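This enumeration is easy to automate. The sketch below hand-rolls the 2x2 inverse rather than using a linear-algebra library, and reproduces the classification above (four feasible bases, two infeasible ones):

```python
from itertools import combinations

# A x = b for 2x1 + x2 + x3 = 4, x1 + 2x2 + x4 = 3.
# Note: Python indices, so the basis {1, 2} of the text is IB == (0, 1).
A = [[2, 1, 1, 0], [1, 2, 0, 1]]
b = [4, 3]

results = {}
for IB in combinations(range(4), 2):
    B = [[A[i][j] for j in IB] for i in range(2)]
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    if abs(det) < 1e-12:
        continue                      # singular choice: not a basis
    xB = [(B[1][1] * b[0] - B[0][1] * b[1]) / det,
          (B[0][0] * b[1] - B[1][0] * b[0]) / det]
    results[IB] = ('feasible' if min(xB) >= 0 else 'infeasible', xB)

for IB, (status, xB) in sorted(results.items()):
    print(IB, status, [round(v, 4) for v in xB])
```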


Consider a system of n variables and m constraints in standard form: Ax ≤ b, x ≥ 0. A solution x ∈ Rⁿ of this system is an extreme point of the feasible region F = {x : Ax ≤ b, x ≥ 0} if and only if the solution with slack variables added, [x; s] ∈ R^{n+m} with s = b − Ax, is feasible for the problem in canonical form [A, I][x; s] = b and has at least n variables at zero.

Theorem: Basic solutions and extreme points

First we observe that for [x; s] to be feasible, we must have s = b − Ax.

Proof (if, ⇐): In this part we have to show that if [x; s] is a basic solution of the problem in canonical form, then x is an extreme point of the region F. Assume, by contradiction, that x is not an extreme point. Then x can be written as a convex combination of two distinct feasible points u and v:

x = λu + (1 − λ)v, with 0 < λ < 1.

By defining slack variables s_u = b − Au and s_v = b − Av, we obtain solutions [u; s_u] and [v; s_v] of the problem in canonical form.

Then:

s = b − Ax = b − A(λu + (1 − λ)v)
  = λ(b − Au) + (1 − λ)(b − Av)
  = λs_u + (1 − λ)s_v.

That is, the convex-combination relation is also valid for the slack variables, yielding:

[x; s] = λ[u; s_u] + (1 − λ)[v; s_v], with 0 < λ < 1.

By hypothesis, [x; s] is a basic feasible solution, so we can rearrange the components of [x; s] (and the columns of [A, I] accordingly) such that the last n components are those that are certainly at zero:

(x_1, . . . , x_m, 0, . . . , 0)ᵀ = λ (u_1, . . . , u_m, u_{m+1}, . . . , u_{m+n})ᵀ + (1 − λ) (v_1, . . . , v_m, v_{m+1}, . . . , v_{m+n})ᵀ.

With 0 < λ < 1 and all components nonnegative, we can conclude that the last n components of [u; s_u] and [v; s_v] must also be zero. Therefore, as both respect the system of equalities, with B the basis matrix formed by the first m columns:

B u_B = B v_B = b  ⇒  u_B = v_B = B⁻¹b,

which contradicts the hypothesis that u and v are distinct; therefore x is an extreme point.

Proof (only if, ⇒):


Consider the linear programming problem in canonical form

max z = ∑_{j=1}^{n+m} c_j x_j

a_11 x_1 + ... + a_{1,n+m} x_{n+m} = b_1
a_21 x_1 + ... + a_{2,n+m} x_{n+m} = b_2
...
a_m1 x_1 + ... + a_{m,n+m} x_{n+m} = b_m

x_j ≥ 0, j = 1, . . . , n + m

(a) If this problem has a feasible solution, then it must have a basic feasible solution.

(b) If this problem has an optimal solution, then it must have an optimal basic feasible solution.

Fundamental Theorem of Linear Programming

Proof - Part (a): Let x be a feasible solution. We need to prove that the problem has a basic feasible solution.

We partition x into a strictly positive part x_+, with p entries, and a zero part. Without loss of generality we may assume that this partition is x = [x_+, 0]. We partition A = [A_+, A_0] accordingly; then Ax = b becomes

[A_+, A_0] [x_+; 0] = b,

that is,

A_+ x_+ = b.

Case (1): The columns of A_+ are linearly independent.

In this case, p ≤ m, and either

• p = m, in which case x is a basic feasible solution by definition, or

• p < m, in which case we can add m − p further columns to make a linearly independent set of m columns.

In either case we obtain a basic feasible solution.

Case (2): The columns of A_+ are linearly dependent. In this case we will iteratively construct a new feasible solution such that the corresponding A_+ (for the new feasible solution) has independent columns, and then reduce to Case (1).

63

Since the columns of A_+ are linearly dependent, there is a non-zero vector w such that

A_+ w = 0.

Without loss of generality, we can assume w_j > 0 for at least one j (otherwise replace w by −w).

For a sufficiently small value of ε ≥ 0, the following solution is feasible:

A [x_+ − εw; 0] = A_+ (x_+ − εw) = A_+ x_+ − ε A_+ w = b

and, as long as ε ≥ 0 and x_+ ≥ εw (that is, ε ≤ x_j / w_j for each j with w_j > 0), (x_+ − εw, 0) is nonnegative.

Now choose

ε* = min { x_j / w_j : w_j > 0 }   (> 0).

Then (x_+ − ε*w, 0) is a feasible solution. Moreover, at least one component of x_+ − ε*w is zero, with the rest nonnegative.

Thus we have constructed a new feasible solution (x_+ − ε*w, 0) whose number of positive components is reduced by at least one from that of x. If the columns of the new A_+ (with respect to this new feasible solution) are linearly independent, the LP problem has a basic feasible solution by Case (1).

Otherwise, we replace x by (x_+ − ε*w, 0) and repeat the argument above. We then get a third feasible solution whose number of positive components is reduced by at least one from that of (x_+ − ε*w, 0). We continue like this, all the time reducing the number of positive components by at least one.

The process must terminate in a finite number of iterations with a feasible solution such that the columns of the corresponding A_+ are linearly independent. We then conclude by Case (1) that the LP problem has at least one basic feasible solution.

64

Part (b): Starting with an optimal feasible solution x, we can construct an optimal basic feasible solution in exactly the same way as in Part (a).

Case (1): The columns of A_+ are linearly independent. From the proof of Part (a) we see that x is an optimal basic feasible solution.

Case (2): The columns of A_+ are linearly dependent.

Using the notation from Part (a),

c^T [x_+ − εw; 0] = Σ_{j=1}^{p} c_j (x_j − ε w_j)
                  = Σ_{j=1}^{p} c_j x_j − ε Σ_{j=1}^{p} c_j w_j
                  = c^T x − ε Σ_{j=1}^{p} c_j w_j.

Since x is an optimal solution (that is, x achieves the maximum) and (x_+ − εw, 0) is a feasible solution for small ε > 0, we have

Σ_{j=1}^{p} c_j w_j ≥ 0.

Take an ε such that

0 < ε ≤ min { −x_j / w_j : w_j < 0 }.

(The right-hand side is defined as ∞ if there are no w_j that are negative.)

Following the logic of Case (2) of Part (a), we can show that (x_+ + εw, 0) is a feasible solution whose objective value is

c^T [x_+ + εw; 0] = Σ_{j=1}^{p} c_j (x_j + ε w_j) = c^T x + ε Σ_{j=1}^{p} c_j w_j

Since x is optimal and ε > 0, we must have

Σ_{j=1}^{p} c_j w_j ≤ 0.

Combining the two inequalities above, we have

Σ_{j=1}^{p} c_j w_j = 0,

which implies

c^T [x_+ − εw; 0] = c^T x.

In other words, the value of the objective function is the same at (x_+ − εw, 0) as at (x_+, 0).

Following the same recursive procedure as in Case (2) of Part (a), we can construct a basic feasible solution with the same objective value as x (and which is therefore optimal) such that the columns of the corresponding A_+ are linearly independent, reducing to Case (1).

65

Corollaries

• If the feasible region of the linear programming problem is not empty then it must have at least one extreme point.

• The feasible region possesses at most a finite number of extreme points (Can you suggest an upper bound?)

• If the linear programming problem possesses an optimal solution, then there is an optimal solution which is an extreme

point of the feasible region.

66

67

Chapter 5

Solving Linear Programs

Solution methods for Linear Programming

Brute force:

Test all intersections of constraints (possible extreme points). Check they are feasible. Calculate objective function value.

Choose the best.

Since extreme points of the feasible region correspond to basic feasible solutions, for an LP in canonical form with n + m variables and m constraints, there are at most

C(n + m, m) = (n + m)! / (n! m!)

extreme points.

For large n and m, this yields a very large number of possible extreme points. This is an example of the Curse of Dimensionality.

For example, for n = m = 50, the maximum number of extreme points is about 10^29.

It is impractical to find an optimal solution by enumerating all extreme points.

How efficient is the brute force method?
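As a quick check of these numbers, the binomial count and the brute-force method above can be sketched in Python. The tiny two-constraint LP and the Gaussian-elimination helper `solve` are illustrative choices of ours, not data from the notes.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Maximum number of candidate extreme points for n = m = 50: about 1.009e29.
print(comb(100, 50))

def solve(M, rhs):
    """Solve the square system M x = rhs by Gaussian elimination (exact)."""
    k = len(M)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = next(r for r in range(col, k) if T[r][col] != 0)  # StopIteration if singular
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(k):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][k] for r in range(k)]

# Brute force on a made-up LP in canonical form:
# max 3x1 + 2x2  s.t.  x1 + x2 + x3 = 4,  x1 + x4 = 3,  x >= 0.
A = [[1, 1, 1, 0],
     [1, 0, 0, 1]]
b = [4, 3]
c = [3, 2, 0, 0]

best = None
for basis in combinations(range(4), 2):          # every candidate basis
    B = [[A[i][j] for j in basis] for i in range(2)]
    try:
        xB = solve(B, b)
    except StopIteration:                        # singular basis matrix: skip
        continue
    if all(v >= 0 for v in xB):                  # basic solution is feasible
        z = sum(c[j] * v for j, v in zip(basis, xB))
        if best is None or z > best:
            best = z
print(best)   # 11, attained at (x1, x2) = (3, 1)
```

The enumeration is only viable for toy instances: the count C(100, 50) printed first is exactly the kind of number that makes the approach impractical.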

68

Interior point methods:

Ellipsoid algorithm: guaranteed convergence in a number of iterations polynomial in the number of variables and constraints.

Difficult to make computationally effective in practice.

Predictor-corrector methods: follow central path through the feasible region. Call nonlinear equation solver such as Newton’s

method to take next steps. Can be very effective in practice.

The simplex algorithm:

Start at one extreme point. Identify a "neighbouring" extreme point that improves the objective value. Move to that extreme point. Repeat until no improving move is possible.

Figure 5.1: Pictorial representation of simplex and interior point methods

69

Complexity

Interior point methods have been proven to have polynomial complexity. The Simplex Algorithm, in turn, is an exponential-time algorithm. This means that in the worst case the time taken to solve a problem of size n can grow exponentially with n.

However, we focus on the simplex method:

• In almost all practical cases, the Simplex Algorithm does not seem to perform anywhere nearly as badly as this worst

case bound. For example, the Simplex Algorithm can often solve practical problems with hundreds of thousands (if

not more) of constraints and variables.

• Simplex algorithms will have an important application in the solution of mixed-integer programs (see MAST 90014 -

Optimisation for Industry).

70

The simplex algorithm

Dantzig’s simplex algorithm starts at a feasible extreme point and at each step it finds a neighbouring feasible extreme point with a better objective value. If no neighbour with a better objective function value can be found, the current solution is optimal.

Algorithm 1 Simplex algorithm main ideas

1: Find a feasible extreme point.

2: If the point is optimal: stop!

3: Find a better neighbour extreme point and go to 2.

71

Step 2: Checking optimality

Assume that from Step 1 we have obtained a feasible extreme point with a partition A = [B | N]. Writing the system of equalities in terms of this partition, we obtain:

B x_B + N x_N = b

B is invertible and, therefore, the basic variables can be expressed in terms of the non-basic variables:

x_B = B^{-1} b − B^{-1} N x_N.   (5.1)

This is called the general solution of the system.

The cost of a solution is given by z = c^T x = c_B^T x_B + c_N^T x_N. Using the general solution of the system, this cost can be rewritten as:

z = c_B^T (B^{-1} b − B^{-1} N x_N) + c_N^T x_N

and, rearranging the terms:

z = c_B^T B^{-1} b + (c_N^T − c_B^T B^{-1} N) x_N

The first term c_B^T B^{-1} b is the value of the basic solution associated with this partition. The vector of coefficients multiplying the non-basic variables is called the vector of reduced costs. They capture the net change in the objective function of increasing each of the non-basic variables.

For a given partition A = [B | N], the vector of reduced costs associated with the non-basic variables is given by:

\bar{c}_N^T = c_N^T − c_B^T B^{-1} N

The reduced cost of a specific variable j ∈ N, with column A_j in A, is given by:

\bar{c}_j = c_j − c_B^T B^{-1} A_j.

Optimality Condition: a solution of a maximisation (minimisation) problem is optimal if all the reduced costs are non-positive (non-negative).

z = max 3x_1 + 2x_2

s.t.

x_1 + x_2 + x_3 = 9,
3x_1 + x_2 + x_4 = 18,
x_1 + x_5 = 7,
x_2 + x_6 = 6,
x_1, x_2, x_3, x_4, x_5, x_6 ≥ 0.

Example: The Dovetail problem

72

max [ 3 2 0 0 0 0 ] x

s.t.

[ 1 1 1 0 0 0 ]       [ 9 ]
[ 3 1 0 1 0 0 ] x  =  [ 18 ]
[ 1 0 0 0 1 0 ]       [ 7 ]
[ 0 1 0 0 0 1 ]       [ 6 ]

with x = (x_1, x_2, x_3, x_4, x_5, x_6)^T ≥ 0.

Assume we know that the point (x_1, x_2) = (0, 0) (out of the basis) is a feasible extreme point, x_B = [x_3, x_4, x_5, x_6]^T, x_N = [x_1, x_2]^T. Then:

B =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]
, N =
[ 1 1 ]
[ 3 1 ]
[ 1 0 ]
[ 0 1 ]
, c_B = [ 0 0 0 0 ]^T, c_N = [ 3 2 ]^T

and

\bar{c}_N^T = c_N^T − c_B^T B^{-1} N = [ 3 2 ]

Therefore the solution is not optimal, since we are maximising and there are positive reduced costs.

[Figure: the feasible region of the Dovetail problem in the (x_1, x_2)-plane, bounded by the lines x_3 = 0, x_4 = 0, x_5 = 0 and x_6 = 0.]
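The reduced-cost formula above can be checked numerically on the Dovetail data. The sketch below is ours; the helper `solve` is an illustrative exact Gaussian elimination and columns are 0-indexed, neither of which comes from the notes.

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M x = rhs by Gaussian elimination (exact)."""
    k = len(M)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = next(r for r in range(col, k) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(k):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][k] for r in range(k)]

def reduced_costs(A, c, basis):
    """c_j - c_B^T B^{-1} A_j for every non-basic column j (0-indexed)."""
    m = len(A)
    Bt = [[A[i][j] for i in range(m)] for j in basis]   # B transposed
    y = solve(Bt, [c[j] for j in basis])                # y^T = c_B^T B^{-1}
    return {j: c[j] - sum(y[i] * A[i][j] for i in range(m))
            for j in range(len(c)) if j not in basis}

# Dovetail problem in canonical form; columns 0..5 are x1..x6.
A = [[1, 1, 1, 0, 0, 0],
     [3, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
c = [3, 2, 0, 0, 0, 0]

rc = reduced_costs(A, c, basis=[2, 3, 4, 5])   # basis {x3, x4, x5, x6}
print(rc[0], rc[1])   # 3 2: positive entries, so not optimal for a max problem
```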

73

Step 3: Finding a better solution

If the current solution of the maximisation (minimisation) program is not optimal, there exists a non-basic variable j ∈ N with reduced cost \bar{c}_j > 0 (\bar{c}_j < 0) that will enter the basis.

Increasing this variable while maintaining all other non-basic variables at zero produces the following changes in the basic variables:

x_B = B^{-1} b − B^{-1} A_j x_j
    = \bar{b} − d x_j,

where A_j is the column of coefficients in A of variable x_j, \bar{b} = B^{-1} b is the value of the basic solution at the current extreme point and the vector d is defined as d = B^{-1} A_j. Note that d is a vector that captures the decrease in the value of each of the basic components as x_j increases.

Ratio test:

The maximum possible increase in x_j is given by the first basic variable (if any) to reach zero. Let r be the index of this variable:

r = argmin_{i ∈ B : d_i > 0} { \bar{b}_i / d_i },

where \bar{b}_i is the value of the basic variable i ∈ B at the extreme point associated with partition A = [B | N].

In the new iteration, the entering variable x_j will occupy the position of x_r in the basis, and x_r will be moved to the set of non-basic variables.

74

Finding the neighbouring point and iterating.

Example: The Dovetail problem (continued)

Let the entering variable be x_1. Then:

x_B = [ x_3, x_4, x_5, x_6 ]^T = B^{-1} b = [ 9, 18, 7, 6 ]^T,   d = B^{-1} A_1 = [ 1, 3, 1, 0 ]^T

The index of the leaving variable is

r = argmin_{i ∈ B : d_i > 0} { \bar{b}_i / d_i } = argmin_{i ∈ {3, 4, 5}} { 9/1, 18/3, 7/1 }

which gives x_4 as the leaving variable. Notice that we did not use x_6 above, as variable x_6 does not change when we increase x_1 (its component of d is zero).

We obtain the new basis with partition x_B = [x_3, x_1, x_5, x_6]^T, x_N = [x_4, x_2]^T.
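The iteration above can be reproduced mechanically from the formulas \bar{b} = B^{-1}b and d = B^{-1}A_j. A sketch of ours, with an illustrative exact-arithmetic helper and 0-indexed columns:

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M x = rhs by Gaussian elimination (exact)."""
    k = len(M)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = next(r for r in range(col, k) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(k):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][k] for r in range(k)]

# Dovetail data; columns 0..5 are x1..x6.
A = [[1, 1, 1, 0, 0, 0],
     [3, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
b = [9, 18, 7, 6]
basis = [2, 3, 4, 5]          # {x3, x4, x5, x6}
entering = 0                  # x1

m = len(A)
B = [[A[i][j] for j in basis] for i in range(m)]
b_bar = solve(B, b)                               # current basic values
d = solve(B, [A[i][entering] for i in range(m)])  # d = B^{-1} A_entering

# Ratio test over the rows with d_i > 0 (x6 is skipped: its d entry is 0).
ratios = {basis[i]: b_bar[i] / d[i] for i in range(m) if d[i] > 0}
leaving = min(ratios, key=ratios.get)
print(leaving)   # 3, i.e. x4 leaves (ratios 9, 6 and 7 for x3, x4, x5)
```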

Iteration 2

We stopped at the extreme point associated with the partition x_B = [x_3, x_1, x_5, x_6]^T, x_N = [x_4, x_2]^T.

B =
[ 1 1 0 0 ]
[ 0 3 0 0 ]
[ 0 1 1 0 ]
[ 0 0 0 1 ]
, N =
[ 0 1 ]
[ 1 1 ]
[ 0 0 ]
[ 0 1 ]
, c_B = [ 0 3 0 0 ]^T, c_N = [ 0 2 ]^T

and

\bar{c}_N^T = c_N^T − c_B^T B^{-1} N = [ −1.0  1.0 ]

The entering variable is x_2 (the only positive reduced cost). We compute the leaving variable:

x_B = [ x_3, x_1, x_5, x_6 ]^T = B^{-1} b = [ 3.0, 6.0, 1.0, 6.0 ]^T,   d = B^{-1} A_2 = [ 2/3, 1/3, −1/3, 1 ]^T

r = argmin_{i ∈ B : d_i > 0} { \bar{b}_i / d_i } = argmin_{i ∈ {3, 1, 6}} { 3/(2/3), 6/(1/3), 6/1 }

75

which gives x_3 as the leaving variable. Notice that we did not use x_5 above, as variable x_5 increases when we increase x_2 (its component of d is negative).

The partition for the next iteration is: x_B = [x_2, x_1, x_5, x_6]^T, x_N = [x_4, x_3]^T.

Iteration 3

x_B = [x_2, x_1, x_5, x_6]^T, x_N = [x_4, x_3]^T.

B =
[ 1 1 0 0 ]
[ 1 3 0 0 ]
[ 0 1 1 0 ]
[ 1 0 0 1 ]
, N =
[ 0 1 ]
[ 1 0 ]
[ 0 0 ]
[ 0 0 ]
, c_B = [ 2 3 0 0 ]^T, c_N = [ 0 0 ]^T

and

\bar{c}_N^T = c_N^T − c_B^T B^{-1} N = [ −0.5  −1.5 ]

Since we are maximising and all reduced costs are negative, the optimal solution is given by the extreme point associated

with the current partition.
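The optimality of the final partition can be verified numerically: all reduced costs come out non-positive and the corresponding extreme point is (x_1, x_2) = (9/2, 9/2). The sketch below is ours, using the same 0-indexed Dovetail data and an illustrative exact-arithmetic helper.

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M x = rhs by Gaussian elimination (exact)."""
    k = len(M)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = next(r for r in range(col, k) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(k):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][k] for r in range(k)]

A = [[1, 1, 1, 0, 0, 0],
     [3, 1, 0, 1, 0, 0],
     [1, 0, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
b = [9, 18, 7, 6]
c = [3, 2, 0, 0, 0, 0]
basis = [1, 0, 4, 5]          # {x2, x1, x5, x6}

m = len(A)
B  = [[A[i][j] for j in basis] for i in range(m)]
Bt = [[A[i][j] for i in range(m)] for j in basis]
y  = solve(Bt, [c[j] for j in basis])               # y^T = c_B^T B^{-1}
rc = {j: c[j] - sum(y[i] * A[i][j] for i in range(m))
      for j in range(6) if j not in basis}
x_B = solve(B, b)                                   # (x2, x1, x5, x6)
z = sum(c[basis[i]] * x_B[i] for i in range(m))

print(rc[2], rc[3])       # -3/2 -1/2: all non-positive, so optimal
print(x_B[0], x_B[1], z)  # 9/2 9/2 45/2
```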

76

Step 1: Finding a feasible basic solution

The simplex algorithm starts at a feasible extreme point and at each step it finds another neighbouring feasible extreme point with a better objective value. If no neighbour with a better objective function value can be found, the current solution is optimal. But how do we find an initial feasible extreme point?

[Figure: the feasible region of the Dovetail problem in the (x_1, x_2)-plane, bounded by the lines x_3 = 0, x_4 = 0, x_5 = 0 and x_6 = 0.]

Consider the linear program:

z = max c^T x
s.t.
A x = b, with b ≥ 0,
x ≥ 0.

We can easily create an artificial feasible solution for a new problem with u ∈ R^m additional variables.

max c^T x
s.t.
A x + u = b,
x ≥ 0, u ≥ 0.

A feasible basic solution for the new problem is given by x = 0 and u = b.

Big M and two-phase approach

Two different ways can be used to remove these undesired variables as the simplex algorithm iterates:

Big-M

The new variables are added with prohibitive costs () to the original objetive function:

= max −

s.t.

+ = ,

≥ 0.

77

Phase-1

A new problem is solved in order to eliminate the artificial variables from the current solution:

w = max −1^T u
s.t.
A x + u = b,
x ≥ 0, u ≥ 0.

In both cases, if any artificial variable remains at a positive value in the optimal solution, the original problem is infeasible.

78

Solve the following problem using the simplex method.

z = max 3x_1 + 2x_2

s.t.

x_1 + x_2 ≥ 1,
x_1 + x_2 ≤ 2,
x_1, x_2 ≥ 0.

Example:


Standard form of the constraints:

x_1 + x_2 − x_3 + u = 1,
x_1 + x_2 + x_4 = 2,
x_1, x_2, x_3, x_4, u ≥ 0.

Note that we needed to add a surplus variable (x_3). This means that the trivial solution with the original variables at zero is no longer feasible. In order to obtain the canonical form, an artificial variable (u) was also added.

79

We have two options.

Big-M

We rewrite the problem as:

z = max 3x_1 + 2x_2 − M u

s.t.

x_1 + x_2 − x_3 + u = 1,
x_1 + x_2 + x_4 = 2,
x_1, x_2, x_3, x_4, u ≥ 0.

M is a large value (how large does it need to be?). Variable u will leave the basis at some point and can then be removed, as it strongly damages the objective function (what does it mean if it does not leave the basis?).

Phase-1

We write an initial problem as:

w = max −u

s.t.

x_1 + x_2 − x_3 + u = 1,
x_1 + x_2 + x_4 = 2,
x_1, x_2, x_3, x_4, u ≥ 0.

When u leaves the basis, we have a basis on the original variables that can be used to solve the original problem with the correct objective function (what does it mean if u cannot be driven out of the basis?).

Note: we will see numerical examples for both methods in the next chapter.

80

Convergence of the simplex algorithm

The simplex algorithm can cycle between bases. The reason is that for degenerate problems the basis might change while the value of the objective function remains the same; that is, a basic variable whose value is zero leaves the basis. In order to avoid that, we can use a simple rule.

Bland’s Rule (Smallest Subscript Rule)

• Among all possible entering variables, choose the variable with the smallest subscript as the entering variable (i.e. choose the leftmost candidate column).

• In using the Ratio Test, if two or more variables compete for leaving the basis (i.e. with the smallest ratio), choose the

variable with the smallest subscript as the leaving variable.

When applying Bland’s Rule the Simplex Algorithm terminates in a finite number of iterations.

Bland’s rule

We will see an example of cycling in the next chapter.

81

Chapter 6

The simplex tableau

The classical tableau method

Historically, the Simplex Method has been carried out using a table that makes the pivoting operations easier to perform manually (= tableau).

There are a number of different layouts for these tables. In this subject, all of us should use the layout specified in these

lecture slides.

Creating the tableau

Consider the generic problem with n + m variables and m equality constraints.

BV        x_1 ... x_n         x_{n+1} ... x_{n+m}         RHS
x_{n+1}   a_{11} ... a_{1n}   a_{1,n+1} ... a_{1,n+m}     b_1
x_{n+2}   a_{21} ... a_{2n}   a_{2,n+1} ... a_{2,n+m}     b_2
...
x_{n+m}   a_{m1} ... a_{mn}   a_{m,n+1} ... a_{m,n+m}     b_m
z         −c_1 ... −c_n       −c_{n+1} ... −c_{n+m}       0

82

Note that the z-row contains the negative of the variable costs in the objective function. Assume that variables x_1 to x_n are non-basic and variables x_{n+1} to x_{n+m} are basic. The tableau can then be seen as:

N        B        RHS: b
−c_N^T   −c_B^T   0

83

We want to write this problem in canonical form. It is easy to do so by left-multiplying the core of the table by B^{-1}, to obtain:

B^{-1}N   B^{-1}B   B^{-1}b
−c_N^T    −c_B^T    0

or

B^{-1}N   I        B^{-1}b
−c_N^T    −c_B^T   0

Now, to the last row, add the core of the tableau left-multiplied by c_B^T to obtain:

B^{-1}N                  I                B^{-1}b
−c_N^T + c_B^T B^{-1}N   −c_B^T + c_B^T   c_B^T B^{-1}b

or

B^{-1}N                  I   B^{-1}b
−c_N^T + c_B^T B^{-1}N   0   c_B^T B^{-1}b

We can identify several important quantities in the tableau below (visit the last chapter if needed):

B^{-1}N                  I   B^{-1}b
−c_N^T + c_B^T B^{-1}N   0   c_B^T B^{-1}b

• B^{-1}b is the solution of the basic variables associated with the partition [B, N].

• c_B^T B^{-1}b is the objective value of the problem for partition [B, N].

• Each column of B^{-1}N is the rate of change in the basic variables when the corresponding non-basic variable is increased.

• Each element of −c_N^T + c_B^T B^{-1}N is the negative of the reduced cost of the corresponding non-basic variable.

• We refer to the last row as the z-row.

• The tableau above is in canonical form.

• In general, a tableau is said to be in canonical form if

– all \bar{b}_i ≥ 0,

– m of the columns form the m × m identity matrix (not necessarily in order),

– the z-row values for the basic variables are zero.

85

Write the initial tableau for:

max z = 4x_1 + 3x_2

such that

2x_1 + x_2 + x_3 = 40
x_1 + x_2 + x_4 = 30
x_1 + x_5 = 15
x_1, x_2, x_3, x_4, x_5 ≥ 0

considering variables x_3, x_4 and x_5 to be basic.

Example:

B =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]
, N =
[ 2 1 ]
[ 1 1 ]
[ 1 0 ]
, c_B = [ 0 0 0 ]^T, c_N = [ 4 3 ]^T

−\bar{c}_N^T = c_B^T B^{-1} N − c_N^T = [ −4 −3 ],   \bar{b} = B^{-1} b = [ 40 30 15 ]^T

BV    x_1  x_2  x_3  x_4  x_5   RHS
x_3   2    1    1    0    0     40
x_4   1    1    0    1    0     30
x_5   1    0    0    0    1     15
z     −4   −3   0    0    0     0

Observing that B = I, the tableau can be promptly written by just copying the problem data.

86

Write the initial tableau for:

max z = 4x_1 + 3x_2

such that

2x_1 + x_2 + x_3 = 40
x_1 + x_2 + x_4 = 30
x_1 + x_5 = 15
x_1, x_2, x_3, x_4, x_5 ≥ 0

considering variables x_1, x_2 and x_3 to be basic.

Example:

Now we have:

B =
[ 2 1 1 ]
[ 1 1 0 ]
[ 1 0 0 ]
, N =
[ 0 0 ]
[ 1 0 ]
[ 0 1 ]
, c_B = [ 4 3 0 ]^T, c_N = [ 0 0 ]^T, b = [ 40 30 15 ]^T

B^{-1} =
[ 0 0 1 ]
[ 0 1 −1 ]
[ 1 −1 −1 ]

B^{-1}A =
[ 1 0 0 0 1 ]
[ 0 1 0 1 −1 ]
[ 0 0 1 −1 −1 ]
, B^{-1}b = [ 15 15 −5 ]^T, −\bar{c}_N^T = c_B^T B^{-1} N − c_N^T = [ 3 1 ]

and the tableau reads:

BV    x_1  x_2  x_3  x_4  x_5   RHS
x_1   1    0    0    0    1     15
x_2   0    1    0    1    −1    15
x_3   0    0    1    −1   −1    −5
z     0    0    0    3    1     105

Note: the solution is infeasible, since one of the basic variables is negative. Such a tableau would never appear when running the simplex algorithm, as the method moves from feasible solution to feasible solution.
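The quantities B^{-1}A, B^{-1}b and the z-row computed above can be reproduced with a short script. This is an illustrative sketch of ours (0-indexed columns, exact Gaussian-elimination helper), not code from the notes:

```python
from fractions import Fraction

def solve(M, rhs):
    """Solve the square system M x = rhs by Gaussian elimination (exact)."""
    k = len(M)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(M, rhs)]
    for col in range(k):
        piv = next(r for r in range(col, k) if T[r][col] != 0)
        T[col], T[piv] = T[piv], T[col]
        T[col] = [v / T[col][col] for v in T[col]]
        for r in range(k):
            if r != col and T[r][col] != 0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[col])]
    return [T[r][k] for r in range(k)]

A = [[2, 1, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [1, 0, 0, 0, 1]]
b = [40, 30, 15]
c = [4, 3, 0, 0, 0]
basis = [0, 1, 2]             # {x1, x2, x3}

m, n = len(A), len(c)
B  = [[A[i][j] for j in basis] for i in range(m)]
Bt = [[A[i][j] for i in range(m)] for j in basis]
cols = [solve(B, [A[i][j] for i in range(m)]) for j in range(n)]  # B^{-1}A, column by column
core = [[cols[j][i] for j in range(n)] for i in range(m)]         # ... as rows
rhs  = solve(B, b)                                                # B^{-1}b
y    = solve(Bt, [c[j] for j in basis])                           # c_B^T B^{-1}
zrow = [sum(y[i] * A[i][j] for i in range(m)) - c[j] for j in range(n)]
zval = sum(y[i] * b[i] for i in range(m))

# rhs is (15, 15, -5): the basic solution is infeasible (x3 < 0);
# zrow is (0, 0, 0, 3, 1): zero under the basic columns, as required.
print(rhs[2], zval)   # -5 105
```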

The simplex algorithm in tableau format

The tableau contains all we need to perform the simplex algorithm. To check for optimality, just inspect the z-row. If all values are non-negative (non-positive), then your maximisation (minimisation) problem has reached an optimal solution.

Otherwise, choose an appropriate non-basic variable to enter the basis. The leaving variable can be obtained by applying the

ratio test between the RHS and the column of the entering variable.

Once both entering and leaving variables are obtained, perform a pivot operation on the element on the row of the leaving

variable and column of the entering variable to build the canonical tableau for the new basis.

87

Solve the problem using the simplex tableau method.

max z = 60x_1 + 35x_2 + 20x_3

8x_1 + 6x_2 + x_3 ≤ 48
12x_1 + 7x_2 + 4x_3 ≤ 60
4x_1 + 3x_2 + x_3 ≤ 16
x_2 ≤ 5
x_1, x_2, x_3 ≥ 0

Example

We add slack variables and construct the canonical form.

max z = 60x_1 + 35x_2 + 20x_3 + 0x_4 + 0x_5 + 0x_6 + 0x_7

8x_1 + 6x_2 + x_3 + x_4 = 48
12x_1 + 7x_2 + 4x_3 + x_5 = 60
4x_1 + 3x_2 + x_3 + x_6 = 16
x_2 + x_7 = 5
x_1, x_2, x_3, x_4, x_5, x_6, x_7 ≥ 0

88

We rewrite this formulation as a Simplex Tableau.

BV    x_1  x_2  x_3  x_4  x_5   x_6  x_7   RHS
x_4   8    6    1    1    0     0    0     48
x_5   12   7    4    0    1     0    0     60
x_6   4    3    1    0    0     1    0     16
x_7   0    1    0    0    0     0    1     5
z     −60  −35  −20  0    0     0    0     0

BV    x_1  x_2  x_3  x_4  x_5   x_6  x_7   RHS
x_4   0    0    −1   1    0     −2   0     16
x_5   0    −2   1    0    1     −3   0     12
x_1   1    3/4  1/4  0    0     1/4  0     4
x_7   0    1    0    0    0     0    1     5
z     0    10   −5   0    0     15   0     240

BV    x_1  x_2  x_3  x_4  x_5   x_6  x_7   RHS
x_4   0    −2   0    1    1     −5   0     28
x_3   0    −2   1    0    1     −3   0     12
x_1   1    5/4  0    0    −1/4  1    0     1
x_7   0    1    0    0    0     0    1     5
z     0    0    0    0    5     0    0     300

All the reduced costs are non-positive (remember that the z-row contains the negative of the reduced costs as defined in the previous chapter). Thus, we Stop! The current solution is optimal. The optimal solution is

x* = (1, 0, 12, 28, 0, 0, 5)

and the optimal value of the objective function is equal to

z* = 300

89

Solve the problem using the simplex tableau method.

max z = 3x_1 + 2x_2

2x_1 + x_2 ≤ 18
2x_1 + 3x_2 ≤ 42
3x_1 + x_2 ≤ 24
x_1, x_2 ≥ 0

Example

We add slack variables and construct the canonical form.

max z = 3x_1 + 2x_2 + 0x_3 + 0x_4 + 0x_5

2x_1 + x_2 + x_3 = 18
2x_1 + 3x_2 + x_4 = 42
3x_1 + x_2 + x_5 = 24
x_1, x_2, x_3, x_4, x_5 ≥ 0

90

We rewrite this formulation as a Simplex Tableau.

BV    x_1  x_2  x_3   x_4   x_5   RHS
x_3   2    1    1     0     0     18
x_4   2    3    0     1     0     42
x_5   3    1    0     0     1     24
z     −3   −2   0     0     0     0

BV    x_1  x_2  x_3   x_4   x_5   RHS
x_3   0    1/3  1     0     −2/3  2
x_4   0    7/3  0     1     −2/3  26
x_1   1    1/3  0     0     1/3   8
z     0    −1   0     0     1     24

BV    x_1  x_2  x_3   x_4   x_5   RHS
x_2   0    1    3     0     −2    6
x_4   0    0    −7    1     4     12
x_1   1    0    −1    0     1     6
z     0    0    3     0     −1    30

BV    x_1  x_2  x_3   x_4   x_5   RHS
x_2   0    1    −1/2  1/2   0     12
x_5   0    0    −7/4  1/4   1     3
x_1   1    0    3/4   −1/4  0     3
z     0    0    5/4   1/4   0     33

All the reduced costs are non-positive (remember that the z-row contains the negative of the reduced costs as defined in the previous chapter). Thus, we Stop! The current solution is optimal. The optimal solution is

x* = (3, 12, 0, 0, 3)

and the optimal value of the objective function is equal to

z* = 33
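Both worked examples can be checked with a compact tableau implementation. The sketch below is ours: it uses Dantzig's most-negative-entry rule, exact rational arithmetic and no anti-cycling safeguard, and it assumes a starting basis whose columns already form an identity in A.

```python
from fractions import Fraction

def simplex(A, b, c, basis):
    """Tableau simplex for max c^T x, Ax = b, x >= 0, starting from a feasible
    basis whose columns form an identity in A (Dantzig's rule)."""
    m, n = len(A), len(c)
    T = [[Fraction(v) for v in row] + [Fraction(r)] for row, r in zip(A, b)]
    z = [Fraction(-cj) for cj in c] + [Fraction(0)]   # z-row: negatives of costs
    basis = list(basis)
    while True:
        j = min(range(n), key=lambda k: z[k])         # most negative z-row entry
        if z[j] >= 0:                                 # optimality reached
            x = [Fraction(0)] * n
            for i, bi in enumerate(basis):
                x[bi] = T[i][n]
            return x, z[n]
        rows = [i for i in range(m) if T[i][j] > 0]
        if not rows:
            raise ValueError("problem is unbounded")
        r = min(rows, key=lambda i: T[i][n] / T[i][j])  # ratio test
        T[r] = [v / T[r][j] for v in T[r]]              # pivot: normalise row r
        for i in range(m):
            if i != r and T[i][j] != 0:
                f = T[i][j]
                T[i] = [v - f * w for v, w in zip(T[i], T[r])]
        f = z[j]
        z = [v - f * w for v, w in zip(z, T[r])]
        basis[r] = j

# First example of this chapter: optimal value 300.
A1 = [[8, 6, 1, 1, 0, 0, 0],
      [12, 7, 4, 0, 1, 0, 0],
      [4, 3, 1, 0, 0, 1, 0],
      [0, 1, 0, 0, 0, 0, 1]]
sol1, opt1 = simplex(A1, [48, 60, 16, 5], [60, 35, 20, 0, 0, 0, 0], [3, 4, 5, 6])

# Second example: optimal value 33 at (x1, x2) = (3, 12).
A2 = [[2, 1, 1, 0, 0],
      [2, 3, 0, 1, 0],
      [3, 1, 0, 0, 1]]
sol2, opt2 = simplex(A2, [18, 42, 24], [3, 2, 0, 0, 0], [2, 3, 4])

print(opt1, opt2)   # 300 33
```

Running it reproduces the optimal values found above: 300 for the first example and 33 for the second.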

91

Revisiting some pending points

Cycling and Bland’s rule

In the previous chapter, we mentioned that without any care in the selection of entering and leaving variables, the simplex method could cycle. Below is an example of cycling after six pivot operations:

Table 1

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_5   1/2   −11/2  −5/2   9     1      0      0     0
x_6   1/2   −3/2   −1/2   1     0      1      0     0
x_7   1     0      0      0     0      0      1     1
z     −10   57     9      24    0      0      0     0

Table 2

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_1   1     −11    −5     18    2      0      0     0
x_6   0     4      2      −8    −1     1      0     0
x_7   0     11     5      −18   −2     0      1     1
z     0     −53    −41    204   20     0      0     0

Table 3

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_1   1     0      1/2    −4    −3/4   11/4   0     0
x_2   0     1      1/2    −2    −1/4   1/4    0     0
x_7   0     0      −1/2   4     3/4    −11/4  1     1
z     0     0      −29/2  98    27/4   53/4   0     0

Table 4

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_3   2     0      1      −8    −3/2   11/2   0     0
x_2   −1    1      0      2     1/2    −5/2   0     0
x_7   1     0      0      0     0      0      1     1
z     29    0      0      −18   −15    93     0     0

Table 5

92

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_3   −2    4      1      0     1/2    −9/2   0     0
x_4   −1/2  1/2    0      1     1/4    −5/4   0     0
x_7   1     0      0      0     0      0      1     1
z     20    9      0      0     −21/2  141/2  0     0

Table 6

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_5   −4    8      2      0     1      −9     0     0
x_4   1/2   −3/2   −1/2   1     0      1      0     0
x_7   1     0      0      0     0      0      1     1
z     −22   93     21     0     0      −24    0     0

Table 7 = Table 1

BV    x_1   x_2    x_3    x_4   x_5    x_6    x_7   RHS
x_5   1/2   −11/2  −5/2   9     1      0      0     0
x_6   1/2   −3/2   −1/2   1     0      1      0     0
x_7   1     0      0      0     0      0      1     1
z     −10   57     9      24    0      0      0     0

Table 7 = Table 1 ⇒ Table 8 = Table 2 ⇒ Table 9 = Table 3, . . .

93

Using Bland’s rule

Bland’s Rule (Smallest Subscript Rule)

• Among all possible entering variables, choose the variable with the smallest subscript as the entering variable (i.e. choose the

leftmost negative reduced cost).

• In using the Ratio Test, if two or more variables compete for leaving the basis (i.e. with the smallest ratio), choose the variable

with the smallest subscript as the leaving variable.

Table 6

BV    x_1   x_2    x_3    x_4   x_5   x_6   x_7   RHS
x_5   −4    8      2      0     1     −9    0     0
x_4   1/2   −3/2   −1/2   1     0     1     0     0
x_7   1     0      0      0     0     0     1     1
z     −22   93     21     0     0     −24   0     0

Table 7’

BV    x_1   x_2    x_3    x_4   x_5   x_6   x_7   RHS
x_5   0     −4     −2     8     1     −1    0     0
x_1   1     −3     −1     2     0     2     0     0
x_7   0     3      1      −2    0     −2    1     1
z     0     27     −1     44    0     20    0     0

Table 8’

BV    x_1   x_2    x_3    x_4   x_5   x_6   x_7   RHS
x_5   0     2      0      4     1     −5    2     2
x_1   1     0      0      0     0     0     1     1
x_3   0     3      1      −2    0     −2    1     1
z     0     30     0      42    0     18    1     1

Success! All z-row entries are now non-negative, so the Simplex Algorithm terminates.

94

Identifying halting cases in the simplex algorithm

Unbounded problems

This refers to the situation when the objective function can take arbitrarily large values (for a maximisation problem) or arbitrarily small values (for a minimisation problem) in the feasible region. Therefore no optimal solution exists.

In terms of the simplex tableau, this case occurs when the value of the new basic variable can be increased indefinitely without causing

any one of the old basic variables to become negative. In other words, this occurs when the column of the new basic variable consists of

non-positive elements only, that is, when the Ratio Test fails to identify a variable to be taken out of the basis.

The tableau below shows that the problem is unbounded:

BV    x_1  x_2  x_3  x_4  x_5   RHS
x_5   0    −6   0    1    1     25
x_1   1    −2   0    6    0     40
x_3   0    0    1    1    0     10
z     0    −3   0    2    0     80

The entering variable x_2 has a negative z-row entry, but its column (−6, −2, 0)^T has no positive element, so the Ratio Test fails and x_2 can be increased indefinitely.

Example:
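The failure of the Ratio Test can be detected with a one-line check on the entering column. A sketch of ours on the tableau above:

```python
# Tableau and z-row from the example above (columns x1..x5, then RHS).
T = [[0, -6, 0, 1, 1, 25],
     [1, -2, 0, 6, 0, 40],
     [0,  0, 1, 1, 0, 10]]
zrow = [0, -3, 0, 2, 0, 80]

entering = zrow.index(min(zrow[:-1]))       # x2: the only negative z-row entry
column = [row[entering] for row in T]
unbounded = all(v <= 0 for v in column)     # Ratio Test has no candidate row
print(entering, unbounded)   # 1 True
```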

95

Two-phase method

We saw that any problem that can be written in standard form admits the trivial basic solution in which all original variables are set to zero. For non-standard problems, we mentioned two methods: the Big-M method and the two-phase algorithm.

Solve the problem below:

max z = −3x_1 − 5x_2

x_1 + x_4 = 4
2x_2 + u_1 = 12
3x_1 + 2x_2 − x_3 + u_2 = 18
x_1, x_2, x_3, x_4, u_1, u_2 ≥ 0

where u_1 and u_2 are artificial variables.

Example: two-phase method

Let

w = sum of the artificial variables

and

w* = minimum value of w subject to the constraints.

Because the artificial variables must satisfy the nonnegativity constraint, w* = 0 if and only if all the artificial variables are equal to zero.

Phase 1

Original problem in canonical form with artificial variables:

max z = −3x_1 − 5x_2

x_1 + x_4 = 4
2x_2 + u_1 = 12
3x_1 + 2x_2 − x_3 + u_2 = 18
x_1, x_2, x_3, x_4, u_1, u_2 ≥ 0

The goal in Phase 1 is to minimise w = u_1 + u_2, where u_1 and u_2 are the artificial variables:

max −w = −u_1 − u_2

x_1 + x_4 = 4
2x_2 + u_1 = 12
3x_1 + 2x_2 − x_3 + u_2 = 18
x_1, x_2, x_3, x_4, u_1, u_2 ≥ 0

96

BV    x_1  x_2  x_3   x_4  u_1   u_2   RHS
x_4   1    0    0     1    0     0     4
u_1   0    2    0     0    1     0     12
u_2   3    2    −1    0    0     1     18
w     0    0    0     0    1     1     0

Note that this tableau is not in canonical form, because there are non-zero coefficients in the w-row corresponding to the basic variables. To restore the canonical form, we subtract Row 2 and Row 3 from the w-row.

BV    x_1  x_2  x_3   x_4  u_1   u_2   RHS
x_4   1    0    0     1    0     0     4
u_1   0    2    0     0    1     0     12
u_2   3    2    −1    0    0     1     18
w     −3   −4   1     0    0     0     −30

BV    x_1  x_2  x_3   x_4  u_1   u_2   RHS
x_4   1    0    0     1    0     0     4
x_2   0    1    0     0    1/2   0     6
u_2   3    0    −1    0    −1    1     6
w     −3   0    1     0    2     0     −6

BV    x_1  x_2  x_3   x_4  u_1   u_2   RHS
x_4   0    0    1/3   1    1/3   −1/3  2
x_2   0    1    0     0    1/2   0     6
x_1   1    0    −1/3  0    −1/3  1/3   2
w     0    0    0     0    1     1     0

All the artificial variables are out of the basis. The minimum value of w is w* = 0. This is the end of Phase 1.

97

Phase-2

Note that we now have a basic feasible solution for the original problem, obtained by ignoring the u_1- and u_2-columns. We now put back the original objective function

z = −3x_1 − 5x_2

BV    x_1  x_2  x_3   x_4   RHS
x_4   0    0    1/3   1     2
x_2   0    1    0     0     6
x_1   1    0    −1/3  0     2
z     3    5    0     0     0

Note that this tableau is not in canonical form. To restore canonical form we add (−3) × Row 3 and (−5) × Row 2 to the z-row. After these row operations we obtain:

BV    x_1  x_2  x_3   x_4   RHS
x_4   0    0    1/3   1     2
x_2   0    1    0     0     6
x_1   1    0    −1/3  0     2
z     0    0    1     0     −36

This tableau satisfies the Optimality Criterion. So the corresponding solution is already optimal. The optimal solution is x = (2, 6, 0, 2) and the optimal value of the objective function is z = −36.

Note: The example above is atypical in the sense that no iteration is needed in Phase 2. In general, if the initial tableau (in canonical

form) for Phase 2 does not meet the Optimality Criterion, then we need to run the Simplex Algorithm beginning with this tableau until

we find an optimal solution for the original problem or discover that the problem is unbounded.

98

Solve the problem below:

max z = 80x_1 + 60x_2 + 42x_3

2x_1 + 3x_2 + x_3 ≤ 12
5x_1 + 6x_2 + 3x_3 ≥ 15
2x_1 − 3x_2 + x_3 = 8
x_1, x_2, x_3 ≥ 0

Example 2: two-phase method

The problem is equivalent to

max z = 80x_1 + 60x_2 + 42x_3

2x_1 + 3x_2 + x_3 + x_4 = 12
5x_1 + 6x_2 + 3x_3 − x_5 + u_1 = 15
2x_1 − 3x_2 + x_3 + u_2 = 8
x_1, x_2, x_3, x_4, x_5, u_1, u_2 ≥ 0

where x_4 is a slack variable, x_5 is a surplus variable and u_1 and u_2 are artificial variables.

min w = u_1 + u_2

2x_1 + 3x_2 + x_3 + x_4 = 12
5x_1 + 6x_2 + 3x_3 − x_5 + u_1 = 15
2x_1 − 3x_2 + x_3 + u_2 = 8
x_1, x_2, x_3, x_4, x_5, u_1, u_2 ≥ 0

99

Phase-1

Initial tableau (not in canonical form)

BV    x_1  x_2    x_3   x_4  x_5   u_1   u_2   RHS
x_4   2    3      1     1    0     0     0     12
u_1   5    6      3     0    −1    1     0     15
u_2   2    −3     1     0    0     0     1     8
w     0    0      0     0    0     −1    −1    0

Restore the canonical form by adding Rows 2 and 3 to the w-row. Initial tableau in canonical form:

BV    x_1  x_2    x_3   x_4  x_5   u_1   u_2   RHS
x_4   2    3      1     1    0     0     0     12
u_1   5    6      3     0    −1    1     0     15
u_2   2    −3     1     0    0     0     1     8
w     7    3      4     0    −1    0     0     23

Tableau after pivoting on (Row 2, Column x_1):

BV    x_1  x_2    x_3   x_4  x_5   u_1   u_2   RHS
x_4   0    3/5    −1/5  1    2/5   −2/5  0     6
x_1   1    6/5    3/5   0    −1/5  1/5   0     3
u_2   0    −27/5  −1/5  0    2/5   −2/5  1     2
w     0    −27/5  −1/5  0    2/5   −7/5  0     2

Tableau after pivoting on (Row 3, Column x_5):

BV    x_1  x_2    x_3   x_4  x_5   u_1   u_2   RHS
x_4   0    6      0     1    0     0     −1    4
x_1   1    −3/2   1/2   0    0     0     1/2   4
x_5   0    −27/2  −1/2  0    1     −1    5/2   5
w     0    0      0     0    0     −1    −1    0

The optimal value is w* = 0, and the artificial variables u_1 and u_2 have been driven out of the basis. This is the end of Phase 1.

100

Phase-2

Initial tableau for Phase 2 (not in canonical form):

BV    x_1  x_2    x_3   x_4  x_5   RHS
x_4   0    6      0     1    0     4
x_1   1    −3/2   1/2   0    0     4
x_5   0    −27/2  −1/2  0    1     5
z     −80  −60    −42   0    0     0

Restore the canonical form by adding 80 times Row 2 to the z-row:

BV    x_1  x_2    x_3   x_4  x_5   RHS
x_4   0    6      0     1    0     4
x_1   1    −3/2   1/2   0    0     4
x_5   0    −27/2  −1/2  0    1     5
z     0    −180   −2    0    0     320

Tableau after pivoting on (Row 1, Column x_2):

BV    x_1  x_2  x_3   x_4  x_5   RHS
x_2   0    1    0     1/6  0     2/3
x_1   1    0    1/2   1/4  0     5
x_5   0    0    −1/2  9/4  1     14
z     0    0    −2    30   0     440

Tableau after pivoting on (Row 2, Column x_3):

BV    x_1  x_2  x_3   x_4  x_5   RHS
x_2   0    1    0     1/6  0     2/3
x_3   2    0    1     1/2  0     10
x_5   1    0    0     5/2  1     19
z     4    0    0     31   0     460

This is the end of Phase 2. The optimal solution is (x_1, x_2, x_3, x_4, x_5) = (0, 2/3, 10, 0, 19). The optimal value of the objective function is z* = 460.

101

Possible results of phase-1

Let w* be the optimal value of w obtained in Phase 1.

• Case 1: If w* = 0 and all the artificial variables are non-basic, then we have a basic feasible solution to the original problem. Continue with Phase 2.

• Case 2: If w* = 0 but at least one artificial variable is in the basis (at value zero), then there are two possible cases:

– Case 2a: we can use pivot operations to take all artificial variables out of the basis and then continue with Phase 2.

– Case 2b: the constraint corresponding to the zero artificial variable is redundant.

• Case 3: If w* > 0, the problem is infeasible (that is, the feasible region is empty).

102

Suppose we get the following tableau in Phase 1, where u_1 and u_2 are artificial variables.

BV    x_1  x_2   x_3  x_4  u_1  u_2   RHS
x_4   0    11    6    1    0    0     2
u_1   0    −1/2  −1   0    1    −1    0
x_1   1    −1    −1   0    0    1     3
w     0    1/2   1    0    0    2     0

Example: Case 2a

We have w* = 0 but u_1 remains as a basic variable. Note that there exist nonzero entries in the u_1-row and the columns of non-artificial variables, i.e. −1/2 and −1. So we can pivot on one of these entries to drive u_1 out of the basis.

Pivoting on −1/2 yields:

BV    x_1  x_2  x_3   x_4  u_1  u_2   RHS
x_4   0    0    −16   1    22   −22   2
x_2   0    1    2     0    −2   2     0
x_1   1    0    1     0    −2   3     3
w     0    0    0     0    1    1     0

(You can choose to pivot on −1 in the previous tableau.)

Now we have w* = 0 and all artificial variables are non-basic. So we can proceed to Phase 2 just as in Case 1.

103

Suppose we get the following tableau in Phase 1, where u_1, u_2 and u_3 are artificial variables.

BV    x_1  x_2  x_3  x_4   u_1  u_2  u_3   RHS
x_2   0    1    −2   2     3    0    −1    4
u_2   0    0    0    0     1    1    −1    0
x_1   1    0    6    −10   −11  0    5     28
w     0    0    0    0     1    0    3     0

Example: Case 2b

We have w* = 0 but u_2 remains as a basic variable. Unlike the previous example, all entries in the u_2-row and the columns of non-artificial variables are zero. So we cannot pivot on such entries to drive u_2 out of the basis.

But note that the constraint containing u_2 is redundant. So we can ignore the u_2-row and continue with Phase 2.

104

Chapter 7

Duality theory

Introduction

Duality is a very elegant and important concept within the field of operations research, just as in many other branches of

mathematics.

In operations research, the theory of duality was first developed in relation to linear programming, but it has many applications, and perhaps even a more natural and intuitive interpretation, in several related areas such as nonlinear programming, network optimisation and game theory.

In particular, duality is widely used to develop sophisticated methods in mixed-integer programming.

In LP, it provides an interesting economic interpretation that we shall explore in detail.

105

A small company is being set up to engage in the production of office furniture. The company proposes to manufacture

tables, desks and chairs. The production of a table requires 8 kgs of wood and 5 kgs of metal and is sold for $80, a

desk uses 6 kgs of wood and 4 kgs of metal and is sold for $60, and a chair requires 4 kgs of both metal and wood and

is sold for $50.

The best production strategy for this company, given that only 100 kgs of wood and 60 kgs of metal are available each

week can be obtained with a LP:

Let x_1 be the number of tables manufactured each week, x_2 be the number of desks manufactured each week and x_3 be the number of chairs manufactured each week. Note that x_1, x_2 and x_3 have to be nonnegative. The LP reads:

max z = 80x_1 + 60x_2 + 50x_3

subject to

8x_1 + 6x_2 + 4x_3 ≤ 100
5x_1 + 4x_2 + 4x_3 ≤ 60
x_1, x_2, x_3 ≥ 0

(x_1, x_2, x_3 ∈ Z).

—

Assume now that there is a much bigger company which has been the lone producer of this type of furniture for many

years. Its management does not appreciate the competition from the new company. This bigger company has decided

to attempt to put the new company out of business by buying up the new company’s resources of wood and metal.

The problem for the large company is to decide on the prices that it should offer for the wood and metal.

Example: Killer acquisition

106

Let y1 be the price, in dollars, offered for a kg of wood and y2 be the price, in dollars, offered for a kg of metal. Clearly y1 and y2 have to be nonnegative. Then the total expense of the buyout is 100y1 + 60y2. The company obviously wants to minimise this.

What are the constraints on y1 and y2?

It takes 8 kgs of wood and 5 kgs of metal to make a table. It would cost the large company 8y1 + 5y2 to buy this 8 kgs of wood and 5 kgs of metal.

On the other hand, the small company makes $80 from selling the table. If 8y1 + 5y2 is less than this amount, then the small company has the potential to come back to the resource supplier with a better offer, even if it manufactures only tables. Thus we need 8y1 + 5y2 ≥ 80.

Similarly we have 6y1 + 4y2 ≥ 60 and 4y1 + 4y2 ≥ 50.

This leads to

min w = 100y1 + 60y2

subject to

8y1 + 5y2 ≥ 80

6y1 + 4y2 ≥ 60

4y1 + 4y2 ≥ 50

y1, y2 ≥ 0.
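As a numerical aside (not part of the notes, which use Gurobi later on), the primal-dual pair above can be checked with scipy.optimize.linprog, which minimises by default, so the max objective is negated and the ≥ constraints are rewritten as ≤:

```python
# Hypothetical check of the furniture primal (LP relaxation) and its dual.
from scipy.optimize import linprog

# Primal: max 80x1 + 60x2 + 50x3 -> minimise the negated objective
primal = linprog(c=[-80, -60, -50],
                 A_ub=[[8, 6, 4], [5, 4, 4]],
                 b_ub=[100, 60])

# Dual: min 100y1 + 60y2 s.t. the three >= constraints, negated into <= form
dual = linprog(c=[100, 60],
               A_ub=[[-8, -5], [-6, -4], [-4, -4]],
               b_ub=[-80, -60, -50])

print(-primal.fun, dual.fun)  # both equal 960: the two problems agree
```

Both optimal values coincide, a fact formalised later in the chapter as strong duality.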

107

An individual has a choice of two types of food to eat, meat and potatoes, each offering varying degrees of nutritional

benefit. He has been warned by his doctor that he must receive at least 400 units of protein, 200 units of carbohydrates

and 100 units of fat from his daily diet.

Given that a kg of steak costs $10 and provides 80 units of protein, 20 units of carbohydrates and 30 units of fat, and

that a kg of potatoes costs $2 and provides 40 units of protein, 50 units of carbohydrates and 20 units of fat, he would

like to find the minimum cost diet which satisfies his nutritional requirements.

Let x1 be the number of kgs of steak that the man buys and x2 be the number of kgs of potatoes that he buys. These have to be nonnegative. The problem that minimises the cost of the diet is:

min z = 10x1 + 2x2

subject to

80x1 + 40x2 ≥ 400

20x1 + 50x2 ≥ 200

30x1 + 20x2 ≥ 100

x1, x2 ≥ 0.
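An illustrative check of this diet LP (an aside, assuming scipy is available; linprog minimises, and the ≥ constraints are negated into ≤ form):

```python
from scipy.optimize import linprog

# min 10x1 + 2x2 subject to the three nutrient constraints
res = linprog(c=[10, 2],
              A_ub=[[-80, -40], [-20, -50], [-30, -20]],
              b_ub=[-400, -200, -100])

print(res.x, res.fun)  # optimum buys only potatoes: x = (0, 10), cost $20
```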

—

A chemical company hopes to attract this man away from his present diet by offering him synthetic nutrients in the

form of pills. The problem for the company is to determine the prices per unit for their synthetic nutrients.

The company wants to generate the highest possible revenue from the transaction, but needs to keep the prices low

enough so that the man can’t satisfy his requirements from the natural foods in a cheaper way.

Example: Eating pills

108

Let y1 be the price, in dollars, offered for a unit of protein, y2 be the price, in dollars, offered for a unit of carbohydrate, and y3 be the price, in dollars, offered for a unit of fat.

Then the man will have to pay 400y1 + 200y2 + 100y3 to satisfy his dietary requirements. The company wants to maximise this.

If 80y1 + 20y2 + 30y3 > 10, then the man could do better by buying just steak, and if 40y1 + 50y2 + 20y3 > 2, then the man could do better by buying just potatoes. Thus we need

80y1 + 20y2 + 30y3 ≤ 10

and

40y1 + 50y2 + 20y3 ≤ 2.

The Dual Problem reads:

max w = 400y1 + 200y2 + 100y3

subject to

80y1 + 20y2 + 30y3 ≤ 10

40y1 + 50y2 + 20y3 ≤ 2

y1, y2, y3 ≥ 0.

109

Consider again the Dovetail problem:

z = max 3x1 + 2x2

s.t.

x1 + x2 ≤ 9,

3x1 + x2 ≤ 18,

x1 ≤ 7,

x2 ≤ 6,

x1, x2 ≥ 0.

Assume there is another company, Salmonnose, that wants to rent Dovetail's machine for one year and buy its resources. Salmonnose wants to determine the minimum prices it needs to pay for the rental and for the wood, boxes of long matches and boxes of short matches.

Example: Dovetail and Salmonnose

110

We define variables indicating the prices to be paid for the resources:

• y1: the price to rent one unit out of the 9 (× 100,000) units of capacity of Dovetail's machine.

• y2: the price per unit (× 100,000) of Dovetail's wood.

• y3: the price per unit (× 100,000) of Dovetail's boxes of long matches.

• y4: the price per unit (× 100,000) of Dovetail's boxes of short matches.

The price Salmonnose needs to pay is given by 9y1 + 18y2 + 7y3 + 6y4. Consider now the production of long matches. Each unit of long matches requires 1 unit of production capacity, 3 units of wood, and 1 unit of boxes for long matches. The price that Salmonnose has to pay cannot be less than the profit Dovetail makes by selling this unit of long matches. Therefore,

y1 + 3y2 + y3 ≥ 3

For short matches:

y1 + y2 + y4 ≥ 2

And we obtain the dual:

w = min 9y1 + 18y2 + 7y3 + 6y4

s.t.

y1 + 3y2 + y3 ≥ 3,

y1 + y2 + y4 ≥ 2,

y1, y2, y3, y4 ≥ 0.
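The Dovetail/Salmonnose pair can also be cross-checked numerically; a sketch with scipy.optimize.linprog (an aside, not from the notes, which use Gurobi; linprog minimises, so the primal objective is negated):

```python
from scipy.optimize import linprog

# Primal: max 3x1 + 2x2 subject to the four Dovetail constraints
primal = linprog(c=[-3, -2],
                 A_ub=[[1, 1], [3, 1], [1, 0], [0, 1]],
                 b_ub=[9, 18, 7, 6])

# Dual: min 9y1 + 18y2 + 7y3 + 6y4, the two >= constraints negated
dual = linprog(c=[9, 18, 7, 6],
               A_ub=[[-1, -3, -1, 0], [-1, -1, 0, -1]],
               b_ub=[-3, -2])

print(-primal.fun, dual.fun)  # both 22.5
```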

Note: each of the three examples describes some kind of competition between two decision makers. The primal/dual

relationship is often interpreted in the context of economics and game theory.

111

Developing the dual

Consider again the Dovetail problem:

z = max 3x1 + 2x2 (7.1)

s.t.

x1 + x2 ≤ 9, (7.2)

3x1 + x2 ≤ 18, (7.3)

x1 ≤ 7, (7.4)

x2 ≤ 6, (7.5)

x1, x2 ≥ 0.

We will develop what is called the dual model associated with this formulation by using the concept of bounds. We will use the constraints to obtain dual bounds for the original model and then write an optimisation problem to obtain the best possible bound.

Consider constraint (7.2) multiplied by 3:

3x1 + 3x2 ≤ 27 (7.6)

The lhs of (7.6) is 3x1 + 3x2 and since x1 and x2 are non-negative in a feasible solution:

z = 3x1 + 2x2 ≤ 3x1 + 3x2 ≤ 27

and we can conclude that 27 is an upper bound for z. We could do the same for any other constraint or any linear combination of constraints with non-negative multipliers, as long as the coefficients we get for x1 and x2 in the lhs are at least the original coefficients of these variables in the objective function. For example, consider:

(7.3) + (7.5)

3x1 + 2x2 ≤ 24

This bound is better than the one we had before (closer to the actual optimal value of the model). What is the linear combination of constraints that will provide the best possible bound? Let the multipliers be variables y1, y2, y3, y4 ≥ 0 of this associated optimisation problem, with constraints:

y1 + 3y2 + y3 ≥ 3,

y1 + y2 + y4 ≥ 2,

y1, y2, y3, y4 ≥ 0.

The obtained bound is given by:

9y1 + 18y2 + 7y3 + 6y4

The complete optimisation problem of obtaining the best dual bound for Dovetail is therefore:

w = min 9y1 + 18y2 + 7y3 + 6y4 (7.7)

s.t.

y1 + 3y2 + y3 ≥ 3, (7.8)

y1 + y2 + y4 ≥ 2, (7.9)

y1, y2, y3, y4 ≥ 0.

112

This is called the dual model associated with the original (primal) Dovetail model.

The dual model uses the coefficients of the original objective function (c = [3, 2]) as the resource values and the original resource values (b = [9, 18, 7, 6]) in its objective function.
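The bound-combination argument above can be verified numerically; a sketch with a hypothetical helper (not from the notes), where any nonnegative multipliers u whose combination covers the objective coefficients yield the upper bound uᵀb:

```python
import numpy as np

A = np.array([[1, 1], [3, 1], [1, 0], [0, 1]])
b = np.array([9, 18, 7, 6])
c = np.array([3, 2])

def bound(u):
    """Upper bound on z from nonnegative multipliers u with u @ A >= c."""
    u = np.array(u, dtype=float)
    assert (u >= 0).all() and (u @ A >= c).all()  # valid multipliers
    return u @ b

print(bound([3, 0, 0, 0]))  # 27.0, from 3 x constraint (7.2)
print(bound([0, 1, 0, 1]))  # 24.0, from (7.3) + (7.5)
```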

113

The Dual of a Linear Programming Problem in “Standard Form"

Let us formalise our notions about the relationship between the primal and dual versions of linear programs. We start by

defining the relationship between a primal linear program in standard form and its dual. For a primal in standard form:

max z = ∑_{j=1}^{n} c_j x_j

subject to

a11 x1 + a12 x2 + ... + a1n xn ≤ b1

a21 x1 + a22 x2 + ... + a2n xn ≤ b2

...

am1 x1 + am2 x2 + ... + amn xn ≤ bm

x1, x2, ..., xn ≥ 0.

Note: we do not require all b_i to be nonnegative.

The dual reads:

min w = ∑_{i=1}^{m} b_i y_i

subject to

a11 y1 + a21 y2 + ... + am1 ym ≥ c1

a12 y1 + a22 y2 + ... + am2 ym ≥ c2

...

a1n y1 + a2n y2 + ... + amn ym ≥ cn

y1, y2, ..., ym ≥ 0.

114

It is convenient to express a linear program and its dual using matrix notation.

Primal LP

max z = c^T x
A x ≤ b
x ≥ 0

Dual LP

min w = b^T y
A^T y ≥ c
y ≥ 0

A = (a_ij)_{m×n}; x = [x1, ..., xn]^T; c = [c1, ..., cn]^T; b = [b1, ..., bm]^T; y = [y1, ..., ym]^T.

Once again, in this formulation we don't impose the restriction that b has to be nonnegative.

115

Obtain the dual problem for:

max z = 5x1 + 3x2 − 8x3 + 12x5

3x1 − 8x2 + 9x4 − 15x5 ≤ 20

18x1 + 5x2 − 8x3 + 4x4 + 12x5 ≤ 30

x1, x2, ..., x5 ≥ 0.

Example:

The dual reads:

min w = 20y1 + 30y2

3y1 + 18y2 ≥ 5

−8y1 + 5y2 ≥ 3

−8y2 ≥ −8

9y1 + 4y2 ≥ 0

−15y1 + 12y2 ≥ 12

y1, y2 ≥ 0.
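Mechanically, the dual data is just a transposition of the primal data; a sketch for the example above (array names are illustrative, not from the notes):

```python
# Building the dual of a standard-form max LP by transposition.
import numpy as np

c = np.array([5, 3, -8, 0, 12])   # objective (x4 has coefficient 0)
A = np.array([[3, -8, 0, 9, -15],
              [18, 5, -8, 4, 12]])
b = np.array([20, 30])

# Dual: min b^T y  subject to  A^T y >= c,  y >= 0
dual_obj, dual_lhs, dual_rhs = b, A.T, c

print(dual_lhs)  # row j gives constraint sum_i a_ij y_i >= c_j
```

The first row of `dual_lhs` reproduces the constraint 3y1 + 18y2 ≥ 5 above.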

116

Obtain the dual problem for:

max z = 4x1 + 10x2 − 9x3

5x1 − 18x2 + 5x3 ≤ 15

−8x1 + 12x2 − 8x3 ≤ 8

12x1 − 4x2 + 8x3 ≤ 10

2x1 − 5x3 ≤ 5

x1, x2, x3 ≥ 0.

The dual reads:

min w = 15y1 + 8y2 + 10y3 + 5y4

5y1 − 8y2 + 12y3 + 2y4 ≥ 4

−18y1 + 12y2 − 4y3 ≥ 10

5y1 − 8y2 + 8y3 − 5y4 ≥ −9

y1, y2, y3, y4 ≥ 0.

117

The dual of the dual

There is nothing intrinsic to a linear problem for it to be called primal or dual. Primal and dual labels are relative. Indeed, we shall now prove that the dual of the dual is the primal.

Consider a primal problem (P) with n variables and m constraints, given in standard form:

max z = c^T x
A x ≤ b
x ≥ 0

As we saw before, the dual of this problem (D) has m variables and n constraints and reads:

min w = b^T y
A^T y ≥ c
y ≥ 0

Let us write (D) as:

max −w = −b^T y
−A^T y ≤ −c
y ≥ 0

118

Note now that the last problem is in the standard form of a primal defined earlier (with the only difference that it has m variables and n constraints). We can therefore apply the conversion rule to obtain the dual of (D), using variables x (one for each of its n constraints):

min −c^T x
−A x ≥ −b
x ≥ 0

But now note that the dual of (D) can be written as:

max c^T x
A x ≤ b
x ≥ 0

which is exactly the same as (P).

The dual of the dual is the primal.

119

Obtaining the dual for non-standard forms

We saw how to obtain the dual of a problem in standard form. Since any problem can be written in standard form (as long as we do not require b ≥ 0), we can always rewrite a problem in non-standard form and use the conversion rule to obtain its dual.

Consider that we want to write the problem in standard form as in:

max z = c^T x
A x ≤ b
x ≥ 0

The issues we might need to deal with are:

• ≥ constraints;

• equality constraints;

• a min objective; and/or

• unrestricted variables.

We will deal with each of these cases.

120

≥ constraints

We can turn ≥ constraints into ≤ constraints simply by multiplying through by −1 (which flips the inequality).

∑_{j=1}^{n} a_ij x_j ≥ b_i

can be written as

−∑_{j=1}^{n} a_ij x_j ≤ −b_i.

Equality constraints

Every equality constraint can be expressed as the intersection of two inequality constraints.

∑_{j=1}^{n} a_ij x_j = b_i

is rewritten as

∑_{j=1}^{n} a_ij x_j ≤ b_i

and

∑_{j=1}^{n} a_ij x_j ≥ b_i.

121

To deal with the "≥" constraint we use the rule above:

∑_{j=1}^{n} a_ij x_j ≤ b_i

and

−∑_{j=1}^{n} a_ij x_j ≤ −b_i.

So we end up with two "≤" constraints and we are happy.

Unrestricted variables

We replace the unrestricted variable by the difference of two non-negative variables.

For example, we replace x3 ∈ R with x3 = x3' − x3'' where x3', x3'' ≥ 0.

Minimisation objective

We replace min f(x) with max −f(x).

122

Obtain the dual associated with the problem below:

max z = x1 + x2 + x3

2x2 − x3 ≥ 4

x1 − 3x2 + 4x3 = 5

x1 − 2x2 ≤ 3

x1, x2 ≥ 0, x3 ∈ R

Example

Using the rules, we can obtain a problem in standard form:

max z = x1 + x2 + x3' − x3''

−2x2 + x3' − x3'' ≤ −4

x1 − 3x2 + 4x3' − 4x3'' ≤ 5

−x1 + 3x2 − 4x3' + 4x3'' ≤ −5

x1 − 2x2 ≤ 3

x1, x2, x3', x3'' ≥ 0

123

And, associating dual variables y1, y2', y2'', y3 (note that these names are arbitrary, but they will make sense soon), we apply the conventional rules to obtain its dual:

min w = −4y1 + 5y2' − 5y2'' + 3y3

y2' − y2'' + y3 ≥ 1

−2y1 − 3y2' + 3y2'' − 2y3 ≥ 1

y1 + 4y2' − 4y2'' ≥ 1

−y1 − 4y2' + 4y2'' ≥ −1

y1, y2', y2'', y3 ≥ 0

We can further simplify this dual by observing that y2' ≥ 0 and y2'' ≥ 0 always appear with coefficients of equal magnitude and opposite sign, and can therefore be written as a single variable y2 = y2' − y2'' ∈ R.

Moreover, the last two constraints define an equality constraint. With these observations, the dual (in non-standard form) reads:

min w = −4y1 + 5y2 + 3y3

y2 + y3 ≥ 1

−2y1 − 3y2 − 2y3 ≥ 1

y1 + 4y2 = 1

y1, y3 ≥ 0, y2 ∈ R

124

The conversion process for non-standard problems can be streamlined by observing that:

• An equality constraint in the primal LP generates a dual variable that is unrestricted in sign.

• An unrestricted variable in the primal LP generates an equality constraint in the dual LP.

Primal LP (max)          Dual LP (min)

Constraint:              Variable:
  ≤ form                   ≥ 0
  = form                   ∈ R

Variable:                Constraint:
  ≥ 0                      ≥ form
  ∈ R                      = form

Costs                    Resources
Resources                Costs

125

Let:

a_i: ith row of A = (a_ij)_{m×n}, i = 1, ..., m.

a^j: jth column of A = (a_ij)_{m×n}, j = 1, ..., n.

x: n × 1; c: n × 1; b: m × 1; y: m × 1.

We can observe the relations:

Primal

max z = c^T x
a_i x ≤ b_i, i ∈ I1
a_i x = b_i, i ∈ I2
x_j ≥ 0, j ∈ J1
x_j ∈ R, j ∈ J2

Dual

min w = b^T y
y^T a^j ≥ c_j, j ∈ J1
y^T a^j = c_j, j ∈ J2
y_i ≥ 0, i ∈ I1
y_i ∈ R, i ∈ I2

I1, I2: row-index sets which partition {1, ..., m}

J1, J2: column-index sets which partition {1, ..., n}

126

Find the dual for the problem:

max z = 5x1 + 4x2

3x1 − 8x2 ≥ −6

x1 + 6x2 = −5

8x1 + 3x2 = 10

x2 ≥ 0, x1 ∈ R

Example

We first convert ≥ constraints to ≤ constraints.

max z = 5x1 + 4x2

−3x1 + 8x2 ≤ 6

x1 + 6x2 = −5

8x1 + 3x2 = 10

x2 ≥ 0, x1 ∈ R.

And then apply the streamlined conversion:

min w = 6y1 − 5y2 + 10y3

−3y1 + y2 + 8y3 = 5

8y1 + 6y2 + 3y3 ≥ 4

y1 ≥ 0, y2, y3 ∈ R

127

The duality theorems - Weak Duality

The value of any feasible solution to the max problem (whether you call it primal or dual) will always be a lower bound on the min problem. Conversely, the value of any feasible solution to the min problem (whether you call it primal or dual) will always be an upper bound on the max problem.

This is known as weak duality.

Consider the following primal-dual pair:

Primal

max z = c^T x
A x ≤ b
x ≥ 0

Dual

min w = b^T y
A^T y ≥ c
y ≥ 0

If x is a feasible solution to the primal problem and y is a feasible solution to the dual problem, then

c^T x ≤ b^T y.

Weak duality

Proof: Any feasible solution to the primal has x ≥ 0 and Ax ≤ b, and any feasible solution to the dual has y ≥ 0 and A^T y ≥ c. Therefore:

z = c^T x ≤ (A^T y)^T x = y^T (A x) ≤ y^T b = b^T y = w
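A quick numerical illustration of weak duality on the Dovetail pair (the feasible points are chosen by hand; any feasible x and y would do):

```python
import numpy as np

A = np.array([[1, 1], [3, 1], [1, 0], [0, 1]])
b = np.array([9, 18, 7, 6])
c = np.array([3, 2])

x = np.array([1, 1])         # feasible for the primal: A @ x <= b
y = np.array([3, 0, 0, 0])   # feasible for the dual:  A.T @ y >= c
assert (A @ x <= b).all() and (A.T @ y >= c).all()

print(c @ x, b @ y)  # z = 5 <= w = 27, as weak duality guarantees
```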

128

Corollaries from weak duality

• If the objective function of the primal is unbounded on the feasible region, then the dual has no feasible solutions.

• If the objective function of the dual is unbounded on the feasible region, then the primal has no feasible solutions.

• Let x* be a feasible solution to the primal and y* be a feasible solution to the dual. If

c^T x* = b^T y*,

then x* must be an optimal solution to the primal and y* must be an optimal solution to the dual.

129

The duality theorems - Strong Duality

The strong duality theorem states that if a model has an optimal solution, its dual also has an optimal solution, their values are the same, and the dual solution can be obtained from the optimal primal basis. We will prove the following version of the theorem:

If a primal model (P) is feasible and bounded, its dual (D) also has an optimal solution and both have the same objective function value.

Strong duality

Proof: consider the primal problem in canonical form, with the addition of slack variables. If the primal is feasible and bounded, there is an optimal extreme point and we can express it as:

x* = [x*_B, x*_N]^T, with x*_N = 0.

Consider a tentative solution y^T = c_B B^{-1} for the dual problem. We first show that this solution is feasible:

(i) Feasibility: we need to show that y^T A ≥ c. This can be expressed as:

y^T [B N] ≥ [c_B c_N]

Since by hypothesis y^T = c_B B^{-1}, we have:

y^T [B N] = [c_B B^{-1} B, c_B B^{-1} N] = [c_B, c_B B^{-1} N] ≥ [c_B, c_N]

where the first part is an equality and the second part holds since the partition is optimal and therefore the reduced costs given by c_N − c_B B^{-1} N are non-positive.

(ii) Optimality:

w = y^T b = c_B B^{-1} b = c_B x*_B = z*

and therefore, by Weak Duality, y^T = c_B B^{-1} must be optimal.

130

Solving the primal = Solving the dual

The proof of the strong duality theorem introduced some interesting information. Given an optimal solution for the primal problem (and, consequently, an optimal basis B), we can automatically find a solution for the dual problem, given by y^T = c_B B^{-1}.

Indeed, this information is already present in the optimal tableau of the simplex algorithm. Recall what we did in the tableau simplex. Given a partition of the variables into basic and non-basic:

BV   x_B     x_N     RHS
     B       N       b
z    −c_B    −c_N    0

We left-multiplied the core of the tableau by B^{-1}, and added the core of the tableau left-multiplied by c_B to the z-row, to obtain the tableau in canonical form:

BV   x_B   x_N                    RHS
     I     B^{-1}N                B^{-1}b
z    0     −c_N + c_B B^{-1}N     c_B B^{-1}b

131

Now let us do the same operations, but starting with an initial tableau in the format we mostly use (with the slack variables in the last columns):

BV   x     s    RHS
     A     I    b
z    −c    0    0

Given an optimal basis, we can left-multiply the core of the tableau by B^{-1}, and add the core of the tableau left-multiplied by c_B to the z-row, to obtain:

BV   x                  s            RHS
     B^{-1}A            B^{-1}       B^{-1}b
z    c_B B^{-1}A − c    c_B B^{-1}   c_B B^{-1}b

Interpretation: by starting with the problem in standard form and adding slack variables, the final tableau will contain several pieces of important information.

• The values of the primal variables appear in the RHS.

• The objective value appears in the z-row, below the RHS.

• The inverse of the optimal basis appears below the slack variables, in the core of the tableau.

• The values of the dual variables appear in the z-row, below the slack variables.

• The values of the surplus variables of the dual appear in the z-row, below the original variables.

132

Consider the initial tableau below:

BV   x1    x2    x3    x4   x5   RHS
x4   8     6     4     1    0    100
x5   5     4     4     0    1    60
z    −80   −60   −50   0    0    0

where x4 and x5 are slack variables (introduced when converting standard form to canonical form).

And its optimal equivalent:

BV   x1   x2     x3      x4   x5     RHS
x4   0    −2/5   −12/5   1    −8/5   4
x1   1    4/5    4/5     0    1/5    12
z    0    4      14      0    16     960

Example

From this optimal tableau for the primal we know that

• (x1, x2, x3) = (12, 0, 0) is an optimal solution to the primal problem;

• (y1, y2) = (0, 16) is an optimal solution to the dual problem;

• and both problems have the optimal objective value z* = w* = 960.

133

Consider the primal-dual pair:

Primal                          Dual

max z = x1 + x2 + x3            min w = y1 + y2 + y3

s.t.                            s.t.

8x1 + 4x2 + 2x3 ≤ 1             8y1 + 2y2 + y3 ≥ 1

2x1 + 8x2 + 4x3 ≤ 1             4y1 + 8y2 + 2y3 ≥ 1

x1 + 2x2 + 8x3 ≤ 1              2y1 + 4y2 + 8y3 ≥ 1

x1, x2, x3 ≥ 0                  y1, y2, y3 ≥ 0

Solve the primal and obtain the solutions for the dual.

Example

Initial tableau for the primal problem (where x4, x5, x6 are slack variables):

BV   x1   x2   x3   x4   x5   x6   RHS
x4   8    4    2    1    0    0    1
x5   2    8    4    0    1    0    1
x6   1    2    8    0    0    1    1
z    −1   −1   −1   0    0    0    0

BV   x1   x2     x3     x4     x5   x6   RHS
x1   1    1/2    1/4    1/8    0    0    1/8
x5   0    7      7/2    −1/4   1    0    3/4
x6   0    3/2    31/4   −1/8   0    1    7/8
z    0    −1/2   −3/4   1/8    0    0    1/8

BV   x1   x2       x3   x4      x5   x6       RHS
x1   1    14/31    0    4/31    0    −1/31    3/31
x5   0    196/31   0    −6/31   1    −14/31   11/31
x3   0    6/31     1    −1/62   0    4/31     7/62
z    0    −11/31   0    7/62    0    3/31     13/62

BV   x1   x2   x3   x4      x5       x6      RHS
x1   1    0    0    1/7     −1/14    0       1/14
x2   0    1    0    −3/98   31/196   −1/14   11/196
x3   0    0    1    −1/98   −3/98    1/7     5/49
z    0    0    0    5/49    11/196   1/14    45/196

134

From the final tableau we have

• (x1, x2, x3) = (1/14, 11/196, 5/49) is an optimal solution to the primal problem;

• (y1, y2, y3) = (5/49, 11/196, 1/14) is an optimal solution to the dual problem; and

• the optimal objective value for both problems is 45/196.

135

An implementation:

import gurobipy as gp
from gurobipy import GRB

#### PRIMAL
p = gp.Model("primal")
I = [1,2,3]
x = p.addVars(I, name="x")
p.addConstr( 8*x[1] + 4*x[2] + 2*x[3] <= 1)
p.addConstr( 2*x[1] + 8*x[2] + 4*x[3] <= 1)
p.addConstr( 1*x[1] + 2*x[2] + 8*x[3] <= 1)
p.setObjective( x[1] + x[2] + x[3], GRB.MAXIMIZE )
p.optimize()

print('SOLUTION:')
print(' Obj Val: ', p.objVal)
for v in p.getVars():
    if v.x > 1e-6:
        print(" ", v.varName, v.x)
duals = p.getAttr("Pi", p.getConstrs())
print("Dual solution for primal:", duals)

#### DUAL
d = gp.Model("dual")
I = [1,2,3]
y = d.addVars(I, name="y")
d.addConstr( 8*y[1] + 2*y[2] + 1*y[3] >= 1)
d.addConstr( 4*y[1] + 8*y[2] + 2*y[3] >= 1)
d.addConstr( 2*y[1] + 4*y[2] + 8*y[3] >= 1)
d.setObjective( y[1] + y[2] + y[3], GRB.MINIMIZE )
d.optimize()

print('SOLUTION:')
print(' Obj Val: ', d.objVal)
for v in d.getVars():
    if v.x > 1e-6:
        print(" ", v.varName, v.x)
duals = d.getAttr("Pi", d.getConstrs())
print("Dual solution for dual:", duals)

print('Surplus variables for dual')
print("cons 1:", 8*y[1].x + 2*y[2].x + 1*y[3].x - 1)
print("cons 2:", 4*y[1].x + 8*y[2].x + 2*y[3].x - 1)
print("cons 3:", 2*y[1].x + 4*y[2].x + 8*y[3].x - 1)
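For readers without a Gurobi licence, the same primal-dual pair can be checked with scipy.optimize.linprog (an alternative sketch, not part of the notes; linprog minimises, so the primal objective is negated and the dual ≥ constraints are negated into ≤ form):

```python
from scipy.optimize import linprog

A = [[8, 4, 2], [2, 8, 4], [1, 2, 8]]

# Primal: max x1 + x2 + x3 subject to A x <= 1, x >= 0
primal = linprog(c=[-1, -1, -1], A_ub=A, b_ub=[1, 1, 1])

# Dual: min y1 + y2 + y3 subject to A^T y >= 1, y >= 0
AT = [[8, 2, 1], [4, 8, 2], [2, 4, 8]]
dual = linprog(c=[1, 1, 1],
               A_ub=[[-a for a in row] for row in AT],
               b_ub=[-1, -1, -1])

print(-primal.fun, dual.fun)  # both 45/196, matching the tableau above
```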

136

Economic interpretation of the dual variables:

Consider the following primal/dual pair:

z = min c^T x          w = max b^T y
s.t.                   s.t.
A x ≥ b,               A^T y ≤ c,
x ≥ 0.                 y ≥ 0.

Given a basis B for the primal, we saw earlier that the objective function of the primal can be written as:

z = c_B B^{-1} b + (c_N − c_B B^{-1} N) x_N

Assume this is an optimal basis and consider a slight change in one of the coefficients of b, say b_i. The change is small enough such that the current basis remains optimal. The change in the objective function with respect to the change in b_i can be written as:

∂z*/∂b_i = ∂/∂b_i [ c_B B^{-1} b + (c_N − c_B B^{-1} N) x_N ]

(the second term vanishes since x_N = 0 at the optimum). Let B^{-1}_i be the ith column of B^{-1}:

∂z*/∂b_i = c_B B^{-1}_i = y*_i

Therefore, the rate of change in the optimal objective function for a small change in b_i is given by y*_i.

137

Obtain the dual variables for the problem. Find the change in the objective function for different values of the first resource.

Dovetail/Salmonnose example continued

import gurobipy as gp
from gurobipy import GRB

# Dovetail primal
m = gp.Model("dovetail")
c = [3,2]
A = [[1, 1], [3,1], [1,0], [0,1]]
b = [9,18,7,6]
N = range(len(c))
M = range(len(b))
x = m.addVars(N, name='x')
m.setObjective( gp.quicksum(x[i]*c[i] for i in N), GRB.MAXIMIZE)
cons = m.addConstrs( gp.quicksum(A[j][i]*x[i] for i in N) <= b[j] for j in M )
m.optimize()

# Display primal optimal values of decision variables
prim = m.getAttr("x", m.getVars())
print("Primal solution: ", prim)

# Display dual values (shadow prices) of the constraints
duals = m.getAttr("Pi", m.getConstrs())
print("Dual solution:", duals)

# Display optimal solution value
print('Total profit: ', m.objVal)

138

Optimal solution:

Objective: 22.5

Primal solution
x[0] = 4.5
x[1] = 4.5

Dual solution
u[0] = 1.5
u[1] = 0.5
u[2] = 0.0
u[3] = 0.0

The pictures below show the feasible space of the primal on variables x1, x2 and the projection of the feasible space of the dual on variables y1, y2 for y3, y4 fixed.

[Figure: primal feasible region with optimal vertex (4.5, 4.5) and objective direction (3, 2); projected dual feasible region with optimal vertex (1.5, 0.5) and objective direction (9, 18).]

139

Changing the right hand side to 9.5 (i.e., increasing the first resource b1 by 0.5 units):

#Slightly changing b[0] from 9 to 9.5:
cons[0].rhs = 9.5
m.optimize()

# Display primal optimal values of decision variables
prim = m.getAttr("x", m.getVars())
print("Primal solution: ", prim)

# Display dual values of the constraints
duals = m.getAttr("Pi", m.getConstrs())
print("Dual solution:", duals)

# Display optimal solution value
print('Total profit: ', m.objVal)

Objective: 23.25

Primal solution
x[0] = 4.25
x[1] = 5.25

Dual solution
u[0] = 1.5
u[1] = 0.5
u[2] = 0.0
u[3] = 0.0

Note that the set of active constraints remains the same. Therefore, the objective function, as expected, increases by Δb1 × y1 = 0.5 × 1.5 = 0.75.
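The shadow-price prediction can also be reproduced without Gurobi; a sketch with scipy.optimize.linprog (an assumption-level re-implementation of the experiment above):

```python
from scipy.optimize import linprog

A = [[1, 1], [3, 1], [1, 0], [0, 1]]
c = [-3, -2]  # negate to maximise

# Solve with b1 = 9 and with b1 = 9.5, everything else unchanged
z_base = -linprog(c=c, A_ub=A, b_ub=[9, 18, 7, 6]).fun
z_new = -linprog(c=c, A_ub=A, b_ub=[9.5, 18, 7, 6]).fun

print(z_new - z_base)  # 0.75 = delta_b1 * y1 = 0.5 * 1.5
```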

[Figure: primal feasible region with optimal vertex (4.25, 5.25) and objective direction (3, 2); projected dual feasible region with optimal vertex unchanged at (1.5, 0.5) and objective direction (9.5, 18).]

This interpretation is valid only for small changes in b (the basis needs to remain the same). If we increase the right hand side to a value such that the basis changes, we cannot use the dual variables to obtain the change. For example, increasing b1 to 12:

# changing b[0] to 12:
cons[0].rhs = 12

140

m.optimize()

# Display primal optimal values of decision variables
prim = m.getAttr("x", m.getVars())
print("Primal solution: ", prim)

# Display dual values of the constraints
duals = m.getAttr("Pi", m.getConstrs())
print("Dual solution:", duals)

# Display optimal solution value
print('Total profit: ', m.objVal)

Objective: 24.0

Primal solution
x[0] = 4.0
x[1] = 6.0

Dual solution
u[0] = 0.0
u[1] = 1.0
u[2] = 0.0
u[3] = 1.0

Note that the increase in the objective function with respect to the original problem is 24 − 22.5 = 1.5 ≠ Δb1 × y1 = 3 × 1.5 = 4.5.

Primal and dual feasible spaces:

[Figure: primal feasible region with optimal vertex (4, 6) and objective direction (3, 2); projected dual feasible region with optimal vertex (0, 1) and objective direction (12, 18).]

141

Complementary slackness

We consider again our primal-dual pair:

Primal LP

max z = c^T x
A x ≤ b
x ≥ 0

Dual LP

min w = b^T y
A^T y ≥ c
y ≥ 0

A = (a_ij)_{m×n}; x = [x1, ..., xn]^T; c = [c1, ..., cn]^T; b = [b1, ..., bm]^T; y = [y1, ..., ym]^T.

Let:

• a_i denote the ith row of A, i = 1, ..., m,

• a^j denote the jth column of A, j = 1, ..., n.

The relation between the primal-dual pair constraints and variables can be written as:

• the ith dual variable y_i corresponds to the ith functional constraint a_i x ≤ b_i of the primal problem; and

• the jth primal variable x_j corresponds to the jth functional constraint y^T a^j ≥ c_j of the dual problem.

142

Let x be a feasible solution to the primal problem and y a feasible solution to the dual problem. Then x is optimal to the primal and y is optimal to the dual if and only if

y_i (b_i − a_i x) = 0, i = 1, ..., m

(y^T a^j − c_j) x_j = 0, j = 1, ..., n,

that is,

y_i ( b_i − ∑_{j=1}^{n} a_ij x_j ) = 0, i = 1, ..., m

( ∑_{i=1}^{m} a_ij y_i − c_j ) x_j = 0, j = 1, ..., n

The Complementary Slackness Conditions

The Complementary Slackness Conditions state that

• if a primal constraint is non-binding, then the corresponding dual variable is zero (that is, if a dual variable is non-zero, then the corresponding primal constraint is binding), and

• if a dual constraint is non-binding, then the corresponding primal variable is zero (that is, if a primal variable is non-zero, then the corresponding dual constraint is binding).

In other words,

• either a primal variable is zero or the corresponding dual constraint is satisfied with equality, and

• either a primal constraint is satisfied with equality or the corresponding dual variable is zero.

The Complementary Slackness Conditions constitute one of the most important results in linear programming.

143

Proof: Introduce the slack variables s1, s2, ..., sm for the primal problem and the surplus variables v1, v2, ..., vn for the dual problem.

max_{x,s} z = c1 x1 + ... + cn xn + 0 s1 + 0 s2 + ... + 0 sm

a11 x1 + a12 x2 + · · · + a1n xn + s1 = b1
a21 x1 + a22 x2 + · · · + a2n xn + s2 = b2
...
am1 x1 + am2 x2 + · · · + amn xn + sm = bm
x1, x2, ..., xn, s1, s2, ..., sm ≥ 0.

Note that

s_i = b_i − a_i x, i = 1, ..., m.

min_{y,v} w = b1 y1 + ... + bm ym + 0 v1 + ... + 0 vn

a11 y1 + a21 y2 + · · · + am1 ym − v1 = c1
a12 y1 + a22 y2 + · · · + am2 ym − v2 = c2
...
a1n y1 + a2n y2 + · · · + amn ym − vn = cn
y1, y2, ..., ym, v1, v2, ..., vn ≥ 0.

Note that

v_j = y^T a^j − c_j, j = 1, ..., n.

144

The ith constraint of the primal LP is

∑_{j=1}^{n} a_ij x_j + s_i = b_i.

Multiplying this by y_i, and summing for i = 1 to m, we have

∑_{i=1}^{m} y_i ( ∑_{j=1}^{n} a_ij x_j + s_i ) = ∑_{i=1}^{m} b_i y_i

i.e.

∑_{j=1}^{n} ( ∑_{i=1}^{m} a_ij y_i ) x_j + ∑_{i=1}^{m} y_i s_i = ∑_{i=1}^{m} b_i y_i

Now subtract ∑_{j=1}^{n} c_j x_j from both sides to get

∑_{j=1}^{n} ( ∑_{i=1}^{m} a_ij y_i − c_j ) x_j + ∑_{i=1}^{m} y_i s_i = ∑_{i=1}^{m} b_i y_i − ∑_{j=1}^{n} c_j x_j

Since v_j = y^T a^j − c_j = ∑_{i=1}^{m} a_ij y_i − c_j, this can be rewritten as

∑_{j=1}^{n} v_j x_j + ∑_{i=1}^{m} y_i s_i = ∑_{i=1}^{m} b_i y_i − ∑_{j=1}^{n} c_j x_j

By the Strong Duality Theorem, if x and y are optimal solutions to the primal and dual respectively, then

∑_{j=1}^{n} c_j x_j = ∑_{i=1}^{m} b_i y_i

and so the above equation becomes

∑_{j=1}^{n} v_j x_j + ∑_{i=1}^{m} y_i s_i = 0

145

Since all terms are nonnegative, this implies that each term is zero, that is,

y_i (b_i − a_i x) = y_i s_i = 0, i = 1, ..., m

(y^T a^j − c_j) x_j = v_j x_j = 0, j = 1, ..., n

On the other hand, if v_j x_j = 0 for all j and y_i s_i = 0 for all i, then

∑_{j=1}^{n} c_j x_j = ∑_{i=1}^{m} b_i y_i

and so x and y are optimal by weak duality.

Consider the following final tableau for a linear programming problem:

BV   x1   x2     x3      s1   s2     RHS
s1   0    −2/5   −12/5   1    −8/5   4
x1   1    4/5    4/5     0    1/5    12
z    0    4      14      0    16     960

Example

From the tableau, (x1, x2, x3) = (12, 0, 0) and (y1, y2) = (0, 16) are optimal solutions to the primal and dual problems respectively. We also have (s1, s2) = (4, 0) and (v1, v2, v3) = (0, 4, 14).

It is easy to verify that y_i s_i = 0 for i = 1, 2 and v_j x_j = 0 for j = 1, 2, 3.

That is, if y_i ≠ 0 then s_i = 0, and if s_i ≠ 0 then y_i = 0. If x_j ≠ 0 then v_j = 0, and if v_j ≠ 0 then x_j = 0.
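The complementarity products from this example are trivial to verify numerically; a short sketch:

```python
import numpy as np

# Values read off the final tableau above
x = np.array([12, 0, 0])    # primal solution
s = np.array([4, 0])        # primal slacks
y = np.array([0, 16])       # dual solution
v = np.array([0, 4, 14])    # dual surpluses

# Every elementwise product y_i * s_i and v_j * x_j must vanish
assert (y * s == 0).all() and (v * x == 0).all()
print("complementary slackness holds")
```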

146

The Complementary Slackness Theorem has a number of applications. It can be used to

• calculate the solution of the dual (respectively primal) when the solution of the primal (respectively dual) is known,

• verify whether a proposed solution of either the primal or dual is optimal, and

• design algorithms for certain structured problems.

We will give an example for each of the first two applications. For the third application, if you choose to do the MSc subject "Network Optimisation" (which is offered in odd years), you will see beautiful applications of the complementary slackness conditions in algorithm design.

147

The linear program

max z = 4x1 + x2

x1 − x2 ≤ 1

5x1 + x2 ≤ 55

−x1 + 2x2 ≤ 3

x1, x2 ≥ 0,

has an optimal solution x1* = 5, x2* = 4. What is the optimal solution of the dual?

Example

The dual program is

min w = y1 + 55y2 + 3y3

y1 + 5y2 − y3 ≥ 4

−y1 + y2 + 2y3 ≥ 1

y1, y2, y3 ≥ 0.

Writing the complementary slackness conditions, we obtain:

y1 (1 − x1 + x2) = 0
y2 (55 − 5x1 − x2) = 0
y3 (3 + x1 − 2x2) = 0
x1 (4 − y1 − 5y2 + y3) = 0
x2 (1 + y1 − y2 − 2y3) = 0

Notice that these comprise five non-linear equations in the five variables x1, x2, y1, y2, y3.

Substituting x1* = 5, x2* = 4 into the complementary slackness conditions, we get

0 · y1* = 0
26 y2* = 0
0 · y3* = 0
5 (4 − y1* − 5y2* + y3*) = 0
4 (1 + y1* − y2* − 2y3*) = 0

The second equation gives y2* = 0 and then the fourth and fifth equations give y1* = 9, y3* = 5. Check that this is feasible for the dual.

The solution to the dual is thus (y1*, y2*, y3*) = (9, 0, 5) at which point w* = 24. Check that this is also equal to z*.
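The CS-derived dual solution can be cross-checked by solving the dual directly; a sketch with scipy.optimize.linprog (assuming scipy is available; the ≥ constraints are negated into ≤ form):

```python
from scipy.optimize import linprog

# Dual: min y1 + 55 y2 + 3 y3
#       s.t. y1 + 5y2 - y3 >= 4,  -y1 + y2 + 2y3 >= 1,  y >= 0
res = linprog(c=[1, 55, 3],
              A_ub=[[-1, -5, 1], [1, -1, -2]],
              b_ub=[-4, -1])

print(res.x, res.fun)  # y* = (9, 0, 5), w* = 24 = z*
```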

148

For the linear program

max z = 8x1 − 9x2 + 12x3 + 4x4 + 11x5

2x1 − 3x2 + 4x3 + x4 + 3x5 ≤ 1

x1 + 7x2 + 3x3 − 2x4 + x5 ≤ 1

5x1 + 4x2 − 6x3 + 2x4 + 3x5 ≤ 22

x1, ..., x5 ≥ 0,

it has been proposed that the optimal solution is x1* = 0, x2* = 2, x3* = 0, x4* = 7, x5* = 0. Is this correct?

Example

The dual linear program is

min w = y1 + y2 + 22y3

2y1 + y2 + 5y3 ≥ 8

−3y1 + 7y2 + 4y3 ≥ −9

4y1 + 3y2 − 6y3 ≥ 12

y1 − 2y2 + 2y3 ≥ 4

3y1 + y2 + 3y3 ≥ 11

y1, y2, y3 ≥ 0.

The primal variables x2* and x4* are positive. So the complementary slackness conditions give

−9 + 3y1* − 7y2* − 4y3* = 0

4 − y1* + 2y2* − 2y3* = 0.

Also, since the second primal constraint is non-binding, we have

y2* = 0.

Solving, we see that y1* = 34/10 and y3* = 3/10.

However, the solution (y1*, y2*, y3*) = (34/10, 0, 3/10) is not dual feasible as

4y1* + 3y2* − 6y3* = 118/10 < 12.

Therefore the postulated solution is not optimal for the primal.

149

In this chapter, we have developed the Weak Duality, Strong Duality and Complementary Slackness Conditions for primal-dual pairs in standard form.

These results are still valid for LP problems in non-standard form (i.e. possibly with = constraints and variables which are unrestricted in sign).

The statements of these results under the general setting are exactly the same as what we saw for standard forms. For example, for the complementary slackness conditions, we can write:

Consider

Primal

max z = c^T x
a_i x ≤ b_i, i ∈ I1
a_i x = b_i, i ∈ I2
x_j ≥ 0, j ∈ J1
x_j ∈ R, j ∈ J2

Dual

min w = b^T y
y^T a^j ≥ c_j, j ∈ J1
y^T a^j = c_j, j ∈ J2
y_i ≥ 0, i ∈ I1
y_i ∈ R, i ∈ I2

Let x be a feasible solution to the primal problem and y a feasible solution to the dual problem. Then x is optimal to the primal and y is optimal to the dual if and only if

y_i (b_i − a_i x) = 0, for all i

(y^T a^j − c_j) x_j = 0, for all j

150 151

Chapter 8

Sensitivity analysis


152

Uncertainty

Until now, we assumed we knew all parameters with certainty. In practice, however, this is certainly (pun intended) not true.

In the last chapter, we saw that dual variables can be used to evaluate the (local) rate of change in the objective function for

variations in the resource parameters.

In this chapter, we will examine how much the parameters can change (parametric changes) while the indices of the optimal basis remain the same.

Parametric changes

How is the solution affected by such perturbations? For example:

• How much can I change the RHS before the basis changes?

• How much can I change the cost coefficients before the basis changes?

We will discuss what to do if there are post-hoc changes in b or c.

There are two important aspects of the new solution that we need to check:

• feasibility, and

• optimality.

153

Changes in the resource vector

We saw before that for an optimal basis B, the values of the basic variables are given by B^{-1}b and can be read directly from the optimal tableau:

BV   x                  s            RHS
     B^{-1}A            B^{-1}       B^{-1}b
z    c_B B^{-1}A − c    c_B B^{-1}   c_B B^{-1}b

What changes occur in the tableau if the original resource vector b is changed to b'?

BV   x                  s            RHS
     B^{-1}A            B^{-1}       B^{-1}b'
z    c_B B^{-1}A − c    c_B B^{-1}   c_B B^{-1}b'

Question: When is this new tableau feasible?

The only change in the tableau was the change in the RHS from B^{-1}b to B^{-1}b' and in the objective function from c_B B^{-1}b to c_B B^{-1}b'.

The new tableau is feasible if B^{-1}b' ≥ 0. Note that since the reduced costs remain the same (with respect to the change in the resource vector), if the tableau remains feasible, it also remains optimal, and the new objective function value is given by c_B B^{-1}b'.

154

Consider the problem:

z = max 3x1 + 2x2
s.t.
x1 + x2 ≤ 9,
3x1 + x2 ≤ 18,
x1 ≤ 7,
x2 ≤ 6,
x1, x2 ≥ 0.

represented as:

z = max 3x1 + 2x2
s.t.
x1 + x2 + x3 = 9,
3x1 + x2 + x4 = 18,
x1 + x5 = 7,
x2 + x6 = 6,
x1, x2, x3, x4, x5, x6 ≥ 0.

What happens to the optimal solution if the actual value of the first resource is 8 instead of 9?

Example

We saw that the current optimal solution was given by:

BV = [x2, x1, x5, x6], NBV = [x4, x3].

B =
[ 1 1 0 0 ]
[ 1 3 0 0 ]
[ 0 1 1 0 ]
[ 1 0 0 1 ]

N =
[ 0 1 ]
[ 1 0 ]
[ 0 0 ]
[ 0 0 ]

c_B = [2, 3, 0, 0]^T, c_N = [0, 0]^T

and the reduced costs

c_N − c_B B^{-1}N = [ −0.5 −1.5 ]

155

Changing the resource vector from b = [9, 18, 7, 6]^T to b' = [8, 18, 7, 6]^T, the current basis has the following values:

x_B = B^{-1}b' = [3, 5, 2, 3]^T ≥ 0 and the solution remains feasible.

As the reduced costs do not change with a change in the resource vector, the basis remains the same. Therefore the solution remains optimal.

[Figure 8.1: Pictorial representation of the change in the first parameter (b1 = 9 to b1 = 8). Note that the indices of the basic variables remain the same.]

156

Consider the problem:

z = max 3x₁ + 2x₂
s.t.
x₁ + x₂ ≤ 9,
3x₁ + x₂ ≤ 18,
x₁ ≤ 7,
x₂ ≤ 6,
x₁, x₂ ≥ 0,

represented in standard form as:

z = max 3x₁ + 2x₂
s.t.
x₁ + x₂ + x₃ = 9,
3x₁ + x₂ + x₄ = 18,
x₁ + x₅ = 7,
x₂ + x₆ = 6,
x₁, x₂, x₃, x₄, x₅, x₆ ≥ 0.

What happens to the optimal solution if the actual value of the first resource is 11 instead of 9?

Example

As before, we start with the current optimal basis:

B = [2, 1, 5, 6], N = [4, 3],

B =
1 1 0 0
1 3 0 0
0 1 1 0
1 0 0 1

N =
0 1
1 0
0 0
0 0

c_B = (2, 3, 0, 0)ᵀ, c_N = (0, 0)ᵀ,

and the reduced costs of the non-basic variables

c̄_N = c_N − c_B B⁻¹N = (−0.5, −1.5),


and change the resource vector, in this case from b = (9, 18, 7, 6)ᵀ to b̄ = (11, 18, 7, 6)ᵀ. The current basis now has the following values:

x_B = B⁻¹b̄ = (7.5, 3.5, 3.5, −1.5)ᵀ,

which is not ≥ 0: the basic solution given by indices {2, 1, 5, 6} is no longer feasible, as x₆ is negative. Note that the optimal solution is no longer at the intersection of the lines x₃ = 0 and x₄ = 0 but at the intersection of the lines x₄ = 0 and x₆ = 0.

[Figure omitted: the feasible region in the (x₁, x₂)-plane showing the shift of the first constraint from b₁ = 9 to b₁ = 11.]

Figure 8.2: Pictorial representation of the change in the first parameter. Note that the indices of the basic variables do not remain the same: the optimal vertex moves from the intersection x₃ = x₄ = 0 to the intersection x₄ = x₆ = 0.
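The infeasibility can be detected with the same numerical check as before, now with b̄ = (11, 18, 7, 6)ᵀ:

```python
import numpy as np

# Basis matrix in order [x2, x1, x5, x6], as in the example.
B = np.array([[1, 1, 0, 0],
              [1, 3, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 0, 1]], dtype=float)
b_bar = np.array([11, 18, 7, 6], dtype=float)

x_B = np.linalg.solve(B, b_bar)   # [7.5, 3.5, 3.5, -1.5]
print(bool(np.all(x_B >= 0)))     # False: x6 < 0, the basis is no longer feasible
```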


Restoring optimality

If the new basis is infeasible, we can try to rebuild a feasible solution by doing simplex operations. In order to do so, we introduce artificial variables to obtain a canonical tableau.

Assume that after changing the resource vector, we obtain the following infeasible tableau:

BV   x₁   x₂   x₃   x₄   x₅   RHS
x₂   0    1    −1   2    0    20
x₅   0    0    −1   1    1    −5
x₁   1    0    1    −1   0    10
z    0    0    1    2    0    100

Restore optimality using simplex operations.

Example

We introduce a new artificial variable a₁. We then append a column for a₁ to the simplex tableau and initialise Phase 1 of the Two-Phase Simplex Method (i.e., we maximise w = −a₁). Observe that a₁ replaces x₅ as a basic variable, with the x₅-row multiplied by −1 so that its RHS becomes non-negative. Note that the system is not yet in canonical form.

BV   a₁   x₁   x₂   x₃   x₄   x₅   RHS
x₂   0    0    1    −1   2    0    20
a₁   1    0    0    1    −1   −1   5
x₁   0    1    0    1    −1   0    10
w    1    0    0    0    0    0    0

We convert to canonical form by subtracting the a₁-row from the w-row.

BV   a₁   x₁   x₂   x₃   x₄   x₅   RHS
x₂   0    0    1    −1   2    0    20
a₁   1    0    0    1    −1   −1   5
x₁   0    1    0    1    −1   0    10
w    0    0    0    −1   1    1    −5

One pivot operation (x₃ enters, a₁ leaves) gives the final Phase-1 tableau:

BV   a₁   x₁   x₂   x₃   x₄   x₅   RHS
x₂   1    0    1    0    1    −1   25
x₃   1    0    0    1    −1   −1   5
x₁   −1   1    0    0    0    1    5
w    1    0    0    0    0    0    0

We initialise Phase 2 with basic variables x₁, x₂, x₃ and with the z-coefficients from the first simplex tableau.


BV   x₁   x₂   x₃   x₄   x₅   RHS
x₂   0    1    0    1    −1   25
x₃   0    0    1    −1   −1   5
x₁   1    0    0    0    1    5
z    0    0    1    2    0    100

Restoring canonical form (subtracting the x₃-row from the z-row) we get:

BV   x₁   x₂   x₃   x₄   x₅   RHS
x₂   0    1    0    1    −1   25
x₃   0    0    1    −1   −1   5
x₁   1    0    0    0    1    5
z    0    0    0    3    1    95

which satisfies the optimality criterion. The new optimal solution is x* = (5, 25, 5, 0, 0) with z* = 95. Observe that the optimal value of z decreased in total by 5 as a result of the change in b, and that x₃ replaced x₅ as an optimal basic variable.
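The text restores feasibility with an artificial variable and a Phase-1 pass. An equivalent shortcut, sketched below rather than the two-phase procedure used above, is a single dual-simplex pivot: leave on the row with negative RHS, enter on a column with a negative entry in that row chosen by the dual ratio test. For this tableau both routes end in the same final tableau:

```python
import numpy as np

def pivot(T, r, s):
    """Gauss-Jordan pivot on entry (r, s) of tableau T, in place."""
    T[r] /= T[r, s]
    for i in range(len(T)):
        if i != r:
            T[i] -= T[i, s] * T[r]

# Infeasible tableau from the text: rows x2, x5, x1, z; columns x1..x5, RHS.
T = np.array([[0, 1, -1, 2, 0, 20],
              [0, 0, -1, 1, 1, -5],
              [1, 0, 1, -1, 0, 10],
              [0, 0, 1, 2, 0, 100]], dtype=float)

r = 1                                             # x5-row: the negative RHS
neg = [j for j in range(5) if T[r, j] < 0]        # candidate entering columns
s = min(neg, key=lambda j: T[-1, j] / -T[r, j])   # dual ratio test -> x3
pivot(T, r, s)

print(T[-1, -1])   # 95.0, the same optimum the two-phase repair reaches
```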


Range of b for which the basis remains optimal

A question that might be asked is: for which resource vectors b does the current basis, with associated matrix B, remain optimal? Clearly the answer is: for all b such that the RHS remains feasible. That is:

B⁻¹b ≥ 0.

Change in a single value of b

In the following, we obtain explicit expressions for changes in a single value of the resource vector. Let the new b be equal to the old one except that the new value of b_k is equal to b_k + δ:

b(new) = b + δe_k,

where e_k is the kth column of the identity matrix.

The new final RHS is given by

b̂(new) = B⁻¹b(new) = B⁻¹(b + δe_k).

This yields

b̂(new) = B⁻¹b + δB⁻¹e_k = B⁻¹b + δ(B⁻¹)·k,

where (B⁻¹)·k denotes the kth column of B⁻¹.

Since b̂(old) = B⁻¹b, we have

b̂(new) = b̂(old) + δ(B⁻¹)·k.

The old basis remains optimal after the change if and only if b̂(new) ≥ 0, which occurs if and only if

δ(B⁻¹)·k ≥ −b̂(old),

or equivalently

δ(B⁻¹)_{i,k} ≥ −b̂(old)_i, i = 1, 2, . . . , m,

where (B⁻¹)_{i,k} denotes the entry of B⁻¹ in the ith row and kth column, and b̂(old)_i is the ith component of b̂(old).

So the old basis remains optimal if and only if

δ ≥ −b̂(old)_i / (B⁻¹)_{i,k} for all i such that (B⁻¹)_{i,k} > 0,

and

δ ≤ −b̂(old)_i / (B⁻¹)_{i,k} for all i such that (B⁻¹)_{i,k} < 0.

We can write this as follows:


Let

I⁺_k = { i | i ∈ {1, ..., m}, (B⁻¹)_{i,k} > 0 }

and let

I⁻_k = { i | i ∈ {1, ..., m}, (B⁻¹)_{i,k} < 0 }.

Then the old basis remains optimal if and only if

max_{i ∈ I⁺_k} ( −b̂(old)_i / (B⁻¹)_{i,k} ) ≤ δ ≤ min_{i ∈ I⁻_k} ( −b̂(old)_i / (B⁻¹)_{i,k} ).


For

b = (48, 20, 8)ᵀ,  B⁻¹ =
2  4  −16
0  4  −8
0  −1  3

how much can we change the second component of b without changing the optimal solution?

Example

b̂(old) = B⁻¹b = (48, 16, 4)ᵀ,

and the second column of B⁻¹ is

(B⁻¹)·₂ = (4, 4, −1)ᵀ.

Since (B⁻¹)₁,₂ = 4 > 0, (B⁻¹)₂,₂ = 4 > 0 and (B⁻¹)₃,₂ = −1 < 0, we have I⁺₂ = {1, 2} and I⁻₂ = {3}, so

max_{i ∈ I⁺₂} ( −b̂(old)_i / (B⁻¹)_{i,2} ) = max { −48/4, −16/4 } = −4,

min_{i ∈ I⁻₂} ( −b̂(old)_i / (B⁻¹)_{i,2} ) = −b̂(old)₃ / (B⁻¹)₃,₂ = −4/(−1) = 4.

Thus

−4 ≤ δ ≤ 4.

Therefore the old basis (whatever it is) will remain optimal if and only if the value of b₂ is in the interval [20 − 4, 20 + 4] = [16, 24].
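The two bounds above can be computed mechanically from b̂(old) and the affected column of B⁻¹. A sketch with the numbers of this example (0-based indices, so b₂ is index 1):

```python
import numpy as np

Binv = np.array([[2, 4, -16],
                 [0, 4, -8],
                 [0, -1, 3]], dtype=float)
b = np.array([48, 20, 8], dtype=float)
k = 1                                  # 0-based index of b2

b_hat = Binv @ b                       # b-hat(old) = [48, 16, 4]
col = Binv[:, k]                       # column of B^{-1} for b2: [4, 4, -1]

lo = max(-b_hat[i] / col[i] for i in range(len(b)) if col[i] > 0)
hi = min(-b_hat[i] / col[i] for i in range(len(b)) if col[i] < 0)

print(lo, hi, b[k] + lo, b[k] + hi)    # -4.0 4.0 16.0 24.0
```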


Direct method

To determine the critical values of δ, we simply solve the inequalities

b̂(new) = B⁻¹b(new) = b̂(old) + δ(B⁻¹)·k ≥ 0.

Example revisited:

b(old) = b = (48, 20, 8)ᵀ,  b(new) = (48, 20 + δ, 8)ᵀ.

b̂(new) = B⁻¹b(new) =
2  4  −16     48         48 + 4δ
0  4  −8   ·  20 + δ  =  16 + 4δ
0  −1  3      8          4 − δ

Obs: Since a single component changed, we can use just the affected column of B⁻¹:

b̂(new) = b̂(old) + δ(B⁻¹)·₂ = (48, 16, 4)ᵀ + δ(4, 4, −1)ᵀ = (48 + 4δ, 16 + 4δ, 4 − δ)ᵀ.

Thus the non-negativity criterion for the basic variables to remain the same is

48 + 4δ ≥ 0, hence δ ≥ −12,
16 + 4δ ≥ 0, hence δ ≥ −4,
4 − δ ≥ 0, hence δ ≤ 4,

which is equivalent to

−4 ≤ δ ≤ 4,

and so

16 ≤ b₂ ≤ 24.

Conclusion: The old basis remains optimal if and only if b₂ is in the interval [16, 24].


Changes in the cost vector

As before, for an optimal basis B, the optimal tableau reads:

BV    x_N                 x_B   RHS
x_B   B⁻¹N                I     B⁻¹b
z     −c_N + c_B B⁻¹N     0     c_B B⁻¹b

What changes occur in the tableau if the original cost vector c is changed to c̄?

BV    x_N                 x_B   RHS
x_B   B⁻¹N                I     B⁻¹b
z     −c̄_N + c̄_B B⁻¹N     0     c̄_B B⁻¹b

Question: When is this new tableau feasible?

Question: When is this new tableau optimal?

The RHS B⁻¹b is unchanged, so the tableau remains feasible. If all values in the z-row remain non-negative (non-positive), the maximisation (minimisation) problem remains optimal.


Consider the problem:

z = max 3x₁ + 2x₂
s.t.
x₁ + x₂ ≤ 9,
3x₁ + x₂ ≤ 18,
x₁ ≤ 7,
x₂ ≤ 6,
x₁, x₂ ≥ 0,

represented in standard form as:

z = max 3x₁ + 2x₂
s.t.
x₁ + x₂ + x₃ = 9,
3x₁ + x₂ + x₄ = 18,
x₁ + x₅ = 7,
x₂ + x₆ = 6,
x₁, x₂, x₃, x₄, x₅, x₆ ≥ 0.

What happens to the optimal solution if the actual value of the first profit coefficient is 7 instead of 3?

Example

We saw that the current optimal solution was given by:

B = [2, 1, 5, 6], N = [4, 3],

B =
1 1 0 0
1 3 0 0
0 1 1 0
1 0 0 1

N =
0 1
1 0
0 0
0 0

c_B = (2, 3, 0, 0)ᵀ, c_N = (0, 0)ᵀ,

and the reduced costs of the non-basic variables

c̄_N = c_N − c_B B⁻¹N = (−0.5, −1.5).


Remember B = [2, 1, 5, 6] and x_B = B⁻¹b = (4.5, 4.5, 2.5, 1.5)ᵀ.

Changing the cost vector from c = (3, 2, 0, 0, 0, 0) to c̄ = (7, 2, 0, 0, 0, 0), and remembering that the basic and non-basic indices are B = [2, 1, 5, 6], N = [4, 3], we have c̄_B = (2, 7, 0, 0) and c̄_N = (0, 0). We can recompute the reduced costs of the non-basic variables as:

c̄_N − c̄_B B⁻¹N = (−2.5, 0.5).

As the reduced cost of the second non-basic variable (x₃) is now positive, we need to bring it into the basis to regain optimality.

To find the leaving variable, we compute:

d = B⁻¹A₃ = (1.5, −0.5, 0.5, −1.5)ᵀ,

and the leaving variable is given by the ratio test:

argmin_{i | i ∈ B, d_i > 0} { x_i / d_i } = argmin_{i ∈ {2, 5}} { 4.5/1.5, 2.5/0.5 },

which indicates that x₂ leaves the basis.

The new basis is:

B = [3, 1, 5, 6], N = [4, 2],

B =
1 1 0 0
0 3 0 0
0 1 1 0
0 0 0 1

N =
0 1
1 1
0 0
0 1

c_B = (0, 7, 0, 0)ᵀ, c_N = (0, 2)ᵀ,

and the reduced costs of the non-basic variables are

c_N − c_B B⁻¹N = (−7/3, −1/3) ≈ (−2.33, −0.33),


which indicates that the solution is optimal, given by:

x_B = (x₃, x₁, x₅, x₆) = (3, 6, 1, 6)ᵀ, x_N = (x₄, x₂) = (0, 0)ᵀ.

[Figure omitted: the feasible region in the (x₁, x₂)-plane with the lines x₃ = 0, x₄ = 0, x₅ = 0, x₆ = 0 and the old and new objective directions.]

Figure 8.3: Pictorial representation of the change in the first cost parameter.
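The whole re-optimisation step can be checked numerically from the standard-form data. The sketch below (numpy, 0-based indices) recomputes the reduced costs after the cost change, runs the ratio test, and evaluates the new basic solution:

```python
import numpy as np

# Standard-form constraint matrix A (columns x1..x6), RHS b, changed costs c-bar.
A = np.array([[1, 1, 1, 0, 0, 0],
              [3, 1, 0, 1, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1]], dtype=float)
b = np.array([9, 18, 7, 6], dtype=float)
c = np.array([7, 2, 0, 0, 0, 0], dtype=float)

basic = [1, 0, 4, 5]                      # 0-based indices of x2, x1, x5, x6
nonbasic = [3, 2]                         # x4, x3

B = A[:, basic]
y = np.linalg.solve(B.T, c[basic])        # y = c_B B^{-1} (simplex multipliers)
red = c[nonbasic] - y @ A[:, nonbasic]    # reduced costs: [-2.5, 0.5]
s = nonbasic[int(np.argmax(red))]         # x3 enters (the positive reduced cost)

d = np.linalg.solve(B, A[:, s])           # direction: [1.5, -0.5, 0.5, -1.5]
x_B = np.linalg.solve(B, b)               # current basic values [4.5, 4.5, 2.5, 1.5]
ratios = [x_B[i] / d[i] if d[i] > 0 else np.inf for i in range(len(basic))]
leave = int(np.argmin(ratios))            # row 0 -> x2 leaves

basic[leave] = s                          # new basis {x3, x1, x5, x6}
print(np.linalg.solve(A[:, basic], b))    # [3. 6. 1. 6.]
```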


Range of δ for which the basis remains optimal

In the following, we obtain explicit expressions for changes in a single value of the cost vector. Let the new c be equal to the old one except that the new value of c_k is equal to c_k + δ:

c(new) = c + δe_k,

where e_k is the kth column of the identity matrix.

We divide the discussion into two cases.

Variable x_k is not in the basis

If the original cost coefficient c_k of a variable x_k that is not in the old optimal basis changes from c_k to c_k + δ, then

(−c̄(new))_k = c_B B⁻¹A_k − (c_k + δ) = (−c̄(old))_k − δ.

Thus, for a maximisation problem, the basis (and indeed the optimal solution) remains the same provided that

(−c̄(old))_k − δ ≥ 0,

which is the same as saying that

δ ≤ (−c̄(old))_k.


Suppose that for an optimal solution of a linear program, variable x₁ is non-basic and its reduced cost is c̄₁ = −2. How much can we change the value of c₁ without changing the basis?

Example

First we observe that our problem is a maximisation problem (why?).

Here

(−c̄(old))₁ = 2,

and so

δ ≤ (−c̄(old))₁

is the same as

δ ≤ 2.

This means that as long as the new c₁ is less than or equal to the old c₁ plus 2, i.e. c(new)₁ ∈ (−∞, c(old)₁ + 2], the optimal solution will not be affected.

Note that we do not need to know the old value of c₁ to reach this conclusion.

Variable x_k is in the basis

If the original cost coefficient c_k of a variable x_k that is in the old optimal basis changes from c_k to c_k + δ, then

−c̄(new) = (c_B + δe_r)B⁻¹A − c
         = (c_B B⁻¹A − c) + δ e_r B⁻¹A
         = −c̄(old) + δ e_r B⁻¹A
         = −c̄(old) + δ(B⁻¹A)_{r·},

where r corresponds to the row of x_k in the final tableau, e_r is the rth row of the identity matrix, and (B⁻¹A)_{r·} is the rth row of B⁻¹A. (This is because x_k is basic, so c_k is a component of c_B.)

Therefore, if we have a maximisation problem, then the optimal solution remains optimal if and only if

(−c̄(old))_j + δ(B⁻¹A)_{r,j} ≥ 0, for all non-basic j.

Equivalently,

δ ≥ c̄(old)_j / (B⁻¹A)_{r,j}, for all non-basic j with (B⁻¹A)_{r,j} > 0,

and

δ ≤ c̄(old)_j / (B⁻¹A)_{r,j}, for all non-basic j with (B⁻¹A)_{r,j} < 0.


Thus the optimal solution remains optimal if and only if

max_j ( c̄(old)_j / (B⁻¹A)_{r,j} ) ≤ δ ≤ min_j ( c̄(old)_j / (B⁻¹A)_{r,j} ),

where the max is taken over all non-basic j such that (B⁻¹A)_{r,j} > 0 and the min over all non-basic j such that (B⁻¹A)_{r,j} < 0.


A direct analysis using the simplex tableau

The formula above determines the range that δ can fall in without affecting the optimal solution. We can also observe that adding δ to one cost coefficient in the initial simplex tableau results in −δ being added to the corresponding z-row entry of the final tableau (recall that the z-row is initialised with the negatives of the cost coefficients), as only row additions are performed on the z-row. We can, therefore, just apply the change to the final tableau and restore the canonical form.

Consider the optimal tableau:

BV   x₁   x₂   x₃   x₄   x₅   x₆   RHS
x₅   –    –    –    0    1    0    –
x₆   –    –    –    0    0    1    –
x₄   3    −4   0    1    0    0    –
z    2    3    4    0    0    0    –

What is the effect of adding δ to c₄?

Example

If we add δ to the old c₄, we would have instead

BV   x₁   x₂   x₃   x₄   x₅   x₆   RHS
x₅   –    –    –    0    1    0    –
x₆   –    –    –    0    0    1    –
x₄   3    −4   0    1    0    0    –
z    2    3    4    −δ   0    0    –

Restoring the canonical form of the x₄ column (adding δ times the x₄-row to the z-row), we obtain

BV   x₁       x₂       x₃   x₄   x₅   x₆   RHS
x₅   –        –        –    0    1    0    –
x₆   –        –        –    0    0    1    –
x₄   3        −4       0    1    0    0    –
z    2 + 3δ   3 − 4δ   4    0    0    0    –

Thus, as we have a maximisation problem (why?), to ensure that the current basis remains optimal, we need

2 + 3δ ≥ 0 and 3 − 4δ ≥ 0,

so that

−2/3 ≤ δ ≤ 3/4.


Therefore, the old optimal solution remains optimal if we keep the change δ in c₄ in the interval [−2/3, 3/4], i.e. c(new)₄ ∈ [c₄ − 2/3, c₄ + 3/4] (where c₄ means c(old)₄).

From the tableau above we can see that if δ < −2/3 then x₁ will enter the basis, and if δ > 3/4 then x₂ will enter the basis.
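The same interval can be read off mechanically from the final z-row and the tableau row of the changed basic variable. A small sketch with the numbers of this example:

```python
import numpy as np

z_row = np.array([2.0, 3.0, 4.0, 0.0, 0.0, 0.0])    # final z-row entries
x4_row = np.array([3.0, -4.0, 0.0, 1.0, 0.0, 0.0])  # tableau row of x4
nonbasic = [0, 1, 2]                                 # columns of x1, x2, x3

# After adding delta * x4_row to the z-row, every non-basic entry must stay >= 0.
lo = max((-z_row[j] / x4_row[j] for j in nonbasic if x4_row[j] > 0),
         default=-np.inf)
hi = min((-z_row[j] / x4_row[j] for j in nonbasic if x4_row[j] < 0),
         default=np.inf)

print(lo, hi)   # -2/3 and 3/4
```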


An example of parametric programming

So far we have considered only the impact of discrete changes in the cost coefficients or the RHS values. Often one needs to understand the impact of continuous, systematic changes in these values on the optimal solution. The study of such problems is usually called parametric (linear) programming.

We will not discuss parametric programming in detail. Instead, we will use an example to illustrate the main idea for parametric programs where the cost coefficients contain parameters.

It is known that

max z = x₁ + 2x₂
s.t.
x₁ + 3x₂ ≤ 8,
x₁ + x₂ ≤ 4,
x₁, x₂ ≥ 0

has the following optimal tableau (which can be obtained by using the Simplex Method):

BV   x₁   x₂   x₃     x₄     RHS
x₂   0    1    1/2    −1/2   2
x₁   1    0    −1/2   3/2    2
z    0    0    1/2    1/2    6

where x₃ and x₄ are slack variables.

Example

Solve the following LP problem, where θ is a parameter, 0 ≤ θ < ∞:

max z(θ) = (1 + θ)x₁ + (2 − θ)x₂
s.t.
x₁ + 3x₂ ≤ 8,
x₁ + x₂ ≤ 4,
x₁, x₂ ≥ 0.


Observe that

• changes occur in both cost coefficients,

• there is a parameter θ involved, and

• the functional constraints are unchanged.

When θ = 0 the parametric problem is the same as the non-parametric problem. Since they have the same functional constraints, the optimal tableau of the latter gives a basic feasible solution to the former. However, we need to update the z-row.

Since for the optimal basis we have

z(θ) = (1 + θ)x₁ + (2 − θ)x₂,

we have to add −θ to the z-row below column x₁ and θ to the z-row below column x₂.¹ This gives the following tableau:

BV   x₁   x₂   x₃     x₄     RHS
x₂   0    1    1/2    −1/2   2
x₁   1    0    −1/2   3/2    2
z    −θ   θ    1/2    1/2    6

Restoring canonical form by the elementary row operation R′₃ = R₃ − θR₁ + θR₂, we obtain:

BV   x₁   x₂   x₃        x₄        RHS
x₂   0    1    1/2       −1/2      2
x₁   1    0    −1/2      3/2       2
z    0    0    1/2 − θ   1/2 + 2θ  6

This tableau is optimal if and only if

1/2 − θ ≥ 0 and 1/2 + 2θ ≥ 0.

Thus the tableau above is optimal if and only if

0 ≤ θ ≤ 1/2,

and in this case

x*(θ) = (2, 2), z*(θ) = 6.

Since the upper bound 1/2 of θ corresponds to x₃, x₃ should be the entering variable when θ > 1/2. The ratio test shows that we should pivot on the (r = 1, s = 3) entry and take x₂ out of the basis. Pivoting on this entry we obtain:

BV   x₁   x₂        x₃   x₄      RHS
x₃   0    2         1    −1      4
x₁   1    1         0    1       4
z    0    −1 + 2θ   0    1 + θ   4 + 4θ

This tableau is optimal if and only if

−1 + 2θ ≥ 0 and 1 + θ ≥ 0,

that is,

1/2 ≤ θ.

In this case we have

x*(θ) = (4, 0), z*(θ) = 4 + 4θ.

¹ Note that we initialised the z-row with the negatives of the cost coefficients. If we had initialised with the original coefficients, we would add θ and −θ, respectively.


Summary: When 0 ≤ θ ≤ 1/2, the optimal solution is

x*(θ) = (2, 2)

and the optimal value is

z*(θ) = 6.

When 1/2 ≤ θ < ∞, the optimal solution is

x*(θ) = (4, 0)

and the optimal value is

z*(θ) = 4 + 4θ.
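Because the feasible region does not depend on θ, the two regimes can be checked by evaluating z(θ) at the region's vertices (enumerated here by hand from the two constraints; this is a verification device, not the tableau method used above):

```python
import numpy as np

# Vertices of {x1 + 3*x2 <= 8, x1 + x2 <= 4, x >= 0}.
vertices = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 2.0], [0.0, 8.0 / 3.0]])

def best(theta):
    """Return the maximising vertex and value of (1+theta)x1 + (2-theta)x2."""
    z = (1 + theta) * vertices[:, 0] + (2 - theta) * vertices[:, 1]
    i = int(np.argmax(z))
    return vertices[i], float(z[i])

print(best(0.25))   # optimum (2, 2) with z = 6 for theta in [0, 1/2]
print(best(1.0))    # optimum (4, 0) with z = 8 = 4 + 4*theta for theta >= 1/2
```

At θ = 1/2 both vertices tie with z = 6, confirming the breakpoint found above.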


Bibliography

[1] Jorge Luis Borges et al. Of exactitude in science. Quaderns (Barcelona: Collegi d'Arquitectes de Catalunya), pages 12–12, 2002.

[2] Hans Nielsen. Algorithms for linear optimization. Technical report, Technical University of Denmark, 1999.

[3] Gerard Sierksma and Yori Zwols. Linear and Integer Optimization: Theory and Practice, Third Edition. Chapman and Hall/CRC, 3rd edition, 2015.

[4] Kjell B. Zandin and Harold B. Maynard. Maynard's Industrial Engineering Handbook. McGraw-Hill Education, 5th edition, June 2001.

