MAST20018
Discrete Maths and Operations Research:
Linear Programming
1 / 423
Lecture Outline
Introduction to Linear Programming
The Geometry of Linear Programming
Geometry of LP in Higher Dimensions
Basic Feasible Solutions
Fundamental Theorem of Linear Programming
Preview of the Simplex Method
The Simplex Method
Solution Possibilities
Non-Standard Formulations
Duality Theory
Sensitivity Analysis
2 / 423
§1 – Introduction to Linear Programming
Synopsis
• What is mathematical optimisation?
• What is linear programming?
• What do we mean by linear constraints and a linear objective
function?
3 / 423
§1.1 – Applications of Mathematical Optimisation
Examples of optimisation problems:
• Airline scheduling;
• Shipping;
• Public transportation;
• Supply chain and logistics;
• Defence;
• Mining;
• Electricity and water resources;
• Emergency and disaster management.
4 / 423
§1.2 – What is Mathematical Optimisation?
There are two aspects to optimisation: modelling and solving.
An optimisation model contains:
• Decision variables
• Objective function
• Constraints
5 / 423
§1.3 – Types of Optimisation Formulations
Optimisation models can be placed into distinct classes depending
on the best way to express the objective function or the
constraints.
For deterministic problems, we are most interested in determining
whether the objective function and constraints can be expressed
linearly or nonlinearly, and whether the variables are continuous
or integer-valued.
These factors determine the class of algorithms available to solve
the problem.
6 / 423
Types of Optimisation Formulations
NLP: Nonlinear program        LP: Linear program      ILP: Integer LP
max f(x) or min f(x)          min cTx                 min cTx
gi(x) = 0, i = 1, ..., k      Ax = a                  Ax = a
hj(x) ≤ 0, j = 1, ..., m      Bx ≤ b                  Bx ≤ b
                              x ≥ 0                   x ≥ 0
                              x ∈ Rn                  x ∈ Zn
In LP and ILP, c ∈ Rn, and A and B are real matrices with n columns.
7 / 423
§1.4 – Solving Linear Programs
There are many linear programming solvers available both
commercially and free for academic use.
Examples of commercial solvers (also with free academic licences)
are
• CPLEX,
• Gurobi, and
• FICO Xpress
Common mathematical software, such as MATLAB, has packages
for solving linear programs.
Microsoft Excel Solver also solves linear (integer) and nonlinear
optimisation problems.
8 / 423
Computational speed-up
Bixby [1] tested the various versions of CPLEX on the same
computer. One of the data sets was a linear program with 410,640
constraints and 1,584,000 variables.
Year   Version     Solve time
1988   CPLEX 1.0   29.8 days
1997   CPLEX 5.0   1.5 hours
2002   CPLEX 8.0   86.7 seconds
2003   CPLEX 9.0   59.1 seconds
According to Bixby, the algorithmic improvement (machine
independent) is 3,300 times and machine improvement is 1,600
times, giving a total improvement of 5.2 million times in a period
of 16 years (1988 to 2004)!!
[1] Bob Bixby, Operations Research, Jan 2002, pp. 1-13, updated in 2004.
9 / 423
§1.5 – Linear Programming
These are optimisation problems whose objective functions and
constraints are linear.
Definition 1.1
A real-valued function f, defined on Ω ⊆ Rn, is a linear function if
and only if there exists c ∈ Rn such that
f(x) = c1x1 + c2x2 + ... + cnxn.
10 / 423
Definition 1.2
A linear equation is an expression of the form
c1x1 + c2x2 + ...+ cnxn = b
A linear inequality is an expression of the form
c1x1 + c2x2 + ...+ cnxn ≤ b
or
c1x1 + c2x2 + ...+ cnxn ≥ b.
11 / 423
An example of a linear optimisation problem with a linear objective
function and linear constraints is:
Example 1.1
z∗ := max 4x1 + 3x2
subject to
x1 + x2 ≤ 30
x1 ≥ 0
x2 ≥ 0.
12 / 423
§1.6 – The Diet Problem
A student wants to plan a nutritious diet, but they are on a limited
budget: they want to spend as little money as possible.
Their minimum nutritional daily requirements are as follows: 2000
calories, 55g protein, 800mg calcium. The student is considering
the following foods [2]:
Food            Serving size   Energy (Cal)   Protein (g)   Calcium (mg)   Price per serving ($)
Oatmeal         28g            110            4             2              0.30
Chicken         100g           205            32            12             2.40
Eggs            2 large        160            13            54             1.30
Whole milk      250ml          160            8             285            0.90
Cherry pie      170g           420            4             22             2.00
Pork and beans  260g           260            14            80             1.90
[2] Taken from Bob Bixby's notes from the Combinatorial Optimisation at
Work meeting,
http://co-at-work.zib.de/berlin2009/downloads/2009-09-24/
13 / 423
The Diet Problem
We can represent the number of servings of each type of food in
the diet by the variables:
x1 the daily number of servings of Oatmeal
x2 the daily number of servings of Chicken
x3 the daily number of servings of Eggs
x4 the daily number of servings of Milk
x5 the daily number of servings of Cherry Pie
x6 the daily number of servings of Pork and Beans
14 / 423
The Diet Problem
and can express the problem as a linear program as follows:
Minimise 0.3x1 + 2.40x2 + 1.3x3 + 0.9x4 + 2.0x5 + 1.9x6,
subject to
110x1 + 205x2 + 160x3 + 160x4 + 420x5 + 260x6 ≥ 2000
4x1 + 32x2 + 13x3 + 8x4 + 4x5 + 14x6 ≥ 55
2x1 + 12x2 + 54x3 + 285x4 + 22x5 + 80x6 ≥ 800
x1, x2, x3, x4, x5, x6 ≥ 0.
Here the first three inequalities are functional constraints.
The constraints x1, x2, x3, x4, x5, x6 ≥ 0 are not functional
constraints.
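As a sketch of how such a formulation is handed to a solver (assuming SciPy is available; its linprog routine minimises cTx subject to A_ub x ≤ b_ub, so the ≥ constraints are negated):

```python
# A sketch: solving the diet LP with SciPy (assumes scipy is installed).
from scipy.optimize import linprog

# Objective: price per serving of each food.
c = [0.30, 2.40, 1.30, 0.90, 2.00, 1.90]

# The nutritional requirements are ">=" constraints; linprog expects
# A_ub @ x <= b_ub, so both sides are multiplied by -1.
A_ub = [
    [-110, -205, -160, -160, -420, -260],  # calories >= 2000
    [-4,   -32,  -13,  -8,   -4,   -14],   # protein  >= 55
    [-2,   -12,  -54,  -285, -22,  -80],   # calcium  >= 800
]
b_ub = [-2000, -55, -800]

# Non-negativity is linprog's default bound (0, None) for every variable.
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)  # optimal servings per food and minimum daily cost
```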
15 / 423
§1.7 – Formulation of a Linear Program
We are moving towards a convenient form for expressing linear
programming problems. Let
• m denote the number of functional constraints
• n denote the number of decision variables
• bi denote the available level of resource i, i = 1, 2, 3, ...,m
• xj denote the level of activity j, j = 1, 2, 3, ..., n
• z denote the value of the objective function
• cj denote the return/cost per unit of activity j
• aij denote the amount of resource i consumed/produced by
each unit of activity j.
16 / 423
Assumptions of LP Problems
• Activity levels are continuous (divisibility). For activity j, the
activity level xj is a continuous variable.
(However, quite often a problem requires that the activity
levels be integers. In this case, a continuous formulation of
such a problem is only an approximation.)
• The contribution to the objective function (and to the LHS of
each constraint) from decision variable xj is proportional to
the level of activity j (proportionality).
• Independence of the effects of activities (additivity):
f(x1, ..., xn) = c1x1 + ...+ cjxj + ...+ cnxn,
ai,1x1 + ...+ ai,jxj + ...+ ai,nxn ≤ bi,
or ai,1x1 + ...+ ai,jxj + ...+ ai,nxn ≥ bi.
• The input parameters are known with certainty.
17 / 423
Steps to Take
The steps that need to be taken are:
1. Choose decision variables
These are the items about which we need to make decisions.
For linear programs they are continuous. For example, we may
need to determine the start times for several activities. We
could then let our decision variables be: xi := the start time
for activity i. You should remember to include units in
your definition of the decision variables.
2. Write down the objective function
This is a linear expression that we wish to either maximise or
minimise. For example, we wish to minimise the makespan of
a project, or we wish to maximise profit.
3. Write down the constraints (and do not forget to write that
the variables are non-negative).
18 / 423
Steps to Take
Example 1.2
A steel company must decide how to allocate next week’s time on
a rolling mill, which is a machine that takes slabs of steel and can
produce either bands (at 200 tonnes/hour) or coils (at 140
tonnes/hour).
Bands and coils can be sold for $25/tonne and $30/tonne
respectively.
Based on currently booked orders, the company must produce at
least 6000 tonnes of bands and 4000 tonnes of coils.
Given that there are 40 hours of production time this week, how
many tonnes of bands and coils should be produced to yield the
greatest revenue?
19 / 423
Steps to Take
• Choose the decision variables:
◦ Let x1 be the amount of time (in hours) devoted to making
bands
◦ Let x2 be the amount of time (in hours) devoted to making
coils.
• Write down the objective function:
◦ This is max (25 × 200)x1 + (30 × 140)x2 = max 5000x1 + 4200x2
• Write down the constraints:
◦ We need at least 6000 tonnes of bands, so 200x1 ≥ 6000 and
4000 tonnes of coils, so 140x2 ≥ 4000. We are limited to 40
hours of production time, so x1 + x2 ≤ 40. The decision
variables must be nonnegative, i.e. x1, x2 ≥ 0.
Question: Is this LP feasible?!
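The question can also be probed numerically; a sketch using SciPy's linprog (status code 2 in the result indicates infeasibility):

```python
# A sketch: checking feasibility of the rolling-mill LP with SciPy.
from scipy.optimize import linprog

c = [-5000, -4200]        # max 5000*x1 + 4200*x2 as a minimisation
A_ub = [[-200, 0],        # 200*x1 >= 6000, negated
        [0, -140],        # 140*x2 >= 4000, negated
        [1, 1]]           # x1 + x2 <= 40
b_ub = [-6000, -4000, 40]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.status)         # 2 here means the LP is infeasible
```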
20 / 423
Example 1.3 (The (balanced) transportation problem)
This is a classical Operations Research problem with
• a supply of some commodity;
• destinations where the commodity is needed.
The total available supply equals total demand.
The formulation can be adapted for general allocation and
scheduling problems.
21 / 423
(The facts in this example have been amended to simplify the
problem.)
Adelaide has two water catchment storage facilities:
• Storage 1 can store up to 400 megalitres per day
• Storage 2 can store up to 500 megalitres per day
We assume that the storage facilities are topped up by a third dam
with infinite capacity – the storage facilities can therefore supply
any amount up to their maximum on demand.
22 / 423
Three secondary dams are supplied from these two facilities:
• Barossa (Dam 1) needs at least 300 megalitres per day
• Happy Valley (Dam 2) needs at least 200 megalitres per day
• Kangaroo Creek (Dam 3) needs at least 400 megalitres per
day
23 / 423
The distances between storage facilities and the secondary dams
are (in kilometres).
Dam 1 Dam 2 Dam 3
Storage Facility 1 75 40 85
Storage Facility 2 20 55 75
24 / 423
Task 1
Formulate a linear program that minimises the total transportation
distances to meet the demands of the secondary dams, such that
capacities of the storage facilities are not violated.
25 / 423
Analysis
• Decision Variables:
Let xij be the quantity of water delivered from catchment i to
location j, i = 1, 2; j = 1, 2, 3.
• Objective function:
f(x) := 75x11 + 40x12 + 85x13
+20x21 + 55x22 + 75x23.
26 / 423
• Constraints:
Production capacities:
x11 + x12 + x13 ≤ 400 (storage 1)
x21 + x22 + x23 ≤ 500 (storage 2).
27 / 423
• Location requirements:
x11 + x21 ≥ 300 (location 1)
x12 + x22 ≥ 200 (location 2)
x13 + x23 ≥ 400 (location 3).
• Non-negativity:
x11, x12, x13, x21, x22, x23 ≥ 0.
28 / 423
Complete Formulation
z∗ := min z = 75x11 + 40x12 + 85x13 + 20x21 + 55x22 + 75x23,
such that
x11 + x12 + x13 ≤ 400
x21 + x22 + x23 ≤ 500
x11 + x21 ≥ 300
x12 + x22 ≥ 200
x13 + x23 ≥ 400
x11, x12, x13, x21, x22, x23 ≥ 0.
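A sketch of solving this LP with SciPy's linprog (the ≥ demand constraints are negated into ≤ form); for these data the minimum works out to 46,000 megalitre-kilometres:

```python
# A sketch: the transportation LP, x = (x11, x12, x13, x21, x22, x23).
from scipy.optimize import linprog

c = [75, 40, 85, 20, 55, 75]      # distances
A_ub = [
    [1, 1, 1, 0, 0, 0],           # storage 1 capacity <= 400
    [0, 0, 0, 1, 1, 1],           # storage 2 capacity <= 500
    [-1, 0, 0, -1, 0, 0],         # dam 1 demand >= 300, negated
    [0, -1, 0, 0, -1, 0],         # dam 2 demand >= 200, negated
    [0, 0, -1, 0, 0, -1],         # dam 3 demand >= 400, negated
]
b_ub = [400, 500, -300, -200, -400]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)
print(res.x, res.fun)             # optimal flows; res.fun == 46000.0
```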
29 / 423
§2 – The Geometry of Linear Programming
Synopsis
• What is a feasible region?
• How do we obtain the optimal solution via the graphical
method?
• Are there pathological cases?
30 / 423
§2.1 – The Feasible Region
There are strong relationships between the geometric and algebraic
features of LP problems.
We shall first examine this aspect in two dimensions (n = 2) and
then consider higher dimensions. This latter step will involve an
understanding of the relationship between geometric and algebraic
aspects of linear programming problems.
31 / 423
Example 2.1
Consider the problem
z∗ := max z = 4x1 + 3x2
subject to
2x1 + x2 ≤ 40 (production machine time)
x1 + x2 ≤ 30 (production packaging time)
x1 ≤ 15 (market demand)
x1 ≥ 0
x2 ≥ 0.
32 / 423
• First constraint:
2x1 + x2 ≤ 40
• Corresponding hyperplane:
2x1 + x2 = 40
• Corresponding halfspace:
2x1 + x2 ≤ 40
33 / 423
• Second constraint:
x1 + x2 ≤ 30
• Corresponding hyperplane:
x1 + x2 = 30
• Corresponding halfspace:
x1 + x2 ≤ 30
34 / 423
• Third constraint:
x1 ≤ 15
• Corresponding hyperplane:
x1 = 15
• Corresponding halfspace:
x1 ≤ 15
35 / 423
Taking note of the nonnegativity constraints, the overall feasible
region is therefore:
[Figure: the overall feasible region]
36 / 423
Definition 2.1
For an LP problem with n variables, a vector x ∈ Rn is called a
feasible solution if it satisfies all constraints of the problem.
Definition 2.2
The set of all feasible solutions of an LP problem is called the
feasible region.
37 / 423
§2.2 – The Graphical Method
We can use a graphical method to solve LP problems in two
dimensions.
• The objective function is
f(x) = z = 4x1 + 3x2
Hence
x2 = (z − 4x1)/3
so that, for a given value of z, the level curve is a straight
line with slope −4/3. We can plot it for various values of z.
We can identify the (x1, x2) pair that yields the largest
feasible value for z.
38 / 423
[Figure: level curves of z = 4x1 + 3x2 plotted over the feasible region]
39 / 423
Important Observations
• The graphical method can be used to identify the hyperplanes
specifying the optimal solution.
• The optimal solution itself is determined by solving the
respective equations.
• Don’t be tempted to “read” the optimal solution directly from
the graph.
• The optimal solution in this example is an extreme point of
the feasible region.
40 / 423
Questions
• What guarantee is there that an optimal solution exists?
• In fact, is there any a priori guarantee that a feasible solution
exists?
• Could there be more than one optimal solution?
41 / 423
§2.3 – ‘Pathological’ Cases (infinitely many solutions)
For two-variable problems the above method works well. However
‘pathological’ cases still exist.
What if z = 4x1 + 2x2 in the previous example?
There are infinitely many points in the feasible region where
z = 80.
42 / 423
§2.3 – “Pathological” Cases (infeasibility)
What if we had another constraint 4x1 + 3x2 ≥ 150?
In this case the feasible region is empty. Since there is no feasible
solution, there is definitely no optimal solution.
43 / 423
§2.3 – “Pathological” Cases (LP is unbounded)
What if we only had the constraint 4x1 + 3x2 ≥ 101?
In this case the feasible region is unbounded. For certain
optimization problems (for example, if we are minimizing
z = 4x1 + 3x2) there still can be an optimal solution. However, if
the objective function increases as we ‘head further into the
unbounded region’, no optimal solution exists.
44 / 423
§2.3 – “Pathological” Cases (feasible region is not closed)
Sometimes the feasible region is not closed. For example,
2x1 + x2 < 40
x1 + x2 < 30
x1 ≤ 15
x1 ≥ 0
x2 ≥ 0
[Figure: the feasible region, which is not closed]
The ‘corner point’ at (10, 20) is not feasible.
BUT, we will only ever consider closed regions in practice.
45 / 423
§3 – The Geometry of LP in Higher Dimensions
Synopsis
• What are convex sets?
• What is a hyperplane?
• What is a halfspace?
• What is linear independence?
• What is a polytope?
• How do we get from standard form to canonical form?
• What are slack variables?
46 / 423
§3.1 – The Geometry in Higher Dimensions
Let x(1), ..., x(k) be a collection of points in Rn.
Definition 3.1
A point w such that
w = α1x(1) + α2x(2) + ... + αkx(k),
is a linear combination of x(1), ..., x(k).
Definition 3.2
A point w such that
w = α1x(1) + α2x(2) + ... + αkx(k),
where 0 ≤ αi ≤ 1 and ∑i αi = 1, is a convex combination of
x(1), ..., x(k).
47 / 423
Example 3.1
The point x = (3, 2, 1) is a linear combination of
x(1) = (1, 0, 0)
x(2) = (0, 1, 0)
x(3) = (0, 0, 1),
using the coefficients α1 = 3, α2 = 2 and α3 = 1.
The point y = (9, 4, 1) is not a linear combination of
x(1) = (3, 1, 0)
x(2) = (2, 4, 0)
x(3) = (4, 3, 0).
Why?
48 / 423
Example 3.2
The point x = (1/2, 1/3, 1/6) is a convex combination of
x(1) = (1, 0, 0)
x(2) = (0, 1, 0)
x(3) = (0, 0, 1),
using the coefficients α1 = 1/2, α2 = 1/3 and α3 = 1/6.
The point y = (3, 3, 2) is a convex combination of
x(1) = (3, 1, 0)
x(2) = (2, 4, 0)
x(3) = (4, 4, 6).
Why?
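The second claim can be checked numerically; a NumPy sketch (solving for the coefficients works here because these three vectors happen to be linearly independent, so the coefficients are unique):

```python
# A sketch: verifying that y = (3, 3, 2) is a convex combination.
import numpy as np

X = np.column_stack([(3, 1, 0), (2, 4, 0), (4, 4, 6)])  # x(i) as columns
y = np.array([3, 3, 2])
alpha = np.linalg.solve(X, y)     # coefficients with X @ alpha = y
print(alpha)                      # [1/3, 1/3, 1/3]
# A convex combination needs alpha >= 0 with the entries summing to 1:
print(np.all(alpha >= 0) and np.isclose(alpha.sum(), 1.0))  # True
```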
49 / 423
Definition 3.3
The line segment joining two points y, z in Rn is the collection of
all points x such that
x = λy + (1− λ)z
for some 0 ≤ λ ≤ 1.
[Figure: the line segment joining y and z]
50 / 423
Definition 3.4
A subset C of Rn is convex if for any two points y, z ∈ C and any
0 ≤ λ ≤ 1, the point
λy + (1− λ)z
is also in C.
Geometrically this means that, for any pair of points y, z ∈ C, the
line segment connecting these points is in C.
51 / 423
Theorem 3.1
Let C1, . . . ,Cn be a collection of convex sets. Then
C1 ∩ C2 ∩ · · · ∩ Cn
is a convex set.
That is, the intersection of any finite number of convex sets is a
convex set.
Proof.
??
52 / 423
Definition 3.5
A set of points H ⊆ Rn satisfying a linear equation of the form
a1x1 + a2x2 + ...+ anxn = b,
for (a1, a2, . . . , an) ≠ (0, 0, . . . , 0), is a hyperplane.
Hyperplanes are of dimension n− 1. (Why?)
53 / 423
Definition 3.6
The two closed half-spaces defined by the hyperplane
a1x1 + a2x2 + ...+ anxn = b
are the set defined by
a1x1 + a2x2 + ...+ anxn ≥ b (positive half space)
and
a1x1 + a2x2 + ...+ anxn ≤ b (negative half space).
54 / 423
Theorem 3.2
Hyperplanes and their half-spaces are convex sets.
Proof.
??
55 / 423
Definition 3.7
A polytope is a set that can be expressed as the intersection of a
finite number of closed half-spaces.
56 / 423
Definition 3.8
A polyhedron is a non-empty bounded polytope.
57 / 423
Definition 3.9
A point x of a convex set C is said to be an extreme point or
vertex of C if whenever we write
x = λy + (1− λ)z,
with y, z ∈ C and 0 < λ < 1, then y = z = x.
Geometrically, this means that there are no points y and z in C
(different from x) with x lying on the line segment connecting y
and z.
58 / 423
§3.2 – Linear Independence
Definition 3.10
A collection of vectors x(1), ..., x(s) in Rn is said to be linearly
independent if
α1x(1) + α2x(2) + ... + αsx(s) = 0
implies that
α1 = α2 = ... = αs = 0.
Geometrically this means that no vector in the collection can be
expressed as a linear combination of the other vectors in the
collection.
59 / 423
Example 3.3
• The vectors (1, 0, 0), (0, 1, 0), (0, 0, 1) are linearly
independent.
• The vectors (2, 4, 3), (1, 2, 3), (1, 2, 0) are not linearly
independent.
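Both claims can be tested numerically, since vectors are linearly independent exactly when the matrix having them as rows has full rank; a NumPy sketch:

```python
# A sketch: testing linear independence via the matrix rank.
import numpy as np

V1 = np.array([(1, 0, 0), (0, 1, 0), (0, 0, 1)])
V2 = np.array([(2, 4, 3), (1, 2, 3), (1, 2, 0)])
print(np.linalg.matrix_rank(V1))  # 3 -> linearly independent
print(np.linalg.matrix_rank(V2))  # 2 -> dependent: (2,4,3) = (1,2,3) + (1,2,0)
```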
60 / 423
Theorem 3.3
The set of feasible solutions of the standard LP problem defined by
a11x1 + a12x2 + ...+ a1nxn ≤ b1
a21x1 + a22x2 + ...+ a2nxn ≤ b2
...
am1x1 + am2x2 + ...+ amnxn ≤ bm
xj ≥ 0, j = 1, ..., n
is a convex polytope.
61 / 423
Proof.
The feasible region is the intersection of finitely many
half-spaces: one for each functional constraint and one for each
non-negativity constraint. By definition, it is therefore a polytope.
We have seen that
• a half-space is a convex set, and
• the intersection of any finite number of convex sets is a
convex set.
The fact that the polytope is convex follows.
62 / 423
Theorem 3.4
If a linear programming problem has exactly one optimal solution,
then this solution must be an extreme point of the feasible region.
Proof.
We shall prove this theorem by contradiction.
63 / 423
Assume, to the contrary, that the problem has exactly one optimal
solution, call it x, that is not an extreme point of the feasible
region.
Since x is not an extreme point, there are two distinct feasible
solutions, say y and z, which are not equal to x and a scalar λ,
0 < λ < 1, such that
x = λy + (1− λ)z.
64 / 423
If we rewrite the objective function in terms of y and z rather than
x, we obtain
f(x) = f(λy + (1− λ)z).
Hence
f(x) = c1x1 + ... + cnxn
     = c1[λy1 + (1 − λ)z1] + ... + cn[λyn + (1 − λ)zn]
     = λ(c1y1 + ... + cnyn) + (1 − λ)(c1z1 + ... + cnzn)
     = λf(y) + (1 − λ)f(z).
65 / 423
Now, because 0 < λ < 1, there are only three possibilities
1. f(y) < f(x) < f(z)
2. f(z) < f(x) < f(y)
3. f(y) = f(x) = f(z)
Since x is an optimal solution, the first two possibilities cannot
occur (why?). Thus, the third case must hold.
But this contradicts the assertion that x is the only optimal
solution. Our initial assumption that x is not an extreme point
must be wrong.
66 / 423
As an exercise, prove the following:
Theorem 3.5
If a linear programming problem has more than one optimal
solution, it must have infinitely many optimal solutions.
Furthermore, the set of optimal solutions is convex.
67 / 423
§3.3 – Standard Form
We would like to write our linear programs in a fixed format so
that it is easier to apply algorithms, such as the Simplex and
Interior Point methods. One such format is the standard form,
which means that
• It must be a maximisation problem;
• the constraints must be ≤;
• bi ≥ 0, for all i: each RHS must be nonnegative;
• xi ≥ 0 for all i, all variables are non-negative.
68 / 423
In some cases, we are not able to achieve bi ≥ 0 and ≤
simultaneously, and therefore the LP cannot be written in standard
form. Later we will introduce the so-called canonical form, and
show that any LP can be written in this form.
For the standard form conversion you should familiarise yourself
with converting from:
• a minimum objective function to a maximum;
• the case where some RHS constants are negative;
• some constraints are ≥;
• some variables are unrestricted.
69 / 423
Definition 3.11
An LP problem is in standard form if it is of the form:
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ...+ amnxn ≤ bm
xj ≥ 0, j = 1, ..., n
where bi ≥ 0 for all i.
70 / 423
With

A = ⎡ a11 a12 · · · a1n ⎤
    ⎢ a21 a22 · · · a2n ⎥
    ⎢  ⋮    ⋮         ⋮ ⎥
    ⎣ am1 am2 · · · amn ⎦

b = (b1, b2, ..., bm)T, c = (c1, c2, ..., cn)T, x = (x1, x2, ..., xn)T,

an LP in standard form can be written as:
max z = cTx
Ax ≤ b
x ≥ 0
with b ≥ 0.
The matrix A is called the coefficient matrix of the linear program.
Often we write A = (aij).
71 / 423
§3.4 – Canonical Form
If we have an LP in standard form, we can turn it into an LP in
canonical form by introducing new variables xj ≥ 0,
j = n+ 1, ..., n+m that turn the inequalities into equations:
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn + xn+1 = b1
a21x1 + a22x2 + ... + a2nxn + xn+2 = b2
...
am1x1 + am2x2 + ...+ amnxn + xn+m = bm
xj ≥ 0, j = 1, ..., n+m
As in the standard form, bi ≥ 0 for all i.
72 / 423
The variables xj ≥ 0, j = n+ 1, ..., n+m are called slack
variables.
Physically, they denote the difference between the right hand side
and the left hand side of the inequalities in the standard form.
When a slack variable takes the value zero, the corresponding
inequality is satisfied with equality.
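A small NumPy sketch of this construction, using the data of Example 2.1 (the names A, b and x0 are illustrative):

```python
# A sketch: appending slack columns to pass from standard to canonical form.
import numpy as np

A = np.array([[2., 1.], [1., 1.], [1., 0.]])    # Example 2.1 coefficients
b = np.array([40., 30., 15.])
A_canon = np.hstack([A, np.eye(len(b))])        # coefficient matrix [A | I]
x0 = np.concatenate([np.zeros(A.shape[1]), b])  # trivial bfs (0, ..., 0, b)
assert np.allclose(A_canon @ x0, b)             # it satisfies the equations
```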
73 / 423
Lemma 3.1
When bi ≥ 0 for all i, the canonical form
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn + xn+1 = b1
a21x1 + a22x2 + ... + a2nxn + xn+2 = b2
...
am1x1 + a m2x2 + ... + amnxn + xn+m = bm
xj ≥ 0, j = 1, ..., n+m
has at least one feasible solution, x = (0, 0, 0, ..., 0, b1, b2, ..., bm).
This solution is obtained by:
• Setting all the original variables to zero
• Setting the slack variables to the respective right-hand side
values.
74 / 423
Definition 3.12
In general, an LP problem
max z = c1x1 + ... + cn+mxn+m
a11x1 + ... + a1nxn + a1,n+1xn+1 + ... + a1,n+mxn+m = b1
a21x1 + ... + a2nxn + a2,n+1xn+1 + ... + a2,n+mxn+m = b2
...
am1x1 + ... + amnxn + am,n+1xn+1 + ... + am,n+mxn+m = bm
xj ≥ 0, j = 1, ..., n+m
is in canonical form if
• there exist m columns of A = (aij) which are, respectively, the
unit vectors (1, 0, ..., 0)T, (0, 1, ..., 0)T, ..., (0, 0, ..., 1)T;
• bi ≥ 0 for all i.
75 / 423
Example 3.4
The LP problem
max z = 2x1 − 3x2 + 4x3 + x5
2x1 + 0x2 + 1x3 + 3x4 + 0x5 = 6
−x1 + 1x2 + 0x3 + 0x4 + 0x5 = 2
2x1 + 0x2 + 0x3 + 6x4 + 1x5 = 2
x1, x2, x3, x4, x5 ≥ 0
is in canonical form because the 3rd, 2nd and 5th columns of the
coefficient matrix are, respectively,
(1, 0, 0)T, (0, 1, 0)T and (0, 0, 1)T.
Note that (0, 2, 6, 0, 2) is a feasible solution to this LP problem.
76 / 423
Lemma 3.2
Suppose
max z = c1x1 + ... + cn+mxn+m
a11x1 + ... + a1nxn + a1,n+1xn+1 + ... + a1,n+mxn+m = b1
...
am1x1 + ... + amnxn + am,n+1xn+1 + ... + am,n+mxn+m = bm
xj ≥ 0, j = 1, ..., n+m
is in canonical form and columns j1, . . . , jm are, respectively, the
unit vectors (1, 0, ..., 0)T, (0, 1, ..., 0)T, ..., (0, 0, ..., 1)T. Then
the LP has at least one feasible solution, which is given by:
• Setting xj1 = b1, . . . , xjm = bm
• Setting all other variables to zero.
77 / 423
§4 – Basic Feasible Solutions
Synopsis
• What is a basic feasible solution (bfs);
• Extreme points correspond to basic feasible solutions, and vice
versa.
78 / 423
§4.1 – Basic Feasible Solutions
Consider the system of m linear equations with k variables, where
k ≥ m,
a11x1 + ...+ a1kxk = b1
...
am1x1 + ... + amkxk = bm
and assume that A = (aij) has rank m.
79 / 423
Definition 4.1
A basic solution to the system is constructed by
• selecting m columns of A that are linearly independent;
• setting the k − m variables that correspond to the other
columns to zero;
• solving the m × m system of equations that remains, to assign
values to the variables that correspond to the selected columns.
The variables corresponding to the selected m columns are basic
variables, and the other variables are nonbasic variables.
80 / 423
Definition 4.2
A basic feasible solution of an LP problem
max/min z = cTx
Ax = b
x ≥ 0,
where rank(A) is equal to the number of rows of A, is a basic
solution of the linear equation system Ax = b that also satisfies
the non-negativity constraints xi ≥ 0 for all i.
• Any basic feasible solution is a feasible solution, but the
converse is not true.
81 / 423
Example 4.1
Standard Form
The LP
max 3x1 + 4x2
2x1 + x2 ≤ 4
x1 + 2x2 ≤ 3
x1, x2 ≥ 0,
is in standard form.
82 / 423
Canonical Form
The corresponding canonical form is
max 3x1 + 4x2
2x1 + x2 + x3 = 4
x1 + 2x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
The ‘trivial basic feasible solution’, derived by selecting variables
x3 and x4 to be basic is x = (0, 0, 4, 3).
83 / 423
What are the other basic feasible solutions?
Suppose we select x2 and x3 to be basic. Then, the reduced
system is
x2 + x3 = 4
2x2 = 3.
This yields the basic feasible solution
x = (0, 3/2, 5/2, 0).
84 / 423
If we select x1 and x2 to be the basic variables, the reduced system
is
2x1 + x2 = 4
x1 + 2x2 = 3.
This yields the basic feasible solution
x = (5/3, 2/3, 0, 0).
85 / 423
If we select x1 and x3 as basic variables, the reduced system is
2x1 + x3 = 4
x1 = 3
This yields the basic solution
x = (3, 0,−2, 0)
which is not feasible.
The two other basic solutions are (2, 0, 0, 1), which is basic
feasible, and (0, 4, 0,−5), which is not basic feasible.
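This enumeration is easy to reproduce mechanically; a sketch with NumPy and itertools that tries every choice of two basic columns:

```python
# A sketch: all basic solutions of 2x1 + x2 + x3 = 4, x1 + 2x2 + x4 = 3.
from itertools import combinations
import numpy as np

A = np.array([[2., 1., 1., 0.],
              [1., 2., 0., 1.]])
b = np.array([4., 3.])
for cols in combinations(range(4), 2):       # choose 2 columns to be basic
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                             # columns not linearly independent
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)    # solve for the basic variables
    print(cols, x, "feasible" if np.all(x >= -1e-12) else "infeasible")
```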
86 / 423
Lemma 4.1
(Lemma 3.2 restated) If
max z = c1x1 + ... + cn+mxn+m
a11x1 + ... + a1nxn + a1,n+1xn+1 + ... + a1,n+mxn+m = b1
...
am1x1 + ... + amnxn + am,n+1xn+1 + ... + am,n+mxn+m = bm
xj ≥ 0, j = 1, ..., n+m
is in canonical form and columns j1, j2, . . . , jm are, respectively, the
unit vectors (1, 0, ..., 0)T, (0, 1, ..., 0)T, ..., (0, 0, ..., 1)T.
Then x = (x1, . . . , xn, xn+1, . . . , xn+m) ∈ Rn+m defined by
xj1 = b1, . . . , xjm = bm, all other xj = 0
is a basic feasible solution.
87 / 423
Example 4.2
The LP problem
max z = 2x1 − 3x2 + 4x3 + x5
2x1 + 0x2 + 1x3 + 3x4 + 0x5 = 6
−x1 + 1x2 + 0x3 + 0x4 + 0x5 = 2
2x1 + 0x2 + 0x3 + 6x4 + 1x5 = 2
x1, x2, x3, x4, x5 ≥ 0
is in canonical form.
So x = (0, 2, 6, 0, 2) is a basic feasible solution, with x2, x3 and x5
basic variables.
88 / 423
§4.2 – Extreme Points and Basic Feasible Solutions
Why are we interested in basic feasible solutions?
It is because they correspond to extreme points.
Geometry                  Algebra
Extreme Point    ⇐⇒    Basic Feasible Solution
89 / 423
Consider the standard form linear programming problem:
max z = c1x1 + c2x2 + ... + cnxn
such that
a11x1 + ... + a1nxn ≤ b1
a21x1 + ... + a2nxn ≤ b2
...
am1x1 + ...+ amnxn ≤ bm
xj ≥ 0, j = 1, ..., n.
The feasible region is a convex polytope S, which is a subset of
Rn.
90 / 423
The canonical form of this linear program is
max z = c1x1 + c2x2 + ... + cnxn
such that
a11x1 + ... + a1nxn + ... + a1,n+mxn+m = b1
a21x1 + ... + a2nxn + ... + a2,n+mxn+m = b2
...
am1x1 + ... + amnxn + ... + am,n+mxn+m = bm
xj ≥ 0, j = 1, ..., n+m,
where (aij), for i = 1, ..., m and j = n+1, ..., n+m, forms the
m×m identity matrix.
91 / 423
The coefficient matrix (aij), for i = 1, ..., m and j = 1, ..., n+m,
clearly has rank m (since the final m columns are linearly
independent).
By the definition of standard form, we also know that bi ≥ 0, for
all i.
92 / 423
Theorem 4.1
The vector x̃ ≡ (x1, . . . , xn) in Rn is an extreme point of S if and
only if the corresponding vector x = (x1, . . . , xn+m) in Rn+m,
obtained by appending the slack values xn+i = bi − (ai1x1 + · · · + ainxn),
is a basic feasible solution of the set of equations
a11x1 + ... + a1nxn + ... + a1,n+mxn+m = b1
a21x1 + ... + a2nxn + ... + a2,n+mxn+m = b2
...
am1x1 + ... + amnxn + ... + am,n+mxn+m = bm
xj ≥ 0, j = 1, ..., n+m.
93 / 423
Proof:
This is an ‘if and only if’ theorem. Frequently we have to prove
theorems like this by dealing with the ‘if’ and ‘only if’ parts
separately. We do this here.
94 / 423
‘if’ ⇐
In this part we have to show that if x = (x1, . . . , xn+m) is a basic
feasible solution of the equations then x̃ is an extreme point of S.
We use ideas that we shall return to later in the subject.
Corresponding to any basic feasible solution x ∈ Rn+m, we can
partition the set of indices into those corresponding to basic
variables, which correspond to the m linearly independent columns
of the coefficient matrix, and those corresponding to non-basic
variables which are set to be zero. Thus we can write
x = [xB, 0].
A slight subtlety is that some of the components in xB can also
be zero.
95 / 423
We can also partition A according to basic and non-basic variables,
so that
A = [AB, ANB].
Then
Ax = b
is the same as
[AB, ANB] (xB, 0)T = b,
which implies that
ABxB = b.
By definition, the columns of AB are linearly independent and so
AB is nonsingular. Thus
xB = AB⁻¹b.
96 / 423
Let y ≡ (y1, . . . , yn+m) be a feasible solution to the set of
equations. Because the final m columns of the coefficient matrix
correspond to the identity matrix, we know that, for any
i = 1, . . . , m,
yn+i = bi − (ai1y1 + · · · + ainyn).
This means the value of ỹ ≡ (y1, . . . , yn) determines the value of
y, and vice versa.
With this machinery, we can now prove the ‘if’ part. We use proof
by contradiction.
97 / 423
Let x ≡ (x1, . . . , xn+m) be a basic feasible solution to the set of
equations, and assume that x̃ ≡ (x1, . . . , xn) in Rn is not an
extreme point of S.
Then there exist feasible ỹ = (y1, . . . , yn) and z̃ = (z1, . . . , zn),
not equal to x̃, and λ ∈ (0, 1) such that
x̃ = λỹ + (1 − λ)z̃.
Since ỹ and z̃ are feasible, we can add nonnegative slack variables
(yn+1, . . . , yn+m) and (zn+1, . . . , zn+m), as on the previous slide,
such that
Ay = b
Az = b.
where y = (y1, . . . , yn+m) and z = (z1, . . . , zn+m).
98 / 423
Moreover, since, for any i = 1, . . . ,m,
xn+i = bi − (ai1x1 + · · · + ainxn),
we know that
xn+i = λyn+i + (1− λ)zn+i
and so we can write
x = λy + (1− λ)z.
99 / 423
Partition y and z according to the basic/non-basic partition of x so
that y = [yB, yNB] and z = [zB, zNB].
Looking first at the non-basic variables, we observe that
0 = λyNB + (1− λ)zNB.
Furthermore yNB and zNB are nonnegative. This implies that
yNB = zNB = 0.
100 / 423
It follows that
[AB, ANB] (yB, 0)T = b
and
[AB, ANB] (zB, 0)T = b,
which, because AB is nonsingular, implies that
yB = zB = AB⁻¹b.
101 / 423
Thus x, y and z are all the same, which implies that (x1, . . . , xn),
(y1, . . . , yn) and (z1, . . . , zn) are all the same, and this contradicts
our assumption that ỹ and z̃ are not equal to x̃.
We conclude that x̃ = (x1, . . . , xn) cannot be written as a convex
combination of two different feasible points. It must therefore be
an extreme point.
102 / 423
‘only if’ ⇒
In this part we have to show that if x̃ = (x1, . . . , xn) is an extreme
point of S, then x = (x1, . . . , xn+m), where
xn+i = bi − (ai1x1 + · · · + ainxn),
is a basic feasible solution of the constraint equations in the
canonical form.
103 / 423
We start by partitioning x in a different manner from the ‘if’ part
of the proof.
Here we partition x into [x+, 0], where the components in x+ are
strictly positive.
We also partition A in the same way so that A = [A+, A0] and
[A+, A0] (x+, 0)T = b.
So we have
A+x+ = b.
104 / 423
We want to show that the columns of A+ are linearly independent.
Again, we do this using a proof by contradiction.
Assume that the columns of A+ are not linearly independent.
Then there is a non-zero w+ (not necessarily nonnegative) such
that
A+w+ = 0.
105 / 423
Since x+ > 0, there exists ε > 0 such that x+ > εw+ and x+ > −εw+.
(These inequalities should be read coordinate-wise.)
We have
[A+, A0] (x+ + εw+, 0)T = A+(x+ + εw+) = A+x+ + εA+w+ = b.
Since x+ + εw+ > 0, (x+ + εw+, 0) is a feasible solution to the
constraint equations, and its first n components correspond to a
point in the feasible region S.
Similarly, since x+ − εw+ > 0, the first n components of
(x+ − εw+, 0) correspond to a point in the feasible region S.
106 / 423
Now
x+ = (1/2)(x+ + εw+) + (1/2)(x+ − εw+),
so that
[x+, 0] = (1/2)[x+ + εw+, 0] + (1/2)[x+ − εw+, 0]
and x can be written as a convex combination of two distinct
feasible points.
Furthermore, the points [x+, 0], [x+ + εw+, 0] and [x+ − εw+, 0]
must differ in at least one of their first n components.
So x̃ can be written as a convex combination of two other feasible
points in S, which contradicts the assumption that it is an extreme
point of S.
107 / 423
So the columns of A+ must be linearly independent. Since there
cannot be any more than m linearly independent vectors in Rm,
this means that there cannot be any more than m positive
components in x.
We’d like to be able to say immediately that this means that x is a
basic feasible solution, but we can’t quite say this because it is
possible that there are s < m columns in A+, when there are fewer
than m positive components in x.
However, if this is the case, there must be m − s columns of A0 that
we can add to those of A+ to form a linearly independent set of m
columns, because we know that there are m linearly independent
columns of A overall, and we see that x is a basic feasible solution
with some of its basic variables equal to zero.
This proves the ‘only if’ part.
108 / 423
§5 – Fundamental Theorem of Linear Programming
Synopsis
• Fundamental theorem of Linear Programming
109 / 423
Theorem 5.1 (Fundamental Theorem of Linear Programming)
Consider the linear programming problem in canonical form
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + ... + a1(n+m)xn+m = b1
a21x1 + ... + a2(n+m)xn+m = b2
...
am1x1 + ... + am(n+m)xn+m = bm
xj ≥ 0, j = 1, . . . , n+m
(a) If this problem has a feasible solution, then it must have a
basic feasible solution.
(b) If this problem has an optimal solution, then it must have an
optimal basic feasible solution.
110 / 423
Proof
Part (a)
Let x be a feasible solution. We need to prove that the problem
has a basic feasible solution.
As in the proof of the ‘only if’ part of the previous theorem, we
partition x into a strictly positive part x+, with s entries, and a
zero part. Without loss of generality we may assume that this
partition is x = [x+, 0].
We also partition A = [A+, A0] in the same way. Then Ax = b
becomes
[A+, A0] (x+, 0)T = b,
that is,
A+x+ = b.
111 / 423
Case (1): The columns of A+ are linearly independent.
In this case, s ≤ m, and either
• s = m, in which case x is a basic feasible solution by
definition, or
• s < m, in which case we can add m− s further columns to
make a linearly independent set as we did in the ‘only if’ part
of the previous theorem.
In either case we obtain a basic feasible solution.
112 / 423
Case (2): The columns of A+ are linearly dependent.
In this case we will iteratively construct a new feasible solution
such that the corresponding A+ (for the new feasible solution) has
independent columns. In this way we reduce to Case (1).
Since the columns of A+ are linearly dependent, there is a
non-zero w+ such that
A+w+ = 0.
Without loss of generality, we can assume wj > 0 for at least one
j.
113 / 423
As before, we have
[A+, A0] (x+ − εw+, 0)T = A+(x+ − εw+) = b
for any value of ε.
In addition, as long as ε ≥ 0 and x+ ≥ εw+ (that is, xj ≥ εwj for
each j with wj > 0), (x+ − εw+, 0) is nonnegative.
Now choose
ε∗ = min { xj/wj : wj > 0 }   (> 0).
Then (x+ − ε∗w+, 0) is a feasible solution. Moreover, one
component of x+ − ε∗w+ is zero, with the rest nonnegative.
114 / 423
Thus we have constructed a new feasible solution (x+ − ε∗w+, 0)
whose number of positive components is reduced by at least one
from that of x.
If the columns of the new A+ with respect to this new feasible
solution are linearly independent, the LP problem has a basic
feasible solution by Case (1).
Otherwise, we replace x by (x+ − ε∗w+, 0) and repeat the argument
above. We then get a third feasible solution whose number of
positive components is reduced by at least one from that of
(x+ − ε∗w+, 0).
115 / 423
Continue like this, all the time reducing the number of positive
components by at least one.
This process must terminate in a finite number of iterations
(why?) with a feasible solution such that the columns of the
corresponding A+ are linearly independent.
We then conclude by Case (1) that the LP problem has at least
one basic feasible solution.
116 / 423
Part (b)
Starting with an optimal feasible solution x, we can construct an
optimal basic feasible solution in exactly the same way as in Part
(a).
Case (1): The columns of A+ are linearly independent.
From the proof of Part (a) we see that x is an optimal basic
feasible solution.
117 / 423
Case (2): The columns of A+ are linearly dependent.
Using the notation from Part (a),
cT (x+ − εw+, 0)T = c1(x1 − εw1) + ... + cs(xs − εws)
                  = (c1x1 + ... + csxs) − ε(c1w1 + ... + csws)
                  = cTx − ε(c1w1 + ... + csws).
Since x is an optimal solution (that is, cTx achieves the
maximum) and (x+ − εw+, 0) is a feasible solution, we have
c1w1 + ... + csws ≥ 0.
118 / 423
Take a δ such that
0 < δ ≤ min { −xj/wj : wj < 0 }.
(The right-hand side is defined as ∞ if there are no wj that are
negative.)
Following the logic of Case (2) of Part (a), we can show that
(x+ + δw+, 0) is a feasible solution whose objective value is
cT (x+ + δw+, 0)T = c1(x1 + δw1) + ... + cs(xs + δws)
                  = cTx + δ(c1w1 + ... + csws).
Since x is optimal and δ > 0, we must have
c1w1 + ... + csws ≤ 0.
119 / 423
Combining the two inequalities above, we have
c1w1 + ... + csws = 0,
which implies
cT (x+ − ε∗w+, 0)T = cTx.
In other words, the value of the objective function is the same at
(x+ − ε∗w+, 0) as at (x+, 0).
Following the same recursive procedure as in Case (2) of Part (a),
we can construct a basic feasible solution with the same objective
value as x (and so is optimal) such that the columns of the
corresponding A+ are linearly independent, reducing to Case
(1).
120 / 423
Corollary 5.1
If the feasible region of the linear programming problem is not
empty then it must have at least one extreme point.
Corollary 5.2
The feasible region possesses at most a finite number of extreme
points (Can you suggest an upper bound?)
Corollary 5.3
If the linear programming problem possesses an optimal solution,
then there is an optimal solution which is an extreme point of the
feasible region.
121 / 423
§6 – Preview of the Simplex Method
Synopsis
• Methods for solving LPs
• Idea of the Simplex Method
• An example
• The Simplex Method
122 / 423
§6.1 – Methods for Solving LPs
• Given a linear program with an optimal solution, at least one
of the optimal solutions is an extreme point of the feasible
region.
• So how about solving the problem by enumerating all the
extreme points of the feasible region?
123 / 423
Since extreme points of the feasible region correspond to basic
feasible solutions, for an LP in canonical form with n+m variables
and m constraints, there are at most
C(n+m, m) = (n+m)! / (m! n!)
extreme points.
For large n and m, this yields a very large number of possible
extreme points. This is an example of the Curse of Dimensionality.
For example, for n = m = 50, the maximum number of extreme
points is about 10^29.
It is impractical to find an optimal solution by enumerating all
extreme points.
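The figure above is easy to confirm (a one-line sketch; math.comb requires Python 3.8+):

```python
import math
print(math.comb(100, 50))  # 100891344545564193334812497256, about 10**29
```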
124 / 423
Methods for Solving LPs
• Simplex [Dantzig, 1947]: starts at an extreme point and moves
to another extreme point with improved objective value.
• Interior Point [Karmarkar, 1984]: moves from the (relative)
interior of the region or its faces towards an optimal solution.
125 / 423
§6.2 – Idea of the Simplex Method
The Simplex Method is an algorithm that ‘jumps’ from one
extreme point of the feasible region to another in such a way that
the value of the objective function is improved (or at least does not
become worse) at each stage.
126 / 423
Idea of the Simplex Method
We initialise with a particular basic feasible point and have a
criterion to check whether it is optimal. If it is, then we stop.
Otherwise we perform an iteration to generate a new basic feasible
point and check again. We keep on iterating until the algorithm
terminates.
Input: A Basic Feasible Solution
if Optimality criterion is satisfied then
Exit. We are done.
else
Generate a new Basic Feasible Solution that is at least as
good as the old one.
end if
The iterative step involves moving along an edge of the feasible
region to get to a neighbouring extreme point.
127 / 423
§6.3 – An Example
max z = 3x1 + 2x2
2x1 − x2 ≤ 1
−3x1 + 4x2 ≤ 13
x1 + x2 ≤ 5
x1, x2 ≥ 0
This LP problem is in standard form.
Convert it to canonical form by introducing slack variables.
128 / 423
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
Initial basic variables: x3, x4, x5 (which are the slack variables)
Initial bfs: x = (0, 0, 1, 13, 5).
Initial value of objective function: z = 3 · 0 + 2 · 0 = 0
If either x1 or x2 were to become a basic variable and take on a
positive value, the value of z would increase. Why?
How about if the cost coefficient of x1 in z is negative?
129 / 423
[Figure: the feasible region in the (x1, x2)-plane bounded by
2x1 − x2 = 1, −3x1 + 4x2 = 13, x1 + x2 = 5 and the axes, with
extreme points (0, 0), (1/2, 0), (2, 3), (1, 4) and (0, 13/4)]
The initial bfs x = (0, 0, 1, 13, 5) corresponds to the extreme point
(0, 0) in R2.
130 / 423
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
Which one of x1 and x2 should we choose?
Close to the point (0, 0), there are 3 units of increase in z per unit
increase of x1, and 2 units of increase in z per unit increase of x2.
Either x1 or x2 can be the entering basic variable.
However, it seems ‘reasonable’ to believe that bringing in x1 will
lead to a greater increase in the objective function z.
131 / 423
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
Greedy Rule
If z is written in terms of the current nonbasic variables, then the
largest positive cost coefficient determines the entering (basic)
variable.
So in this example we choose x1 to be the entering basic variable.
If we write the objective function z = 3x1 + 2x2 as
z − 3x1 − 2x2 = 0,
this means we choose the most negative coefficient in this
equation.
132 / 423
How about if all cost coefficients are negative or zero? For
example,
max z = −2x1 − 4x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
The current value of z is maximum because allowing any nonbasic
variable to become positive would not increase the value of z.
Optimality Criterion
If z is written in terms of the current non-basic variables and if all
cost coefficients are negative or zero, then the current value of z is
maximum and the current bfs is optimal.
133 / 423
Returning to our example ...
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
We want to increase the entering variable x1 as much as possible.
But we do not want to leave the feasible region.
What is the maximum increase of x1 without causing any basic
variable to become negative?
134 / 423
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
Since x2 remains non-basic, its value is equal to zero. So we have
x3 = 1 − 2x1 ≥ 0 ⇒ 2x1 ≤ 1 ⇒ x1 ≤ 1/2
x4 = 13 + 3x1 ≥ 0 ⇒ −3x1 ≤ 13, satisfied by any x1 ≥ 0
x5 = 5 − x1 ≥ 0 ⇒ x1 ≤ 5/1 = 5
The largest allowable positive value for x1 is 1/2.
If x1 = 1/2, then x3 = 0 and so x3 becomes the leaving (basic)
variable.
135 / 423
Ratio Test
Assume that the entering basic variable has been chosen. For every
equation for which the coefficient of the entering variable is
positive, form the ratio of the resource value to that coefficient.
Select the equation that produces the smallest of these ratios.
The basic variable for the selected equation is the leaving (basic)
variable.
136 / 423
Restoring the Canonical Form
Restore the canonical form with basic variables x1, x4 and x5 by
pivoting on 2x1 in the first equation.
max z = 3x1 + 2x2
2x1 − x2 + x3 = 1
−3x1 + 4x2 + x4 = 13
x1 + x2 + x5 = 5
x1, x2, x3, x4, x5 ≥ 0
x1 − (1/2)x2 + (1/2)x3 = 1/2
(5/2)x2 + (3/2)x3 + x4 = 29/2
(3/2)x2 − (1/2)x3 + x5 = 9/2
So the bfs is now (1/2, 0, 0, 29/2, 9/2) and the objective function
value is z = 3(1/2) + 2(0) = 3/2.
137 / 423
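The same pivot can be carried out as elementary row operations; a NumPy sketch on the augmented matrix [A | b]:

```python
# A sketch: pivoting on the 2x1 entry (row 0, column 0).
import numpy as np

M = np.array([[ 2., -1., 1., 0., 0.,  1.],   #  2x1 -  x2 + x3      =  1
              [-3.,  4., 0., 1., 0., 13.],   # -3x1 + 4x2 +    x4   = 13
              [ 1.,  1., 0., 0., 1.,  5.]])  #   x1 +  x2 +      x5 =  5
M[0] /= M[0, 0]            # scale the pivot row
M[1] -= M[1, 0] * M[0]     # eliminate x1 from row 1
M[2] -= M[2, 0] * M[0]     # eliminate x1 from row 2
print(M)                   # row 0 now reads x1 - x2/2 + x3/2 = 1/2, etc.
```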
x1 − (1/2)x2 + (1/2)x3 = 1/2
(5/2)x2 + (3/2)x3 + x4 = 29/2
(3/2)x2 − (1/2)x3 + x5 = 9/2
Express z in terms of the non-basic variables x2 and x3.
The first constraint leads to x1 = 1/2 + (1/2)x2 − (1/2)x3 and so
z = 3x1 + 2x2 = 3(1/2 + (1/2)x2 − (1/2)x3) + 2x2 = 3/2 + (7/2)x2 − (3/2)x3,
that is,
z − 3/2 − (7/2)x2 + (3/2)x3 = 0.
Exercise: verify that this can be obtained from z − 3x1 − 2x2 = 0
by pivoting on 2x1.
138 / 423
Now we have a new canonical system:
max z = 3/2 + (7/2)x2 − (3/2)x3
x1 − (1/2)x2 + (1/2)x3 = 1/2
(5/2)x2 + (3/2)x3 + x4 = 29/2
(3/2)x2 − (1/2)x3 + x5 = 9/2
x1, x2, x3, x4, x5 ≥ 0
Current basic variables: x1, x4, x5.
Current bfs: (1/2, 0, 0, 29/2, 9/2).
Current value: z = 3/2.
139 / 423
[Figure: the same feasible region, with extreme points (0, 0),
(1/2, 0), (2, 3), (1, 4) and (0, 13/4)]
The current bfs (1/2, 0, 0, 29/2, 9/2) corresponds to the extreme
point (1/2, 0) in R2.
What should we do with this new canonical form? Repeat the
procedure!
140 / 423
§7 – The Simplex Method
Input: A max Linear Program in standard form
Output: An optimal solution
1: Construct canonical form and obtain a basic feasible solution
2: while There are negative reduced costs do
3: Select entering variable with most negative reduced cost.
4: Select leaving variable using the ratio test.
5: Pivot the tableau.
6: end while
Here the reduced costs are the coefficients of the variables when
we write the objective function as an equation such that all terms
are on the same side and the coefficient of z is equal to 1.
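A compact NumPy sketch of the whole loop for a max problem already in standard form (an illustration of the rules above, not production solver code; it does not guard against cycling):

```python
import numpy as np

def simplex(c, A, b):
    """A minimal sketch of the tableau Simplex Method for
    max cTx subject to Ax <= b, x >= 0, with b >= 0 (standard form)."""
    m, n = A.shape
    T = np.zeros((m + 1, n + m + 1))        # constraint rows plus the z-row
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)              # slack-variable columns
    T[:m, -1] = b                           # RHS column
    T[-1, :n] = -np.asarray(c, float)       # reduced costs of x1, ..., xn
    basis = list(range(n, n + m))           # the slacks form the initial basis
    while T[-1, :-1].min() < -1e-12:        # optimality criterion
        j = int(np.argmin(T[-1, :-1]))      # greedy rule: entering variable
        ratios = [T[r, -1] / T[r, j] if T[r, j] > 1e-12 else np.inf
                  for r in range(m)]        # ratio test
        i = int(np.argmin(ratios))
        if ratios[i] == np.inf:
            raise ValueError("LP is unbounded")
        T[i] /= T[i, j]                     # pivot operation: scale pivot row,
        for r in range(m + 1):              # then eliminate the pivot column
            if r != i:
                T[r] -= T[r, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Example 2.1: max 4x1 + 3x2, 2x1 + x2 <= 40, x1 + x2 <= 30, x1 <= 15.
x, z = simplex([4, 3],
               np.array([[2., 1.], [1., 1.], [1., 0.]]),
               np.array([40., 30., 15.]))
print(x, z)   # expected (10, 20) with z = 100, as in Example 7.2 below
```

Running it on Example 2.1 reproduces the sequence of pivots worked through later in this section.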
141 / 423
More Details
To come...
• Initialisation
• Streamlining the process
• Optimality test
• Iteration
• LP with no feasible solution
• LP with no optimal solution
• LP with multiple optimal solutions
142 / 423
§7.1 – Initialisation
We start with an LP in standard form:
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ...+ amnxn ≤ bm
xj ≥ 0, j = 1, ..., n
143 / 423
To initialise the Simplex Algorithm, we transform the LP into
canonical form by introducing slack variables.
max z = c1x1 + c2x2 + ... + cnxn
a11x1 + a12x2 + ... + a1nxn + xn+1 = b1
a21x1 + a22x2 + ... + a2nxn + xn+2 = b2
...
am1x1 + am2x2 + ...+ amnxn + xn+m = bm
xj ≥ 0, j = 1, ..., n+m
If we use the slack variables xn+1 . . . , xn+m as basic variables, we
obtain the initial basic feasible solution, namely
(0, . . . , 0, b1, . . . , bm).
144 / 423
Example 7.1
max z = 4x1 + 3x2
2x1 + x2 ≤ 40
x1 + x2 ≤ 30
x1 ≤ 15
x1, x2 ≥ 0
can be rewritten as
145 / 423
max z = 4x1 + 3x2
2x1 + x2 + x3 = 40
x1 + x2 + x4 = 30
x1 + x5 = 15
x1, x2, x3, x4, x5 ≥ 0
where x3, x4, x5 are slack variables.
The initial basic feasible solution is x = (0, 0, 40, 30, 15).
146 / 423
Summary of the Initialisation Step
• Convert the standard LP problem to canonical form
• Select the slack variables as basic variables
• Obtain the initial basic feasible solution
Comments:
• This is simple.
• This initial basic feasible solution is not necessarily a good
choice: it can be (very) far from an optimal solution.
147 / 423
§7.2 – Iteration
• We are at an extreme point of the feasible region.
• We want to move to an adjacent extreme point.
• We want to move to a better extreme point.
Observation:
• Basic feasible solutions which differ only in that one basic
variable is interchanged with one non-basic variable
correspond to adjacent feasible extreme points.
148 / 423
Moving to an adjacent extreme point
• Step 1: Select which non-basic variable becomes basic
• Step 2: Determine which basic variable becomes non-basic
• Step 3: Reconstruct a new canonical form reflecting this
change
149 / 423
§7.3 – The Simplex Tableau
• It is convenient to describe how the Simplex Method works
using a table (= tableau).
• There are a number of different layouts for these tables.
• In this subject, all of us should use the layout specified in
these lecture slides.
150 / 423
• It is convenient to incorporate the objective function into the
formulation as a functional constraint.
• We can do this by viewing z, the value of the objective
function, as a decision variable, and introduce the additional
constraint
z = c1x1 + c2x2 + ... + cnxn,
or equivalently
z − c1x1 − c2x2 − ...− cnxn = 0.
151 / 423
The Initial Tableau
BV x1 ... xn xn+1 ... xn+m RHS
xn+1 a11 ... a1n 1 ... 0 b1
xn+2 a21 ... a2n 0 ... 0 b2
· · ·
xn+m am1 ... amn 0 ... 1 bm
z −c1 ... −cn 0 ... 0 0
• We refer to the last row as the z-row.
• We omitted the z-column since it is (0, 0, . . . , 1)T , and it
stays like this along the way.
• The reduced costs of x1, . . . , xn are −c1, . . . ,−cn,
respectively, and the reduced costs of the basic variables
xn+1, . . . , xn+m are zero. (See Section 6.4.)
• The tableau above is in canonical form.
• In general, a tableau is said to be in canonical form if
◦ all bi ≥ 0,
◦ m of the columns form the m×m identity matrix (not
necessarily in order),
◦ the reduced costs for basic variables are all equal to zero.
152 / 423
Example 7.2
max z = 4x1 + 3x2
such that
2x1 + x2 + x3 = 40
x1 + x2 + x4 = 30
x1 + x5 = 15
x1, x2, x3, x4, x5 ≥ 0
can be transformed to
153 / 423
max z
such that
2x1 + x2 + x3 = 40
x1 + x2 + x4 = 30
x1 + x5 = 15
z − 4x1 − 3x2 = 0
x1, x2, x3, x4, x5 ≥ 0.
154 / 423
The Initial Tableau
BV x1 x2 x3 x4 x5 RHS
x3 2 1 1 0 0 40
x4 1 1 0 1 0 30
x5 1 0 0 0 1 15
z −4 −3 0 0 0 0
155 / 423
§7.4 – Selecting a New Basic Variable (Entering Variable)
Question: Which one of the current non-basic variables should we
change so that it becomes basic?
Answer: The z-row
z − c1x1 − c2x2 − ...− cnxn = 0
tells us how the value of the objective function z changes initially,
as we change the decision variables.
156 / 423
Initially all x1, . . . , xn are non-basic variables. So if we decide to
add xj to the basis, then it will take on some nonnegative value.
The objective function will not improve unless cj > 0 (that is, the
reduced cost −cj < 0). Thus we want to select a non-basic
variable with a negative (or at least non-positive) reduced cost.
Moreover, since we are trying to maximize the objective function,
we select a non-basic variable with the largest value of cj , that is
the most negative reduced cost.
157 / 423
Greedy Rule (Restated)
In a maximisation problem, we choose a currently non-basic
variable with the most negative reduced cost to be the new basic
variable (often called the entering variable).
That is, we bring a non-basic variable with the most negative
reduced cost to the basis.
If there are two or more such non-basic variables, choose any one
of them.
However, to prevent cycling we usually choose the one with
smallest subscript. This issue will be discussed later.
158 / 423
Example (continued)
BV x1 x2 x3 x4 x5 RHS
x3 2 1 1 0 0 40
x4 1 1 0 1 0 30
x5 1 0 0 0 1 15
z −4 −3 0 0 0 0
The most negative reduced cost in the z-row is −4, corresponding
to j = 1.
Thus, we select x1 as the new basic variable.
159 / 423
§7.5 – Determining the New Non-basic Variable (Leaving
Variable)
Suppose we decided to select xj as a new basic variable.
Since the number of basic variables is fixed at m, we have to take
one variable out of the basis.
Which one?
Observation
• As we increase xj from zero, we can expect that, sooner or
later, one or more of the basic variables will become negative.
• We take the first such variable out of the basis and set it to
zero.
160 / 423
Example (continued)
2x1 + x2 + x3 = 40
x1 + x2 + x4 = 30
x1 + x5 = 15
• Suppose we select x1 as the new basic variable.
• Since x2 is a non-basic variable and is staying non-basic, its
value is zero. Thus the above system can be simplified.
161 / 423
Each equation involves only two variables:
• The new basic variable x1
• The old basic variable associated with the respective
constraint.
2x1 + x3 = 40
x1 + x4 = 30
x1 + x5 = 15.
i.e.
x3 = 40− 2x1
x4 = 30− x1
x5 = 15− x1.
162 / 423
Since we need x3, x4, x5 ≥ 0, we need
2x1 ≤ 40
x1 ≤ 30
x1 ≤ 15
That is,
x1 ≤ 40/2, x1 ≤ 30/1, x1 ≤ 15/1.
If the last inequality holds, then the other two inequalities hold. So
we can increase x1 up to 15. And if x1 = 15, then x5 = 0 and so
we should take x5 out of the basis.
163 / 423
More generally ....
If we have selected xj as the new basic variable, then the ith
functional constraint becomes
aijxj + xi = bi
where xi is an old basic variable.
We need
xi = bi − aijxj ≥ 0.
If aij ≤ 0, then this is always satisfied since xj ≥ 0 and bi ≥ 0.
If aij > 0, then the above is equivalent to xj ≤ bi/aij. So we need
xj ≤ min { bi/aij : aij > 0 }   (the maximum allowable increase of xj).
If this minimum is bi∗/ai∗j and we set xj = bi∗/ai∗j, then xi∗ = 0 and
so xi∗ comes out of the basis.
164 / 423
Ratio Test (Restated)
Given that we have selected the new basic variable xj , we take out
of the basis the old basic variable corresponding to row i such that
aij > 0
and the ratio
Ratioi := bi/aij
attains its smallest value.
Such a variable to be taken out of the basis is often called the
leaving variable.
165 / 423
Example (continued)
BV x1 x2 x3 x4 x5 RHS
x3 2 1 1 0 0 40
x4 1 1 0 1 0 30
x5 1 0 0 0 1 15
z −4 −3 0 0 0 0
Ratio1 = 40/2 = 20
Ratio2 = 30/1 = 30
Ratio3 = 15/1 = 15.
Thus we take x5 out of the basis.
166 / 423
§7.6 – Restoring the Canonical Form – Pivot Operation
• We interchanged a basic variable with a non-basic variable.
• So we have a new basis.
• We have to construct the simplex tableau for the new set-up.
• This is done by performing one pivot operation.
• We choose the entry in the column of the entering variable
and the row of the leaving variable as the pivot entry.
• We make this pivot entry 1 and all other entries in the same
column 0 by elementary row operations (just as in linear
algebra).
167 / 423
Example (continued)
Old Tableau
BV x1 x2 x3 x4 x5 RHS
x3 2 1 1 0 0 40
x4 1 1 0 1 0 30
x5 1 0 0 0 1 15
z −4 −3 0 0 0 0
The pivot entry is 1 in the x1-column and x5-row.
Pivoting on this entry we obtain ......
168 / 423
Example (continued)
New Tableau
BV x1 x2 x3 x4 x5 RHS
x3 0 1 1 0 −2 10
x4 0 1 0 1 −1 15
x1 1 0 0 0 1 15
z 0 −3 0 0 4 60
Note that this new tableau is in canonical form. Good! We will
keep canonical form along the way!
Note the change of bases.
Current basis: x3, x4, x1
Previous basis: x3, x4, x5
169 / 423
How do we “read” a simplex tableau?
BV x1 x2 x3 x4 x5 RHS
x3 0 1 1 0 −2 10
x4 0 1 0 1 −1 15
x1 1 0 0 0 1 15
z 0 −3 0 0 4 60
New basis:
x3, x4, x1.
New basic feasible solution:
x = (15, 0, 10, 15, 0).
New value of the objective function:
z = 60.
170 / 423
§7.7 – Optimality Criterion (Restated)
If there are non-basic variables with negative reduced costs, then
we have a chance to improve the objective function by adding one
of these variables to the basis.
On the other hand, if all the non-basic variables have nonnegative
coefficients in the z-row of the simplex tableau, then we cannot
improve the objective function and we stop.
If all the reduced costs are nonnegative, the current solution is
optimal.
171 / 423
Example (continued)
BV x1 x2 x3 x4 x5 RHS
x3 0 1 1 0 −2 10
x4 0 1 0 1 −1 15
x1 1 0 0 0 1 15
z 0 −3 0 0 4 60
• Optimality Test: There is at least one negative reduced cost.
Hence we have to continue with the algorithm.
172 / 423
Steps in each iteration
In each iteration we have three steps:
• Variable in: Greedy Rule
• Variable Out: Ratio Test
• Tableau update: Pivot Operation
173 / 423
Example (continued)
• Variable in: There is only one variable with negative reduced
costs, that is x2. So we set j = 2 and put x2 in the basis (i.e.
x2 is the entering variable).
• Variable out: We conduct the ratio test using the right-hand
side column (RHS) and the column of the new basic variable
x2.
x2 . . . RHS Ratio
1 . . . 10 10
1 . . . 15 15
0 . . . 15 –
The minimum ratio is attained at the first row. We therefore
take out of the basis the basic variable associated with row 1,
that is, x3 is the leaving variable. We set i = 1.
174 / 423
Updating tableau: We have to conduct the pivot operation on
(i = 1, j = 2).
Old Tableau
BV x1 x2 x3 x4 x5 RHS
x3 0 1 1 0 −2 10
x4 0 1 0 1 −1 15
x1 1 0 0 0 1 15
z 0 −3 0 0 4 60
The pivot entry is the one in the x2-column and x3-row.
175 / 423
After pivoting on (i = 1, j = 2), we obtain:
New Tableau
BV x1 x2 x3 x4 x5 RHS
x2 0 1 1 0 −2 10
x4 0 0 −1 1 1 5
x1 1 0 0 0 1 15
z 0 0 3 0 −2 90
• The current basis consists of x2, x4 and x1.
• The current basic feasible solution is x = (15, 10, 0, 5, 0).
• The value of the objective function at this point is equal to
z = 90.
• Optimality Test: There is at least one non-basic variable with
negative reduced cost so we continue.
176 / 423
Example (continued)
BV x1 x2 x3 x4 x5 RHS
x2 0 1 1 0 −2 10
x4 0 0 −1 1 1 5
x1 1 0 0 0 1 15
z 0 0 3 0 −2 90
• Variable in: The variable with the most negative reduced cost
is x5, so we place x5 in the basis and set j = 5.
• Variable out: We conduct the ratio test on the column of x5.
x5 . . . RHS Ratio
−2 . . . 10 –
1 . . . 5 5
1 . . . 15 15
The minimum ratio is attained at i = 2. Thus we take out of
the basis the basic variable associated with row 2, that is x4.
We conduct the pivot operation on (i = 2, j = 5).
177 / 423
After the pivot operation on (i = 2, j = 5), we obtain:
New Tableau
BV x1 x2 x3 x4 x5 RHS
x2 0 1 -1 2 0 20
x5 0 0 −1 1 1 5
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
• The new basis consists of x2, x5 and x1.
• The new basic feasible solution is x = (10, 20, 0, 0, 5).
• The value of the objective function at this point is equal to
z = 100.
• Optimality Test: All the reduced costs are nonnegative, hence
we stop. The current solution is optimal. Stop!
178 / 423
An Important Note
When we stop, we have to do two things:
• Write down the optimal solution and optimal value of the
objective function, namely x∗ and z∗.
• Check these values.
◦ Are the constraints satisfied?
◦ Is the value of z∗ consistent with the value of x∗?
179 / 423
Example (continued)
From the final tableau we obtain the optimal solution
x∗ = (10, 20, 0, 0, 5)
and the optimal value of the objective function
z∗ = 100.
An optimal solution to the original problem (before introducing
the slack variables) is
(x∗1, x∗2) = (10, 20).
Check objective value:
z∗ = 4x∗1 + 3x∗2 = 4× 10 + 3× 20 = 100.
Check constraints:
2x∗1 + x∗2 ≤ 40 (2× 10 + 1× 20 = 40, OK)
x∗1 + x∗2 ≤ 30 (1× 10 + 1× 20 = 30, OK)
x∗1 ≤ 15 (10 ≤ 15, OK)
180 / 423
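For this example the check can also be scripted. A minimal numpy sketch (an illustration of mine, not course code):

```python
import numpy as np

A = np.array([[2, 1], [1, 1], [1, 0]])   # coefficient rows of the three constraints
b = np.array([40, 30, 15])
c = np.array([4, 3])
x_star = np.array([10, 20])
assert np.all(A @ x_star <= b)           # all constraints satisfied
assert c @ x_star == 100                 # z* consistent with x*
```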
§7.8 – Complexity of the Simplex Algorithm
In theory the Simplex Algorithm is an exponential time algorithm.
This means that in the worst case the time taken to solve a
problem of size n can grow exponentially with n.
However, in almost all practical cases, the Simplex Algorithm does
not seem to perform anywhere nearly as badly as this worst case
bound. For example, the Simplex Algorithm has been used to solve
practical problems with tens of thousands of constraints and
hundreds of thousands of variables.
As mentioned earlier, there are polynomial time algorithms for
solving LP problems.
181 / 423
§7.9 – Example
Example 7.3
A water desalination plant bases its production on profit alone.
This plant can produce drinking grade ($60 per megalitre),
irrigation grade ($35 per megalitre) and industry grade ($20 per
megalitre) water for sale.
There are four filtration processes: electro-dialysis with capacity
(48 units per week), freezing (60 units per week), reverse osmosis
(16 units per week) and carbon (5 units per week).
A megalitre of drinking water requires 8, 12 and 4 units of the first
three and no carbon filtering. A megalitre of irrigation water
requires 6, 7, 3 and 1 unit respectively. A megalitre of industry
water requires only 1, 4 and 1 unit of the first three and no carbon
filtering.
182 / 423
Example (continued)
Let x1 be the number of megalitres of drinking water produced per
week,
Let x2 be the number of megalitres of irrigation water produced
per week
Let x3 be the number of megalitres of industry water produced per
week.
To maximise our weekly profit, we need to solve
max z = 60x1 + 35x2 + 20x3
8x1 + 6x2 + x3 ≤ 48
12x1 + 7x2 + 4x3 ≤ 60
4x1 + 3x2 + x3 ≤ 16
x2 ≤ 5
x1, x2, x3 ≥ 0
183 / 423
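As a cross-check, this LP can be handed to an off-the-shelf solver. A sketch using scipy (not part of the course materials; linprog minimises, so the objective is negated, and because some reduced costs turn out to be zero at the optimum the solver may report a different optimal point with the same value 300):

```python
from scipy.optimize import linprog

c = [-60, -35, -20]                          # negated: max becomes min
A_ub = [[8, 6, 1], [12, 7, 4], [4, 3, 1], [0, 1, 0]]
b_ub = [48, 60, 16, 5]
res = linprog(c, A_ub=A_ub, b_ub=b_ub)       # x >= 0 is linprog's default
print(res.x, -res.fun)                       # optimal value 300
```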
Example (continued)
• Step 1: We add slack variables and construct the canonical
form. This yields the first basic feasible solution.
max z = 60x1 + 35x2 + 20x3 + 0x4 + 0x5 + 0x6 + 0x7
8x1 + 6x2 + x3 + x4 = 48
12x1 + 7x2 + 4x3 + x5 = 60
4x1 + 3x2 + x3 + x6 = 16
x2 + x7 = 5
x1, x2, x3, x4, x5, x6, x7 ≥ 0
184 / 423
Example (continued)
• We rewrite this formulation as a Simplex Tableau.
BV x1 x2 x3 x4 x5 x6 x7 RHS
x4 8 6 1 1 0 0 0 48
x5 12 7 4 0 1 0 0 60
x6 4 3 1 0 0 1 0 16
x7 0 1 0 0 0 0 1 5
z −60 −35 −20 0 0 0 0 0
Initial basic feasible solution: (0, 0, 0, 48, 60, 16, 5).
185 / 423
Example (continued)
• Step 2: There are negative reduced costs, so we continue.
• Step 3: We select the non-basic variable with the most
negative reduced cost x1 as the new basic variable.
• Step 4: We conduct the ratio test on the column of the new
basic variable. Row 3 yields the minimum ratio so we take out
the basic variable x6 associated with row 3.
• Step 5: We perform the pivot operation on (i = 3, j = 1).
186 / 423
Example (continued)
Old Tableau
BV x1 x2 x3 x4 x5 x6 x7 RHS
x4 8 6 1 1 0 0 0 48
x5 12 7 4 0 1 0 0 60
x6 4 3 1 0 0 1 0 16
x7 0 1 0 0 0 0 1 5
z −60 −35 −20 0 0 0 0 0
187 / 423
Example (continued)
After performing the pivot operation on (i = 3, j = 1), we obtain:
New Tableau
BV x1 x2 x3 x4 x5 x6 x7 RHS
x4 0 0 −1 1 0 −2 0 16
x5 0 −2 1 0 1 −3 0 12
x1 1 3/4 1/4 0 0 1/4 0 4
x7 0 1 0 0 0 0 1 5
z 0 10 −5 0 0 15 0 240
188 / 423
Example (continued)
• Step 2: There are negative reduced costs, so we continue.
• Step 3: We select the non-basic variable with the most
negative reduced cost x3 as the new basic variable.
• Step 4: We conduct the ratio test on the column of the new
basic variable. Row 2 yields the minimum ratio so we take out
the basic variable x5 associated with row 2.
• Step 5: We perform the pivot operation on (i = 2, j = 3).
189 / 423
Old Tableau
BV x1 x2 x3 x4 x5 x6 x7 RHS
x4 0 0 −1 1 0 −2 0 16
x5 0 −2 1 0 1 −3 0 12
x1 1 3/4 1/4 0 0 1/4 0 4
x7 0 1 0 0 0 0 1 5
z 0 10 −5 0 0 15 0 240
190 / 423
After the pivot operation on (i = 2, j = 3), we obtain:
New Tableau
BV x1 x2 x3 x4 x5 x6 x7 RHS
x4 0 −2 0 1 1 −5 0 28
x3 0 −2 1 0 1 −3 0 12
x1 1 5/4 0 0 −1/4 1 0 1
x7 0 1 0 0 0 0 1 5
z 0 0 0 0 5 0 0 300
191 / 423
• Step 2: All the reduced costs are nonnegative. Thus, we Stop!
The current solution is optimal.
• Report: The optimal solution is
x∗ = (1, 0, 12, 28, 0, 0, 5)
and the optimal value of the objective function is equal to
z∗ = 300
• Don’t forget to check the results!
192 / 423
§8 – Solution Possibilities
We saw before that linear programming problems can have
• multiple optimal solutions,
• an unbounded objective function, or
• an empty feasible set.
We want to know how to recognise these situations when using the
Simplex Algorithm.
193 / 423
§8.1 – Multiple Optimal Solutions
If a non-basic variable xj in the final simplex tableau has a zero
reduced cost, then the corresponding linear programming problem
has multiple optimal solutions.
This follows because we can pivot on the column corresponding to
xj , thus bringing it into the basis and removing one of the
variables currently in the basis without changing the value of the
objective function.
If there are two optimal solutions, then there must be infinitely
many optimal solutions. Why?
194 / 423
Example
BV x1 x2 x3 x4 x5 RHS
x2 0 1 1 0 −2 10
x4 0 0 −1 1 1 5
x1 1 0 0 0 1 15
z 0 0 2 0 0 80
The current feasible solution x = (15, 10, 0, 5, 0) is optimal.
The non-basic variable x5 has a reduced cost equal to zero.
Pivoting on the (i = 2, j = 5)-entry, for example, we obtain:
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 5
x1 1 0 1 −1 0 10
z 0 0 2 0 0 80
So x = (10, 20, 0, 0, 5) is another optimal solution.
Any point on the line segment between (15, 10, 0, 5, 0) and
(10, 20, 0, 0, 5) is an optimal solution.
195 / 423
§8.2 – Unbounded Problems (No Optimal Solution)
This refers to the situation when the objective function can take
arbitrarily large value (for a maximisation problem) or arbitrarily
small value (for a minimisation problem) in the feasible region.
Therefore no optimal solution exists.
In terms of the simplex tableau, this case occurs when the value of
the new basic variable can be increased indefinitely without causing
any one of the old basic variables to become negative.
In other words, this occurs when the column of the new basic
variable consists of non-positive elements only, that is, when the
Ratio Test fails to identify a variable to be taken out of the basis.
196 / 423
Example
BV x1 x2 x3 x4 x5 RHS
x5 0 −6 0 1 1 25
x1 1 −2 0 6 0 40
x3 0 0 1 1 0 10
z 0 −3 0 2 0 80
According to the Greedy Rule, x2 is the new basic variable.
If we conduct the Ratio Test on the x2 column we fail to find a
variable to be taken out of the basis because there is no positive
element in this column.
This means that x2 can be increased indefinitely. Thus the
problem is unbounded: it has no optimal solution.
197 / 423
§8.3 – No Feasible Solutions
Some linear programming problems do not have feasible solutions
(that is, the feasible region is empty, or equivalently the problem is
infeasible).
How does this show in the simplex tableau?
This is a very important issue. However, problems in standard form
always have feasible solutions, so the appropriate place to recognise
when there are no feasible solutions is in the conversion of a
problem to standard form. We shall discuss this in the next section.
198 / 423
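For reference, here is how an off-the-shelf solver reports the three situations on toy LPs (the toy problems are my own, not from the notes; scipy's linprog uses status 0 = optimal, 2 = infeasible, 3 = unbounded):

```python
from scipy.optimize import linprog

opt = linprog([-1], A_ub=[[1]], b_ub=[1])           # max x1 s.t. x1 <= 1, x1 >= 0
unb = linprog([-1], A_ub=[[-1]], b_ub=[0])          # max x1 s.t. x1 >= 0: unbounded
inf = linprog([1], A_ub=[[1], [-1]], b_ub=[1, -2])  # x1 <= 1 and x1 >= 2: infeasible
print(opt.status, unb.status, inf.status)           # 0 3 2
```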
§8.4 – Cycling and Bland’s Rule
Question: Is it possible that the simplex procedure we described
will never stop?
Answer: Yes!
Reason: If there is a change in the basis but not in the value of the
objective function (that is, a basic variable whose value is zero
leaves the basis), the algorithm can cycle indefinitely through a
sequence of bases that all represent the same solution.
Thus cycling is caused by degenerate basic feasible solutions. They
are basic feasible solutions in which at least one basic variable has
a value of zero.
199 / 423
Example
Table 1
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 1/2 −11/2 −5/2 9 1 0 0 0
x6 1/2 −3/2 −1/2 1 0 1 0 0
x7 1 0 0 0 0 0 1 1
z −10 57 9 24 0 0 0 0
200 / 423
Example (continued)
Table 2
BV x1 x2 x3 x4 x5 x6 x7 RHS
x1 1 -11 −5 18 2 0 0 0
x6 0 4 2 −8 −1 1 0 0
x7 0 11 5 −18 −2 0 1 1
z 0 −53 −41 204 20 0 0 0
201 / 423
Example (continued)
Table 3
BV x1 x2 x3 x4 x5 x6 x7 RHS
x1 1 0 1/2 −4 −3/4 11/4 0 0
x2 0 1 1/2 −2 −1/4 1/4 0 0
x7 0 0 −1/2 4 3/4 −11/4 1 1
z 0 0 −29/2 98 27/4 53/4 0 0
202 / 423
Example (continued)
Table 4
BV x1 x2 x3 x4 x5 x6 x7 RHS
x3 2 0 1 −8 −3/2 11/2 0 0
x2 −1 1 0 2 1/2 −5/2 0 0
x7 1 0 0 0 0 0 1 1
z 29 0 0 −18 −15 93 0 0
203 / 423
Example (continued)
Table 5
BV x1 x2 x3 x4 x5 x6 x7 RHS
x3 −2 4 1 0 1/2 −9/2 0 0
x4 −1/2 1/2 0 1 1/4 −5/4 0 0
x7 1 0 0 0 0 0 1 1
z 20 9 0 0 −21/2 141/2 0 0
204 / 423
Example (continued)
Table 6
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 −4 8 2 0 1 −9 0 0
x4 1/2 −3/2 −1/2 1 0 1 0 0
x7 1 0 0 0 0 0 1 1
z −22 93 21 0 0 -24 0 0
205 / 423
Example (continued)
Table 7 = Table 1
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 1/2 −11/2 −5/2 9 1 0 0 0
x6 1/2 −3/2 −1/2 1 0 1 0 0
x7 1 0 0 0 0 0 1 1
z −10 57 9 24 0 0 0 0
Table 7 = Table 1 ⇒ Table 8 = Table 2 ⇒ Table 9 = Table 3, ......
206 / 423
In practice, cycling is not a major issue because it occurs rarely. It
is, however, of theoretical interest.
The following simple rule can be used to prevent cycling.
Bland’s Rule (Smallest Subscript Rule)
• Among all variables with negative reduced costs, choose the
variable with the smallest subscript as the entering variable
(i.e. choose the leftmost negative reduced cost).
• In using the Ratio Test, if two or more variables compete for
leaving the basis (i.e. with the smallest ratio), choose the
variable with the smallest subscript as the leaving variable.
Theorem 8.1
When applying Bland’s Rule the Simplex Algorithm terminates in a
finite number of iterations.
In this course you are not required to follow Bland’s rule.
207 / 423
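A sketch of Bland's Rule as a pivot-selection routine (my own illustration, using the same tableau layout as earlier: z-row last, RHS in the last column; basis[i] records which variable is basic in row i):

```python
import numpy as np

def blands_pivot(T, basis):
    """Return the pivot position (i, j) under Bland's Rule, or None if optimal."""
    z = T[-1, :-1]
    candidates = np.flatnonzero(z < 0)
    if candidates.size == 0:
        return None                                # optimal
    j = int(candidates[0])                         # smallest subscript enters
    col, rhs = T[:-1, j], T[:-1, -1]
    rows = np.flatnonzero(col > 0)
    if rows.size == 0:
        raise ValueError("problem is unbounded")
    ratios = rhs[rows] / col[rows]
    tied = rows[ratios == ratios.min()]            # rows attaining the smallest ratio
    i = int(min(tied, key=lambda r: basis[r]))     # smallest subscript leaves
    return i, j
```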
Example (continued, but using Bland’s Rule)
Table 6
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 −4 8 2 0 1 −9 0 0
x4 1/2 −3/2 −1/2 1 0 1 0 0
x7 1 0 0 0 0 0 1 1
z −22 93 21 0 0 -24 0 0
208 / 423
Example (continued, but using Bland’s Rule)
Table 7’
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 0 -4 -2 8 1 −1 0 0
x1 1 −3 −1 2 0 2 0 0
x7 0 3 1 -2 0 -2 1 1
z 0 27 -1 44 0 20 0 0
209 / 423
Example (continued, but using Bland’s Rule)
Table 8’
BV x1 x2 x3 x4 x5 x6 x7 RHS
x5 0 2 0 4 1 −5 2 2
x1 1 0 0 0 0 0 1 1
x3 0 3 1 -2 0 -2 1 1
z 0 30 0 42 0 18 1 1
Success!
210 / 423
§9 – Non-Standard Formulations
Recall that the Simplex Algorithm relies very much on the
Canonical Form, which, in turn, requires the correct Standard
Form, that is,
• the optimisation criterion is opt = max,
• all RHS coefficients are nonnegative,
• all functional constraints are “≤” type inequalities (which
ensures m slack variables in the initial tableau and so the
initial bfs), and
• all variables are nonnegative.
Not all programs can be written in Standard Form. So extra steps
are needed to convert such programs to the Canonical Form.
211 / 423
Standard Form
max z = ∑_{j=1}^n cj xj
a11x1 + a12x2 + ...+ a1nxn ≤ b1
a21x1 + a22x2 + ...+ a2nxn ≤ b2
...
am1x1 + am2x2 + ...+ amnxn ≤ bm
xj ≥ 0, j = 1, . . . , n
where all bi ≥ 0.
212 / 423
Non-Standard Formulations
There are 5 scenarios we must deal with:
• we have a minimisation problem;
• a variable is unrestricted in sign (urs);
• some part of b = (b1, . . . , bm) is negative;
• we have a “≥” constraint;
• we have a “=” constraint.
213 / 423
An Example
min z = x1 − x2
x1 + x2 ≤ −5
x1 ≥ −10
x2 = 3
x1 ≥ 0,
x2 ∈ R
Here “x2 ∈ R” indicates that x2 is unrestricted in sign.
214 / 423
§9.1 – Minimisation Problems
There are two ways to deal with minimisation problems:
1. Convert the problem to a maximisation problem by
multiplying the objective function by −1.
2. Change the Greedy Rule and the Optimality Criterion a tiny
bit.
215 / 423
Minimisation Problems – Method 1
Observe that the problem
min f(x)
is equivalent to the problem
max −f(x)
in that both have the same set of optimal solutions.
Also, if the minimum value of f(x) is zmin and the maximum
value of −f(x) is zmax, then zmin = −zmax.
So, an easy way to deal with a minimisation problem, is to multiply
the coefficients of the objective function by −1 and maximize the
new objective function.
However, you have to remember that the optimal value you obtain
is the negative of the optimal value you want (the optimal solution
itself is unchanged)!!!
216 / 423
Example – Method 1
min z = x1 − x2
x1 + x2 ≤ 5
x1 ≤ 10
x2 ≤ 3
x1, x2 ≥ 0
217 / 423
Example – Method 1
For method 1, we multiply the objective function by −1 to obtain
a maximisation problem:
max zˆ = −x1 + x2
x1 + x2 ≤ 5
x1 ≤ 10
x2 ≤ 3
x1, x2 ≥ 0
218 / 423
Example – Method 1
Initial tableau for the maximisation problem:
BV x1 x2 x3 x4 x5 RHS
x3 1 1 1 0 0 5
x4 1 0 0 1 0 10
x5 0 1 0 0 1 3
zˆ 1 −1 0 0 0 0
The entering variable is x2, and the leaving variable is x5.
219 / 423
Example – Method 1
BV x1 x2 x3 x4 x5 RHS
x3 1 0 1 0 −1 2
x4 1 0 0 1 0 10
x2 0 1 0 0 1 3
zˆ 1 0 0 0 1 3
After one pivot we see that there are no negative reduced costs.
Therefore the optimality criterion is satisfied.
220 / 423
Example – Method 1
The optimal solution is
x = (0, 3, 2, 10, 0).
The optimal solution for the original problem (ignoring the slack
variables) is
(x1, x2) = (0, 3).
The optimal value of the modified objective function is
zˆ = 3.
Thus the optimal value of the original objective function is
z = −3.
221 / 423
Minimisation Problems – Method 2
Another option is to work on the minimisation problem but change
the Simplex Algorithm a bit. This is not hard to do – we just have
to change our approach to the reduced costs.
For a minimisation LP problem,
• the Optimality Criterion is that we stop if all the reduced
costs are nonpositive,
• the Greedy Rule is that we choose the column with the most
positive reduced cost, and
• the Ratio Test remains the same.
222 / 423
Example – Method 2
min z = x1 − x2
x1 + x2 ≤ 5
x1 ≤ 10
x2 ≤ 3
x1, x2 ≥ 0
BV x1 x2 x3 x4 x5 RHS
x3 1 1 1 0 0 5
x4 1 0 0 1 0 10
x5 0 1 0 0 1 3
z −1 1 0 0 0 0
We select x2 to enter the basis and x5 to leave. Will the choices in
pivots always be the same for methods 1 and 2?
223 / 423
Example – Method 2
BV x1 x2 x3 x4 x5 RHS
x3 1 0 1 0 −1 2
x4 1 0 0 1 0 10
x2 0 1 0 0 1 3
z −1 0 0 0 −1 −3
We can see that there are no positive reduced costs, so the
minimisation optimality criterion is satisfied. The optimal solution is
(0, 3, 2, 10, 0), with optimal value z∗ = −3.
The optimal solution for the problem, ignoring the slack variables,
is
(x1, x2) = (0, 3).
224 / 423
§9.2 – Negative RHS
We convert a negative RHS to a non-negative RHS by multiplying
the respective constraint by −1.
Remember when doing so we have to reverse the inequality sign.
That is, change ≤ to ≥ and vice versa.
225 / 423
Example (revisited)
min z = x1 − x2
x1 + x2 ≤ −5
x1 ≥ −10
x2 = 3
x1 ≥ 0,
x2 ∈ R
226 / 423
Example (continued)
Convert to a maximisation problem.
And then make the RHS coefficients nonnegative.
max zˆ = −x1 + x2
−x1 − x2 ≥ 5
−x1 ≤ 10
x2 = 3
x1 ≥ 0,
x2 ∈ R
Observe that in fixing the negative RHS we have created another
problem! The first constraint is of type “≥”.
227 / 423
§9.3 – Unrestricted Variables
What do we do when some variable xj is unrestricted in sign? We
address this by using the fact that any number (positive or
negative) can be expressed as the difference of two nonnegative
numbers.
Thus, if xj is unrestricted in sign, we can introduce two additional
variables, say xj^(1) and xj^(2), and set xj = xj^(1) − xj^(2) with
xj^(1), xj^(2) ≥ 0.
Clearly, if xj^(1) > xj^(2) then xj > 0; if xj^(1) < xj^(2) then
xj < 0; and if xj^(1) = xj^(2) then xj = 0.
Thus, xj is indeed unrestricted in sign (which we write as xj ∈ R).
228 / 423
Example (continued)
Let x2 = x3 − x4, where x3, x4 ≥ 0. Then
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 ≥ 5
−x1 ≤ 10
x3 − x4 = 3
x1, x3, x4 ≥ 0
229 / 423
§9.4 – ≥ Constraints
When we convert from standard form to canonical form we
introduce slack variables to get from inequality (≤) to equality
(=). We use a similar idea for the ≥ constraints.
We convert a “≥” constraint to an “=” constraint by introducing
a surplus variable. Surplus variables are similar to slack variables,
except that they are subtracted from the left-hand side rather
than added.
230 / 423
Example (continued)
The constraint
−x1 − x3 + x4 ≥ 5
can be written as
−x1 − x3 + x4 − x5 = 5
where
x5 ≥ 0
is the surplus variable.
Now we obtain:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 = 5
−x1 ≤ 10
x3 − x4 = 3
x1, x3, x4, x5 ≥ 0
231 / 423
§9.5 – Equality Constraints
When we have equality constraints the strategy that we will adopt
involves adding yet more variables to our program: artificial
variables.
We call the new variables ‘artificial’ because they are used
temporarily, and ultimately will become zero if our original program
is feasible.
232 / 423
Example (continued)
−x1 − x3 + x4 − x5 = 5
can be rewritten as
−x1 − x3 + x4 − x5 + y1 = 5
where y1 ≥ 0 is an artificial variable.
• The second equation is equivalent to the first if and only if y1
is equal to zero.
• We try to force this to happen by getting the artificial variable
out of the basis.
Similarly, by introducing an artificial variable y2 ≥ 0, x3 − x4 = 3
can be written as
x3 − x4 + y2 = 3
233 / 423
Example (continued)
We now have:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 + y1 = 5
−x1 ≤ 10
x3 − x4 + y2 = 3
x1, x3, x4, x5, y1, y2 ≥ 0
234 / 423
Example (continued)
As usual, for a “≤” type constraint, we introduce a slack variable.
Introducing a slack variable x6 for the second constraint, we obtain:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 + y1 = 5
−x1 + x6 = 10
x3 − x4 + y2 = 3
x1, x3, x4, x5, x6, y1, y2 ≥ 0
We are now in canonical form with basic variables y1, x6, y2.
235 / 423
§9.6 – Summary: procedure for transforming any LP
problem to canonical form
1. For a minimisation problem, convert it to a maximisation
problem by multiplying the objective function by −1
2. For each negative RHS coefficient bi < 0, multiply the
corresponding functional constraint by −1 (and change the
direction of the inequality)
3. For each variable xj which is unrestricted in sign, introduce
new variables xj^(1), xj^(2) ≥ 0 and replace xj by xj^(1) − xj^(2)
4. For each “≥” functional constraint, introduce a surplus
variable so as to obtain a “=” constraint
5. For each “=” functional constraint, introduce an artificial
variable
6. For each “≤” functional constraint, introduce a slack variable
7. Now the problem is in canonical form whose basic variables
are the slack and artificial variables
236 / 423
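The whole procedure mechanises naturally. Below is a sketch of steps 1-6 in Python with numpy (my own encoding, not course code): the LP is passed as a sense ('min'/'max'), c, A, one relation per row, b, and the indices of unrestricted variables; it returns data encoding max cx, Ax = b, x ≥ 0, together with the artificial columns.

```python
import numpy as np

def to_canonical(sense, c, A, rel, b, urs=()):
    """Return (c, A, b, artificial_cols) encoding max c.x, Ax = b, x >= 0."""
    c = np.asarray(c, float).copy()
    A = np.asarray(A, float).copy()
    b = np.asarray(b, float).copy()
    rel = list(rel)
    if sense == 'min':                                 # step 1
        c = -c
    for i in np.flatnonzero(b < 0):                    # step 2
        A[i], b[i] = -A[i], -b[i]
        rel[i] = {'<=': '>=', '>=': '<=', '=': '='}[rel[i]]
    for j in urs:                                      # step 3: xj = xj^(1) - xj^(2)
        A = np.hstack([A, -A[:, [j]]])
        c = np.append(c, -c[j])
    m, artificial = len(b), []
    def add_column(i, sign, is_artificial=False):
        nonlocal A, c
        col = np.zeros((m, 1)); col[i, 0] = sign
        A = np.hstack([A, col]); c = np.append(c, 0.0)
        if is_artificial:
            artificial.append(A.shape[1] - 1)
    for i in range(m):
        if rel[i] == '>=':
            add_column(i, -1.0)                        # step 4: surplus
            add_column(i, +1.0, is_artificial=True)    # step 5: artificial
        elif rel[i] == '=':
            add_column(i, +1.0, is_artificial=True)    # step 5: artificial
        else:
            add_column(i, +1.0)                        # step 6: slack
    return c, A, b, artificial

# The example of this section (min x1 - x2; x1 + x2 <= -5; x1 >= -10;
# x2 = 3; x1 >= 0; x2 unrestricted, i.e. urs=[1]):
c, A, b, art = to_canonical('min', [1, -1], [[1, 1], [1, 0], [0, 1]],
                            ['<=', '>=', '='], [-5, -10, 3], urs=[1])
```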
Example (top to toe)
Original problem:
min z = x1 − x2
x1 + x2 ≤ −5
x1 ≥ −10
x2 = 3
x1 ≥ 0,
x2 ∈ R
237 / 423
Example (top to toe)
Convert to a maximisation problem:
max zˆ = −x1 + x2
x1 + x2 ≤ −5
x1 ≥ −10
x2 = 3
x1 ≥ 0,
x2 ∈ R
238 / 423
Example (top to toe)
Make all RHS coefficients nonnegative:
max zˆ = −x1 + x2
−x1 − x2 ≥ 5
−x1 ≤ 10
x2 = 3
x1 ≥ 0,
x2 ∈ R
239 / 423
Example (top to toe)
Introduce two variables x3, x4 ≥ 0 and replace the urs variable x2
by x3 − x4:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 ≥ 5
−x1 ≤ 10
x3 − x4 = 3
x1, x3, x4 ≥ 0
240 / 423
Example (top to toe)
Introduce the surplus variable x5 for the first functional constraint:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 = 5
−x1 ≤ 10
x3 − x4 = 3
x1, x3, x4, x5 ≥ 0
241 / 423
Example (top to toe)
Introduce the artificial variables y1, y2 for the two “=” constraints:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 + y1 = 5
−x1 ≤ 10
x3 − x4 + y2 = 3
x1, x3, x4, x5, y1, y2 ≥ 0
242 / 423
Example (top to toe)
Introduce the slack variable x6 for the “≤” constraint:
max zˆ = −x1 + x3 − x4
−x1 − x3 + x4 − x5 + y1 = 5
−x1 + x6 = 10
x3 − x4 + y2 = 3
x1, x3, x4, x5, x6, y1, y2 ≥ 0
We are now in canonical form with basic variables y1, x6, y2.
Our (initial) basic feasible solution here is
(x1, x3, x4, x5, x6, y1, y2) = (0, 0, 0, 0, 10, 5, 3)
243 / 423
Example (top to toe)
Observe that the initial basis y1, x6, y2 consists of:
• slack variables that arise from ‘less than or equal to
constraints’, and
• artificial variables that arise from ‘greater than or equal to
constraints’ or equality constraints.
Ignoring the values of the slack, surplus and artificial variables in
(0, 0, 0, 0, 10, 5, 3), we obtain (x1, x3, x4) = (0, 0, 0).
But is this a basic feasible solution to the original problem? No!
Why?
244 / 423
Initialization Revisited
A solution to the transformed system gives a feasible solution to
the original system if and only if all artificial variables are equal to
zero.
How can we ensure that the artificial variable stays out of the
basis?
Two methods can be used to achieve this:
• The two-phase Simplex Algorithm
• The Big M method (which is essentially equivalent, but not
covered in this subject)
245 / 423
§9.7 – The Two-phase Method
The two-phase method consists of
• Phase 1, which drives out the artificial variables by finding a
basic feasible solution for which the artificial variables are
non-basic and have value zero, and
• Phase 2, which starts from this basic feasible solution
(ignoring the artificial variables) and produces an optimal
solution.
246 / 423
Phase 1
Let
w = sum of the artificial variables
and
w∗ = minimum value of w subject to the constraints.
Because the artificial variables must satisfy the nonnegativity
constraint, w∗ = 0 if and only if all the artificial variables are equal
to zero. Thus,
• the goal in Phase 1 is to minimize w.
247 / 423
Example
max z = −3x1 − 5x2
x1 + x4 = 4
2x2 + y1 = 12
3x1 + 2x2 − x3 + y2 = 18
x1, x2, x3, x4, y1, y2 ≥ 0
where y1 and y2 are artificial variables.
248 / 423
Phase 1
min w = y1 + y2
x1 + x4 = 4
2x2 + y1 = 12
3x1 + 2x2 − x3 + y2 = 18
x1, x2, x3, x4, y1, y2 ≥ 0
249 / 423
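Phase 1 of this example can also be posed directly to a solver as the LP "minimise w = y1 + y2 over the equality system". A sketch with scipy (an illustration, not course code; variable order x1, x2, x3, x4, y1, y2):

```python
from scipy.optimize import linprog

c = [0, 0, 0, 0, 1, 1]                  # w = y1 + y2
A_eq = [[1, 0, 0, 1, 0, 0],
        [0, 2, 0, 0, 1, 0],
        [3, 2, -1, 0, 0, 1]]
b_eq = [4, 12, 18]
res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # all variables >= 0 by default
print(res.fun)                          # w* = 0: the original problem is feasible
```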
We use the ‘minimise’ version of the simplex method to achieve
this.
BV x1 x2 x3 x4 y1 y2 RHS
x4 1 0 0 1 0 0 4
y1 0 2 0 0 1 0 12
y2 3 2 −1 0 0 1 18
w 0 0 0 0 −1 −1 0
Note that this tableau is not in canonical form, because there are
non-zero coefficients in the w-row corresponding to the basic
variables. To restore the canonical form, we add Row 2 and Row 3
to the w-row.
250 / 423
New Tableau in Canonical Form
BV x1 x2 x3 x4 y1 y2 RHS
x4 1 0 0 1 0 0 4
y1 0 2 0 0 1 0 12
y2 3 2 −1 0 0 1 18
w 3 4 −1 0 0 0 30
251 / 423
Tableau after pivoting on (i = 2, j = 2)
BV x1 x2 x3 x4 y1 y2 RHS
x4 1 0 0 1 0 0 4
x2 0 1 0 0 1/2 0 6
y2 3 0 −1 0 −1 1 6
w 3 0 −1 0 −2 0 6
252 / 423
Tableau after pivoting on (i = 3, j = 1)
BV x1 x2 x3 x4 y1 y2 RHS
x4 0 0 1/3 1 1/3 −1/3 2
x2 0 1 0 0 1/2 0 6
x1 1 0 −1/3 0 −1/3 1/3 2
w 0 0 0 0 −1 −1 0
• All the artificial variables are out of the basis.
• The minimum value of w is w∗ = 0.
• This is the end of Phase 1.
• Note that we now have a basic feasible solution for the
original problem, obtained by ignoring the y1 and y2-columns.
253 / 423
Phase 2
We now put back the original objective function
z = −3x1 − 5x2
Tableau for Phase 2.
BV x1 x2 x3 x4 RHS
x4 0 0 1/3 1 2
x2 0 1 0 0 6
x1 1 0 −1/3 0 2
z 3 5 0 0 0
Note that this tableau is not in canonical form.
To restore canonical form we add (−3)× Row 3 and (−5)× Row 2
to the z-row.
254 / 423
After these row operations we obtain:
Tableau for Phase 2 in Canonical Form
BV x1 x2 x3 x4 RHS
x4 0 0 1/3 1 2
x2 0 1 0 0 6
x1 1 0 −1/3 0 2
z 0 0 1 0 −36
This tableau satisfies the Optimality Criterion. So the
corresponding solution is already optimal.
The optimal solution is
x = (2, 6, 0, 2)
and the optimal value of the objective function is
z = −36.
255 / 423
The example above is atypical in the sense that no iteration is
needed in Phase 2.
In general, if the initial tableau (in canonical form) for Phase 2
does not meet the Optimality Criterion, then we need to run the
Simplex Algorithm beginning with this tableau until we find an
optimal solution for the original problem or discover that the
problem is unbounded.
256 / 423
Another example
max z = 80x1 + 60x2 + 42x3
2x1 + 3x2 + x3 ≤ 12
5x1 + 6x2 + 3x3 ≥ 15
2x1 − 3x2 + x3 = 8
x1, x2, x3 ≥ 0
257 / 423
Another example (continued)
This is equivalent to
max z = 80x1 + 60x2 + 42x3
2x1 + 3x2 + x3 + x4 = 12
5x1 + 6x2 + 3x3 − x5 + y1 = 15
2x1 − 3x2 + x3 + y2 = 8
x1, x2, x3, x4, x5, y1, y2 ≥ 0
where x4 is a slack variable, x5 is a surplus variable and y1 and y2
are artificial variables.
258 / 423
Another example (continued)
Phase 1
min w = y1 + y2
2x1 + 3x2 + x3 + x4 = 12
5x1 + 6x2 + 3x3 − x5 + y1 = 15
2x1 − 3x2 + x3 + y2 = 8
x1, x2, x3, x4, x5, y1, y2 ≥ 0
259 / 423
Another example (continued)
Phase 1
Initial tableau (not in canonical form)
x1 x2 x3 x4 x5 y1 y2 RHS
x4 2 3 1 1 0 0 0 12
y1 5 6 3 0 −1 1 0 15
y2 2 −3 1 0 0 0 1 8
w 0 0 0 0 0 −1 −1 0
Restore the canonical form by adding Rows 2 and 3 to the w-row.
260 / 423
Another example (continued)
Phase 1
Initial tableau in canonical form
x1 x2 x3 x4 x5 y1 y2 RHS
x4 2 3 1 1 0 0 0 12
y1 5 6 3 0 −1 1 0 15
y2 2 −3 1 0 0 0 1 8
w 7 3 4 0 −1 0 0 23
261 / 423
Another example (continued)
Phase 1
Tableau after pivoting on (i = 2, j = 1)
x1 x2 x3 x4 x5 y1 y2 RHS
x4 0 3/5 −1/5 1 2/5 −2/5 0 6
x1 1 6/5 3/5 0 −1/5 1/5 0 3
y2 0 −27/5 −1/5 0 2/5 −2/5 1 2
w 0 −27/5 −1/5 0 2/5 −7/5 0 2
262 / 423
Another example (continued)
Phase 1
Tableau after pivoting on (i = 3, j = 5)
x1 x2 x3 x4 x5 y1 y2 RHS
x4 0 6 0 1 0 0 −1 4
x1 1 −3/2 1/2 0 0 0 1/2 4
x5 0 −27/2 −1/2 0 1 −1 5/2 5
w 0 0 0 0 0 −1 −1 0
The optimal value is w∗ = 0, and the artificial variables y1 and y2
have been driven out of the basis.
This is the end of Phase 1.
263 / 423
Another example (continued)
Phase 2
Initial tableau for Phase 2 (not in canonical form)
x1 x2 x3 x4 x5 RHS
x4 0 6 0 1 0 4
x1 1 −3/2 1/2 0 0 4
x5 0 −27/2 −1/2 0 1 5
z −80 −60 −42 0 0 0
Restore the canonical form by adding 80 times Row 2 to the z-row.
264 / 423
Another example (continued)
Phase 2
Initial tableau for Phase 2 in canonical form
x1 x2 x3 x4 x5 RHS
x4 0 6 0 1 0 4
x1 1 −3/2 1/2 0 0 4
x5 0 −27/2 −1/2 0 1 5
z 0 −180 −2 0 0 320
265 / 423
Another example (continued)
Phase 2
Tableau after pivoting on (i = 1, j = 2)
x1 x2 x3 x4 x5 RHS
x2 0 1 0 1/6 0 2/3
x1 1 0 1/2 1/4 0 5
x5 0 0 −1/2 9/4 1 14
z 0 0 −2 30 0 440
266 / 423
Another example (continued)
Phase 2
Tableau after pivoting on (i = 2, j = 3)
x1 x2 x3 x4 x5 RHS
x2 0 1 0 1/6 0 2/3
x3 2 0 1 1/2 0 10
x5 1 0 0 5/2 1 19
z 4 0 0 31 0 460
This is the end of Phase 2.
Report:
The optimal solution is (x1, x2, x3, x4, x5) = (0, 2/3, 10, 0, 19)
The optimal value of the objective function is z∗ = 460.
The optimal solution to the original problem is
(x1, x2, x3) = (0, 2/3, 10) with optimal value z∗ = 460.
267 / 423
Complications of the two-phase method
Let w∗ be the optimal value of w obtained in Phase 1.
• Case 1: If w∗ = 0 and all the artificial variables are non-basic,
then we have a basic feasible solution to the original problem.
Continue with Phase 2.
• Case 2: If w∗ = 0 but at least one artificial variable is in the
basis, then there are two possible cases
◦ Case 2a: we can use pivot operations to take all artificial
variables out of the basis and then continue with Phase 2.
◦ Case 2b: the constraint corresponding to the zero artificial
variable is redundant.
• Case 3: If w∗ > 0, the problem is infeasible (that is, the
feasible region is empty).
268 / 423
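Case 3 is exactly what happens with the "top to toe" example of Section 9: there x1 ≥ 0 and x2 = 3 force x1 + x2 ≥ 3, which contradicts x1 + x2 ≤ −5, so Phase 1 would end with w∗ > 0. A solver confirms this (a sketch, not course code):

```python
from scipy.optimize import linprog

res = linprog(c=[1, -1],                               # min x1 - x2
              A_ub=[[1, 1], [-1, 0]], b_ub=[-5, 10],   # x1 + x2 <= -5, x1 >= -10
              A_eq=[[0, 1]], b_eq=[3],                 # x2 = 3
              bounds=[(0, None), (None, None)])        # x1 >= 0, x2 unrestricted
print(res.status)                                      # 2 = infeasible
```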
Case 2a: An example
Suppose we get the following tableau in Phase 1, where y1 and y2
are artificial variables.
x1 x2 x3 x4 y1 y2 RHS
x4 0 11 6 1 0 0 2
y1 0 −1/2 −1 0 1 −1 0
x1 1 −1 −1 0 0 1 3
w 0 1/2 1 0 0 2 0
We have w∗ = 0 but y1 remains as a basic variable. Note that
there exist nonzero entries in the y1-row and the columns of
non-artificial variables, i.e. −1/2 and −1. So we can pivot on one
of these entries to drive y1 out of the basis.
269 / 423
Case 2a: An example (continued)
Pivoting on −1/2 yields
x1 x2 x3 x4 y1 y2 RHS
x4 0 0 −16 1 22 −22 2
x2 0 1 2 0 −2 2 0
x1 1 0 1 0 −2 3 3
w 0 0 0 0 1 1 0
(You can choose to pivot on −1 in the previous tableau.)
Now we have w∗ = 0 and all artificial variables are non-basic. So
we can proceed to Phase 2 just as in Case 1.
270 / 423
Case 2b: An example
Suppose we get the following tableau in Phase 1, where y1, y2 and
y3 are artificial variables.
x1 x2 x3 x4 y1 y2 y3 RHS
x2 0 1 −2 2 3 0 −1 4
y2 0 0 0 0 1 1 −1 0
x1 1 0 6 −10 −11 0 5 28
w 0 0 0 0 1 0 3 0
We have w∗ = 0 but y2 remains as a basic variable. Unlike the
previous example, all entries in the y2-row and the columns of
non-artificial variables are zero. So we cannot pivot on such entries
to drive y2 out of the basis.
The constraint containing y2 is redundant. So we can ignore the
y2-row and continue with Phase 2.
271 / 423
§9.8 – Summary: procedure for solving LP problems by
using the Simplex Algorithm and the Two-phase Method
1. If the given LP problem is not in canonical form, convert it to
canonical form by following the “procedure for transforming
any LP problem to canonical form” (you may need to
introduce slack variables, surplus variables and/or artificial
variables).
2. If no artificial variable is introduced in the transformation to
canonical form, apply the Simplex Algorithm to the canonical
form directly.
3. If artificial variables are introduced, apply the two-phase
Method.
272 / 423
§10 – Duality Theory
Duality is a very elegant and important concept within the field of
operations research, just as in many other branches of
mathematics.
In operations research, the theory of duality was first developed in
relation to linear programming, but it has many applications, and
perhaps even a more natural and intuitive interpretation, in several
related areas such as nonlinear programming, network optimisation
and game theory.
273 / 423
Every linear program has associated with it a related linear
program called its dual. When we are talking about the dual we
call the original problem the primal.
274 / 423
§10.1 – Motivating Examples
Example 10.1
A small company is being set up to engage in the production of
office furniture. The company proposes to manufacture tables,
desks and chairs. The production of a table requires 8 kgs of wood
and 5 kgs of metal and is sold for $80, a desk uses 6 kgs of wood
and 4 kgs of metal and is sold for $60, and a chair requires 4 kgs
of both metal and wood and is sold for $50.
The company CEO has engaged you to determine the best
production strategy for this company, given that only 100 kgs of
wood and 60 kgs of metal are available each week.
275 / 423
Example (continued)
Let x1 be the number of tables manufactured each week, x2 be the
number of desks manufactured each week and x3 be the number
of chairs manufactured each week. Note that x1, x2 and x3 have
to be nonnegative.
Assuming we can immediately sell the furniture we produce, the
weekly income in dollars is 80x1 + 60x2 + 50x3.
The values of x1, x2 and x3 are constrained by the availability of
wood and metal.
The number of kgs of wood used is 8x1 + 6x2 + 4x3. This has to
be less than or equal to 100. The number of kgs of metal used is
5x1 + 4x2 + 4x3. This has to be less than or equal to 60.
276 / 423
Example (continued)
This leads to the linear program
max z = 80x1 + 60x2 + 50x3
subject to
8x1 + 6x2 + 4x3 ≤ 100
5x1 + 4x2 + 4x3 ≤ 60
x1, x2, x3 ≥ 0
(x1, x2, x3 ∈ Z).
277 / 423
Example (continued)
Now let’s extend the situation further. Assume that there is a
much bigger company which has been the lone producer of this
type of furniture for many years. Its management does not
appreciate the competition from the new company. This bigger
company has decided to attempt to put the new company out of
business by buying up the new company’s resources of wood and
metal.
The problem for the large company is to decide on the prices that
it should offer for the wood and metal.
How should the company make this decision?
278 / 423
Example (continued)
Let y1 be the price, in dollars, offered for a kg of wood and y2 be
the price, in dollars, offered for a kg of metal. Clearly y1 and y2
have to be nonnegative.
Then the total expense of the buyout is 100y1 + 60y2. The
company obviously wants to minimise this.
What are the constraints on y1 and y2?
279 / 423
Example (continued)
It takes 8 kgs of wood and 5 kgs of metal to make a table. It
would cost the large company 8y1 + 5y2 to buy this 8 kgs of wood
and 5 kgs of metal.
On the other hand, the small company makes $80 from selling the
table. If 8y1 + 5y2 is less than this amount, then the small
company has the potential to come back to the resource supplier
with a better offer, even if it manufactures only tables. Thus we
need 8y1 + 5y2 ≥ 80.
Similarly we have 6y1 + 4y2 ≥ 60 and 4y1 + 4y2 ≥ 50.
280 / 423
Example (continued)
This leads to
The Dual Problem
min w = 100y1 + 60y2
subject to
8y1 + 5y2 ≥ 80
6y1 + 4y2 ≥ 60
4y1 + 4y2 ≥ 50
y1, y2 ≥ 0.
281 / 423
Example 10.2
An individual has a choice of two types of food to eat, meat and
potatoes, each offering varying degrees of nutritional benefit. He
has been warned by his doctor that he must receive at least 400
units of protein, 200 units of carbohydrates and 100 units of fat
from his daily diet.
Given that a kg of steak costs $10 and provides 80 units of protein,
20 units of carbohydrates and 30 units of fat, and that a kg of
potatoes costs $2 and provides 40 units of protein, 50 units of
carbohydrates and 20 units of fat, he would like to find the
minimum cost diet which satisfies his nutritional requirements.
282 / 423
Example (continued)
Let x1 be the number of kgs of steak that the man buys and x2 be
the number of kgs of potatoes that he buys. These have to be
nonnegative.
The man wants to minimise 10x1 + 2x2, subject to
80x1 + 40x2 ≥ 400
20x1 + 50x2 ≥ 200
and
30x1 + 20x2 ≥ 100.
283 / 423
Example (continued)
The Primal Problem
min z = 10x1 + 2x2
subject to
80x1 + 40x2 ≥ 400
20x1 + 50x2 ≥ 200
30x1 + 20x2 ≥ 100
x1, x2 ≥ 0.
284 / 423
Example (continued)
A chemical company hopes to attract this man away from his
present diet by offering him synthetic nutrients in the form of pills.
The problem for the company is to determine the prices per unit
for their synthetic nutrients.
The company wants to generate the highest possible revenue from
the transaction, but needs to keep the prices low enough so that
the man can’t satisfy his requirements from the natural foods in a
cheaper way.
285 / 423
Example (continued)
Let y1 be the price, in dollars, offered for a unit of protein, y2 be
the price, in dollars, offered for a unit of carbohydrate, and y3 be
the price, in dollars, offered for a unit of fat.
Then the man will have to pay 400y1 + 200y2 + 100y3 to satisfy
his dietary requirements. The company wants to maximise this.
If 80y1 + 20y2 + 30y3 > 10, then the man could do better by
buying just steak, and if 40y1 + 50y2 + 20y3 > 2, then the man
could do better by buying just potatoes. Thus we need
80y1 + 20y2 + 30y3 ≤ 10
and
40y1 + 50y2 + 20y3 ≤ 2.
286 / 423
Example (continued)
This leads to
The Dual Problem
max w = 400y1 + 200y2 + 100y3
subject to
80y1 + 20y2 + 30y3 ≤ 10
40y1 + 50y2 + 20y3 ≤ 2
y1, y2, y3 ≥ 0.
287 / 423
Comments
Each of the two examples describes some kind of competition
between two decision makers. The primal/dual relationship is often
interpreted in the context of economics and game theory.
288 / 423
§10.2 – The Dual of a Linear Programming Problem in
Standard Form
In this section we formalise our notions about the relationship
between the primal and dual versions of linear programs.
We start by defining the relationship between a primal linear
program in standard form and its dual.
289 / 423
A Primal Problem
max z = ∑_{j=1}^n cj xj
a11x1 + a12x2 + ...+ a1nxn ≤ b1
a21x1 + a22x2 + ...+ a2nxn ≤ b2
...
am1x1 + am2x2 + ...+ amnxn ≤ bm
x1, x2, ..., xn ≥ 0.
Note: we do not require all bi to be nonnegative.
290 / 423
The Corresponding Dual Problem
min w = ∑_{i=1}^m bi yi
a11y1 + a21y2 + ...+ am1ym ≥ c1
a12y1 + a22y2 + ...+ am2ym ≥ c2
...
a1ny1 + a2ny2 + ...+ amnym ≥ cn
y1, y2, ..., ym ≥ 0.
291 / 423
It is convenient to express a linear program and its dual using
matrix notation.
Primal LP
max z = cx
Ax ≤ b
x ≥ 0
Dual LP
min w = yb
yA ≥ c
y ≥ 0
A = (aij)m×n ; b = (b1, . . . , bm)T ; c = (c1, . . . , cn); x = (x1, . . . , xn)T ; y = (y1, . . . , ym)
Once again, in this formulation we don’t impose the restriction
that b has to be nonnegative.
292 / 423
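In this matrix form the dual is just another LP built from the same data, so both problems can be posed to a solver mechanically. A sketch with scipy (my own illustration, not course code):

```python
import numpy as np
from scipy.optimize import linprog

def solve_primal_and_dual(c, A, b):
    """max cx, Ax <= b, x >= 0  and  min yb, yA >= c, y >= 0."""
    A = np.asarray(A, float)
    c = np.asarray(c, float)
    b = np.asarray(b, float)
    primal = linprog(-c, A_ub=A, b_ub=b)      # linprog minimises, so negate c
    dual = linprog(b, A_ub=-A.T, b_ub=-c)     # yA >= c rewritten as -A^T y <= -c
    return -primal.fun, dual.fun

# The furniture problem of Example 10.1:
print(solve_primal_and_dual([80, 60, 50], [[8, 6, 4], [5, 4, 4]], [100, 60]))
# both values come out as 960; Section 10.4 explains why they must agree
```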
Example 10.3
The Primal Problem
max z = 5x1 + 3x2 − 8x3 + 12x5
3x1 − 8x2 + 9x4 − 15x5 ≤ 20
18x1 + 5x2 − 8x3 + 4x4 + 12x5 ≤ 30
x1, x2, . . . , x5 ≥ 0.
293 / 423
Example (continued)
The Dual Problem
min w = 20y1 + 30y2
3y1 + 18y2 ≥ 5
−8y1 + 5y2 ≥ 3
−8y2 ≥ −8
9y1 + 4y2 ≥ 0
−15y1 + 12y2 ≥ 12
y1, y2 ≥ 0.
294 / 423
Example 10.4
The Primal Problem
max z = 4x1 + 10x2 − 9x3
5x1 − 18x2 + 5x3 ≤ 15
−8x1 + 12x2 − 8x3 ≤ 8
12x1 − 4x2 + 8x3 ≤ 10
2x1 − 5x3 ≤ 5
x1, x2, x3 ≥ 0.
295 / 423
Example (continued)
The Dual Problem
min w = 15y1 + 8y2 + 10y3 + 5y4
5y1 − 8y2 + 12y3 + 2y4 ≥ 4
−18y1 + 12y2 − 4y3 ≥ 10
5y1 − 8y2 + 8y3 − 5y4 ≥ −9
y1, y2, y3, y4 ≥ 0.
296 / 423
The Dual of the Dual
Let us think about constructing the dual of the dual of an LP
problem in standard form. That is, what is the dual of
min w = yb
subject to
yA ≥ c
y ≥ 0
297 / 423
To find the second dual, we first transform to equivalent
non-standard form:
max −w = −yb
−yA ≤ −c
y ≥ 0
i.e.
max −w = −bT yT
−AT yT ≤ −cT
yT ≥ 0
Now we take this as the primal problem in standard form.
298 / 423
We can easily apply our rules for constructing the dual to obtain:
min −z = −xT cT
−xTAT ≥ −bT
xT ≥ 0
i.e.
min −z = −cx
−Ax ≥ −b
x ≥ 0
Notice that this is the same as
max
x
z = cx
Ax ≤ b
x ≥ 0
The dual of the dual is the primal.
299 / 423
§10.3 – The Dual of a Linear Programming Problem in
Non-standard Form
What happens when the primal problem is not in standard form?
That is, what happens if we have:
• ≥ constraints;
• equality constraints;
• a min objective and/or
• unrestricted variables?
There are some steps we need to take to get to the dual (although
when you have practised this a bit you will be able to skim over
the steps).
301 / 423
Our objective in this subsection is to ‘standardise’ the primal so
that we can obtain the dual easily.
We are not interested in solving the primal or the dual at this stage.
The approach here is similar to the one that we used when we
dealt with non-standard formulations in the context of the simplex
method.
However, as we are not trying to solve the problem (at this stage)
we are not interested in slack, surplus or artificial variables.
We also allow the RHS to be negative.
302 / 423
≥ constraints
≥ constraints are easy to deal with, as we allow the RHS to be
negative (at this stage).
So we can turn these constraints into ≤ constraints simply by
multiplying through by −1 (which flips the inequality).
∑_{j=1}^n aij xj ≥ bi,
can be written as
−∑_{j=1}^n aij xj ≤ −bi.
303 / 423
Equality Constraints
Equality constraints require a few extra steps. First we observe
that every equality constraint can be expressed as the intersection
of two inequality constraints.
So
∑_{j=1}^n aij xj = bi
is rewritten as
∑_{j=1}^n aij xj ≤ bi
and
∑_{j=1}^n aij xj ≥ bi.
304 / 423
Notice how we gained a “≥” constraint.
This, in turn, can be rewritten as
∑_{j=1}^n aij xj ≤ bi
and
−∑_{j=1}^n aij xj ≤ −bi.
So we end up with two “≤” constraints and we are happy.
305 / 423
Unrestricted Variables and min Objective
We replace the unrestricted variable by the difference of two
non-negative variables.
For example, we replace x3 ∈ R with x3 = x3^(1) − x3^(2) where
x3^(1), x3^(2) ≥ 0.
We replace min f(x) with max−f(x).
306 / 423
Example 10.5
Primal (non-standard)
max z = x1 + x2 + x3
2x2 − x3 ≥ 4
x1 − 3x2 + 4x3 = 5
x1 − 2x2 ≤ 3
x1, x2 ≥ 0, x3 ∈ R
307 / 423
Example (continued)
Primal (equivalent non-standard form)
max z = x1 + x2 + x3^(1) − x3^(2)
−2x2 + x3^(1) − x3^(2) ≤ −4
x1 − 3x2 + 4x3^(1) − 4x3^(2) ≤ 5
−x1 + 3x2 − 4x3^(1) + 4x3^(2) ≤ −5
x1 − 2x2 ≤ 3
x1, x2, x3^(1), x3^(2) ≥ 0
308 / 423
Example (continued)
From here, to obtain the dual,
• we create m dual variables y1, . . . , ym;
• we switch the objective function and RHS coefficients;
• the new objective is labelled w;
• the max becomes a min;
• we transpose the A coefficient matrix to obtain n constraints;
• all the inequalities flip.
309 / 423
Example (continued)
Dual (equivalent non-standard)
min w = −4y1 + 5y2^(1) − 5y2^(2) + 3y3
y2^(1) − y2^(2) + y3 ≥ 1
−2y1 − 3y2^(1) + 3y2^(2) − 2y3 ≥ 1
y1 + 4y2^(1) − 4y2^(2) ≥ 1
−y1 − 4y2^(1) + 4y2^(2) ≥ −1
y1, y2^(1), y2^(2), y3 ≥ 0
310 / 423
Example (continued)
Now we can reverse some of the earlier steps (recombining y2^(1)
and y2^(2) into a single unrestricted variable y2, and the pair of
opposite inequalities into an equality) to obtain the dual in a more
compact non-standard form.
Dual (non-standard)
min w = −4y1 + 5y2 + 3y3
y2 + y3 ≥ 1
−2y1 − 3y2 − 2y3 ≥ 1
y1 + 4y2 = 1
y1, y3 ≥ 0, y2 ∈ R
311 / 423
Streamlining the Conversion
• An equality constraint in the primal LP generates a dual
variable that is unrestricted in sign.
• An unrestricted variable in the primal LP generates an
equality constraint in the dual LP.
Primal LP (max)          Dual LP (min)
Constraint i             Variable i
  “≤” form                 yi ≥ 0
  “=” form                 yi ∈ R
Variable j               Constraint j
  xj ≥ 0                   “≥” form
  xj ∈ R                   “=” form
Costs                    Resources
Resources                Costs
312 / 423
Primal-dual in Matrix Form
ai: ith row of A = (aij)m×n, i = 1, . . . ,m.
aj : jth column of A = (aij)m×n, j = 1, . . . , n.
c : 1× n; x : n× 1; b : m× 1; y : 1×m
Primal
max z = cx
aix ≤ bi, i ∈ M
aix = bi, i ∈ M̄
xj ≥ 0, j ∈ N
xj ∈ R, j ∈ N̄
Dual
min w = yb
yaj ≥ cj , j ∈ N
yaj = cj , j ∈ N̄
yi ≥ 0, i ∈ M
yi ∈ R, i ∈ M̄
M, M̄ : row-index sets which partition {1, . . . ,m}
N, N̄ : column-index sets which partition {1, . . . , n}
313 / 423
The Dual of the Dual (General)
Theorem 10.1
Under the setting above, if a linear program (D) is the dual of
another linear program (P), then the dual linear program of (D) is
(P).
In other words, the dual of the dual is the primal.
Proof: Exercise.
314 / 423
Example 10.6
Please note that this example is an infeasible LP problem – we are
using it purely to demonstrate the conversion from primal to dual.
Primal (non-standard)
max z = 5x1 + 4x2
3x1 − 8x2 ≥ −6
x1 + 6x2 = −5
8x1 + 3x2 = 10
x2 ≥ 0, x1 ∈ R
315 / 423
Example (continued)
We first convert ≥ constraints to ≤ constraints.
Primal (equivalent non-standard form)
max z = 5x1 + 4x2
−3x1 + 8x2 ≤ 6
x1 + 6x2 = −5
8x1 + 3x2 = 10
x2 ≥ 0, x1 ∈ R .
316 / 423
Example (continued)
Dual (non-standard form)
min w = 6y1 − 5y2 + 10y3
−3y1 + y2 + 8y3 = 5
8y1 + 6y2 + 3y3 ≥ 4
y1 ≥ 0, y2, y3 ∈ R
317 / 423
§10.4 – The Duality Theorems
Next we look at some interesting and important relationships
between the primal and the dual.
These relationships can provide us with information about our
problem while we are trying to solve it.
They are also important for numerous applications of linear
programming in many domains.
318 / 423
Weak Duality
Theorem 10.2 (Weak Duality Theorem)
Consider the following primal-dual pair:
Primal
max z = cx
Ax ≤ b
x ≥ 0
Dual
min w = yb
yA ≥ c
y ≥ 0
If x is a feasible solution to the primal problem and y is a feasible
solution to the dual problem, then
cx ≤ yb.
319 / 423
Proof
Suppose x is a feasible solution to the primal problem and y is a
feasible solution to the dual problem. Then
Ax ≤ b, x ≥ 0
yA ≥ c, y ≥ 0
Since y ≥ 0, multiplying both sides of Ax ≤ b by y gives
yAx ≤ yb.
Since x ≥ 0, multiplying both sides of yA ≥ c by x yields
yAx ≥ cx.
Combining these two inequalities, we obtain
cx ≤ yb
as required.
320 / 423
Theorem 10.2 means that the value of the primal objective
function at any primal feasible solution is bounded above by the
value of the dual objective function at any dual feasible solution.
In particular,
• the optimal objective function value for the primal LP is
bounded above by the objective value of any feasible solution
of the dual LP;
• the optimal objective function value for the dual LP is
bounded below by the objective value of any feasible solution
of the primal LP.
321 / 423
Corollary 10.1
If the objective function of the primal is unbounded on the feasible
region, then the dual has no feasible solutions.
Corollary 10.2
If the objective function of the dual is unbounded on the feasible
region, then the primal has no feasible solutions.
It is possible that both the primal and the dual problems have no
feasible solutions.
322 / 423
Corollary 10.3
Let x∗ be a feasible solution to the primal and y∗ be a feasible
solution to the dual. If
cx∗ = y∗b,
then x∗ must be an optimal solution to the primal and y∗ must be
an optimal solution to the dual.
323 / 423
Proof:
From the Weak Duality Theorem we have, for any feasible solution
x of the primal and any feasible solution y of the dual,
cx ≤ yb.
So cx ≤ y∗b, which means that cx ≤ cx∗ and so x∗ is optimal.
Similarly y∗b ≤ yb, and y∗ is an optimal solution to the dual.
324 / 423
Strong Duality
Theorem 10.3 (Strong Duality Theorem)
Consider the following primal-dual pair:
Primal
max z = cx
Ax ≤ b
x ≥ 0
Dual
min w = yb
yA ≥ c
y ≥ 0
If an optimal solution exists for either the primal or its dual, then
an optimal solution exists for both and the corresponding optimal
objective function values are equal, that is z∗ = w∗.
325 / 423
The algebra of the Simplex Method
To prove the Strong Duality Theorem, we need to look at the
algebra of the Simplex Algorithm in more detail. Consider the LP
max z = cx
Ax = b
x ≥ 0
where A = (aij)m×n has rank m and
c = (c1, . . . , cn), x = (x1, . . . , xn)T , b = (b1, . . . , bm)T .
326 / 423
The algebra of the Simplex Method
Recall that a basic solution is obtained by choosing m independent
columns of A to correspond to basic variables, setting the
non-basic variables to zero, and solving the equation Ax = b.
Let AB be the m×m submatrix of A consisting of the columns
corresponding to basic variables, and AN be the m× (n−m)
submatrix of A consisting of the columns corresponding to
non-basic variables.
327 / 423
The algebra of the Simplex Method
Relabelling the variables when necessary, we write
A = (AN , AB), x = (xN , xB)T ,
where the (n−m)× 1 vector xN consists of non-basic variables
and the m× 1 vector xB consists of basic variables.
Partition c = (cN , cB) in the same way as x.
The tableau is then
BV xN xB RHS
? AN AB b
z −cN −cB 0
328 / 423
The algebra of the Simplex Method
We now use the structure of the tableau to write the basic
variables in terms of the non-basic variables.
Ax = b gives (AN , AB)(xN , xB)T = b, so that
ANxN + ABxB = b.
Since AB is invertible (as its columns are linearly independent),
we have
xB = AB^{-1}b − AB^{-1}ANxN .
329 / 423
The algebra of the Simplex Method
For the value of the objective function, we also have
z = cx
  = (cN , cB)(xN , xB)T
  = cNxN + cBxB
  = cNxN + cB(AB^{-1}b − AB^{-1}ANxN )
  = cBAB^{-1}b − (cBAB^{-1}AN − cN )xN
and so
z + (cBAB^{-1}AN − cN )xN = cBAB^{-1}b
330 / 423
The algebra of the Simplex Method
Now assume that the tableau is in canonical form. Then the
reduced costs of the basic variables are equal to zero, and its
structure is
BV    xN                   xB    RHS
xB    AB^{-1}AN            I     AB^{-1}b
z     cBAB^{-1}AN − cN     0     cBAB^{-1}b
331 / 423
The algebra of the Simplex Method
By setting xN = 0, this gives the basic solution
x = (0, AB^{-1}b)T ,
which is feasible if and only if AB^{-1}b ≥ 0. If x is feasible, the
objective value at x is
z = cBAB^{-1}b.
The reduced cost of non-basic xj is
cBAB^{-1} · (jth column of A) − cj .
The 0-matrix under the column xB in the tableau can be viewed
as cBAB^{-1}AB − cB.
332 / 423
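These formulas can be computed directly. A numpy sketch (illustrative, not course code) that builds the canonical-form quantities from a choice of basis B (a list of basic-variable column indices):

```python
import numpy as np

def canonical_tableau(A, b, c, B):
    """Return (rows, rhs, zrow, zval) of the canonical tableau for basis B."""
    A = np.asarray(A, float); b = np.asarray(b, float); c = np.asarray(c, float)
    AB_inv = np.linalg.inv(A[:, B])
    rows = AB_inv @ A                  # A_B^{-1} A
    rhs = AB_inv @ b                   # A_B^{-1} b: values of the basic variables
    zrow = c[B] @ rows - c             # reduced costs c_B A_B^{-1} A - c
    zval = c[B] @ rhs                  # objective value c_B A_B^{-1} b
    return rows, rhs, zrow, zval

# With A = [[2,1,1,0,0],[1,1,0,1,0],[1,0,0,0,1]], b = [40,30,15],
# c = [4,3,0,0,0] and B = [2,3,0] (basis x3, x4, x1), this reproduces
# the tableau on page 169: rhs = [10,15,15], zrow = [0,-3,0,0,4], zval = 60.
```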
Summary: How is the Tableau Updated?
The starting tableau is
BV xN xB RHS
? AN AB b
z −cN −cB 0
or, in compact form
BV x RHS
? A b
z −c 0
333 / 423
Summary: How is the Tableau Updated?
The canonical tableau is
BV    xN                   xB    RHS
xB    AB^{-1}AN            I     AB^{-1}b
z     cBAB^{-1}AN − cN     0     cBAB^{-1}b
or, in compact form
BV    x                    RHS
xB    AB^{-1}A             AB^{-1}b
z     cBAB^{-1}A − c       cBAB^{-1}b
Observe that we get from the starting tableau to this tableau by
multiplying on the left by the matrix
( AB^{-1}     0 )
( cBAB^{-1}   1 )
334 / 423
Proof of the Strong Duality Theorem
Recall that we are dealing with
Primal
max z = cx
Ax ≤ b
x ≥ 0
Dual
min w = yb
yA ≥ c
y ≥ 0
A : m× n; b : m× 1; c : 1× n; x : n× 1; y : 1×m
335 / 423
Introducing slack variables xs = (xn+1, . . . , xn+m)T , the primal
problem can be converted to:
max z = cx + 0xs
(A, I)(x, xs)T = b
x, xs ≥ 0
where I is the m×m identity matrix.
Suppose the primal problem has an optimal solution. By the
Fundamental Theorem of Linear Programming, the above problem
therefore has an optimal basic feasible solution (x∗, x∗s).
Let AB be the matrix of the columns of (A, I) corresponding to
the basic variables in (x∗, x∗s).
336 / 423
Now we use the result from the “algebra of the Simplex Method”
with A and x replaced by (A, I) and (x, xs)T , respectively.
Initial tableau:
BV    x      xs    RHS
xs    A      I     b
z     −c     0     0
Tableau for the optimal solution (x∗, x∗s):
BV    x                    xs                    RHS
xB    AB^{-1}A             AB^{-1}I              AB^{-1}b
z     cBAB^{-1}A − c       cBAB^{-1}I − 0        cBAB^{-1}b
337 / 423
Since this is the tableau for the optimal (x∗, x∗s), we have
cBAB^{-1}A − c ≥ 0
cBAB^{-1}I − 0 = cBAB^{-1} ≥ 0
and the optimal value is z∗ = cx∗ = cBAB^{-1}b.
Let y∗ = cBAB^{-1}. The inequalities above can be written as
y∗A ≥ c, y∗ ≥ 0.
In other words, y∗ is a feasible solution to the dual problem.
Since cx∗ = cBAB^{-1}b = y∗b, by Corollary 10.3, y∗ is an optimal
solution to the dual problem and moreover w∗ = y∗b = cx∗ = z∗.
338 / 423
We have proved that if the primal has an optimal solution, then so
does the dual, and moreover their optimal objective values are
equal.
Note that the dual can be expressed as
max−w = −yb, −yA ≤ −c, y ≥ 0
and its dual is the primal. From what we proved above it follows
that if the dual has an optimal solution, then so does the primal,
and their optimal objective values are equal.
This completes the proof of the Strong Duality Theorem.
339 / 423
Solving the Primal and the Dual at One Time
The proof above implies the following: If
BV    x                    xs                    RHS
xB    AB^{-1}A             AB^{-1}I              AB^{-1}b
z     cBAB^{-1}A − c       cBAB^{-1}I − 0        cBAB^{-1}b
is the optimal tableau for the primal, then cBAB^{-1} is an optimal
solution to the dual and cBAB^{-1}A − c yields the values of the
surplus variables in the dual problem.
340 / 423
Theorem 10.4
In an optimal tableau for the primal (in standard form), the
reduced costs for the slack variables give rise to an optimal solution
to the dual problem, and the reduced costs for the original variables
give rise to the values of the surplus variables in the dual problem.
Thus, when solving the primal LP (in standard form) by using the
Simplex Algorithm, we also solve the dual LP.
341 / 423
Example 10.7
Initial Tableau for the Primal Problem
BV x1 x2 x3 x4 x5 RHS
x4 8 6 4 1 0 100
x5 5 4 4 0 1 60
z −80 −60 −50 0 0 0
where x4 and x5 are slack variables (introduced when converting
standard form to canonical form).
342 / 423
Example (continued)
After a few pivot operations ......
Final Tableau for the Primal Problem
BV x1 x2 x3 x4 x5 RHS
x4 0 −2/5 −12/5 1 −8/5 4
x1 1 4/5 4/5 0 1/5 12
z 0 4 14 0 16 960
From this optimal tableau for the primal we know that
• (x1, x2, x3) = (12, 0, 0) is an optimal solution to the primal
problem;
• (y1, y2) = (0, 16) is an optimal solution to the dual problem;
• and both problems have the optimal objective value
z∗ = w∗ = 960.
343 / 423
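The certificate of Corollary 10.3 makes this pair easy to verify numerically. A numpy sketch (an illustration of mine, not course code): both solutions are feasible and their objective values agree, so both are optimal.

```python
import numpy as np

A = np.array([[8, 6, 4], [5, 4, 4]])
b = np.array([100, 60])
c = np.array([80, 60, 50])
x = np.array([12, 0, 0])     # primal solution read off the final tableau
y = np.array([0, 16])        # reduced costs of the slack variables x4, x5
assert np.all(A @ x <= b) and np.all(x >= 0)    # primal feasible
assert np.all(y @ A >= c) and np.all(y >= 0)    # dual feasible
assert c @ x == y @ b == 960                    # equal objective values
```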
Example 10.8
Primal Dual
max z = x1 + x2 + x3 minw = y1 + y2 + y3
s.t. s.t.
8x1 + 4x2 + 2x3 ≤ 1 8y1 + 2y2 + y3 ≥ 1
2x1 + 8x2 + 4x3 ≤ 1 4y1 + 8y2 + 2y3 ≥ 1
x1 + 2x2 + 8x3 ≤ 1 2y1 + 4y2 + 8y3 ≥ 1
x1, x2, x3 ≥ 0 y1, y2, y3 ≥ 0
344 / 423
Initial tableau for the primal problem (where x4, x5, x6 are slack
variables):
x1 x2 x3 x4 x5 x6 RHS
x4 8 4 2 1 0 0 1
x5 2 8 4 0 1 0 1
x6 1 2 8 0 0 1 1
z −1 −1 −1 0 0 0 0
Next tableau:
x1 x2 x3 x4 x5 x6 RHS
x1 1 1/2 1/4 1/8 0 0 1/8
x5 0 7 7/2 −1/4 1 0 3/4
x6 0 3/2 31/4 −1/8 0 1 7/8
z 0 −1/2 −3/4 1/8 0 0 1/8
345 / 423
This goes on to
x1 x2 x3 x4 x5 x6 RHS
x1 1 14/31 0 4/31 0 −1/31 3/31
x5 0 196/31 0 −6/31 1 −14/31 11/31
x3 0 6/31 1 −1/62 0 4/31 7/62
z 0 −11/31 0 7/62 0 3/31 13/62
and
x1 x2 x3 x4 x5 x6 RHS
x1 1 0 0 1/7 −1/14 0 1/14
x2 0 1 0 −3/98 31/196 −1/14 11/196
x3 0 0 1 −1/98 −3/98 1/7 5/49
z 0 0 0 5/49 11/196 1/14 45/196
346 / 423
From the final tableau we have
• (x1, x2, x3) = (1/14, 11/196, 5/49) is an optimal solution to
the primal problem;
• (y1, y2, y3) = (5/49, 11/196, 1/14) is an optimal solution to
the dual problem; and
• the optimal objective values for both problems is 45/196.
347 / 423
§10.5 – Complementary Slackness Conditions
Again we consider
Primal
max z = cx
Ax ≤ b
x ≥ 0
Dual
min w = yb
yA ≥ c
y ≥ 0
where, as before,
A = (aij)m×n ; b = (b1, . . . , bm)T ; c = (c1, . . . , cn); x = (x1, . . . , xn)T ; y = (y1, . . . , ym)
348 / 423
Let ai denote the ith row of A, i = 1, . . . ,m.
Let aj denote the jth column of A, j = 1, . . . , n.
Then the two problems can be written as
Primal
max z = cx
aix ≤ bi, i = 1, . . . ,m
x ≥ 0
Dual
min w = yb
yaj ≥ cj , j = 1, . . . , n
y ≥ 0
Recall that
• the ith dual variable yi corresponds to the ith functional
constraint aix ≤ bi of the primal problem; and
• the jth primal variable xj corresponds to the jth functional
constraint yaj ≥ cj of the dual problem.
349 / 423
Theorem 10.5 (The Complementary Slackness Conditions)
Under the setting above, let x be a feasible solution to the primal
problem and y a feasible solution to the dual problem.
Then x is optimal to the primal and y is optimal to the dual if and
only if
yi(bi − aix) = 0, i = 1, . . . ,m
(yaj − cj)xj = 0, j = 1, . . . , n,
that is,
yi (bi − ∑_{j=1}^n aij xj) = 0, i = 1, . . . ,m
(∑_{i=1}^m yi aij − cj) xj = 0, j = 1, . . . , n
350 / 423
The Complementary Slackness Conditions state that
• if a primal constraint is non-binding, then the corresponding
dual variable is zero (that is, if a dual variable is non-zero,
then the corresponding primal constraint is binding), and
• if a dual constraint is non-binding, then the corresponding
primal variable is zero (that is, if a primal variable is non-zero,
then the corresponding dual constraint is binding).
In other words,
• either a primal variable is zero or the corresponding dual
constraint is satisfied with equality, and
• either a primal constraint is satisfied with equality or the
corresponding dual variable is zero.
The Complementary Slackness Conditions constitute one of the
most important results in linear programming.
351 / 423
Proof: Introduce the slack variables s1, s2, . . . sm for the primal
problem and the surplus variables t1, t2, . . . tn for the dual problem.
Primal LP
max z = c1x1 + ...+ cnxn + 0s1 + 0s2 + ...+ 0sm
a11x1 + a12x2 + · · ·+ a1nxn + s1 = b1
a21x1 + a22x2 + · · ·+ a2nxn + s2 = b2
...
am1x1 + am2x2 + · · ·+ amnxn + sm = bm
x1, x2, ..., xn, s1, s2, ..., sm ≥ 0.
Note that
si = bi − aix, i = 1, . . . ,m.
352 / 423
Dual LP
min w = b1y1 + ...+ bmym + 0t1 + ...+ 0tn
a11y1 + a21y2 + · · ·+ am1ym − t1 = c1
a12y1 + a22y2 + · · ·+ am2ym − t2 = c2
...
a1ny1 + a2ny2 + · · ·+ amnym − tn = cn
y1, y2, ..., ym, t1, t2, ..., tn ≥ 0.
Note that
tj = yaj − cj , j = 1, . . . , n.
353 / 423
The ith constraint of the primal LP is
∑_{j=1}^n aij xj + si = bi.
Multiplying this by yi, and summing for i = 1 to m, we have
∑_{i=1}^m yi (∑_{j=1}^n aij xj + si) = ∑_{i=1}^m bi yi
that is,
∑_{j=1}^n xj (∑_{i=1}^m yi aij) + ∑_{i=1}^m yi si = ∑_{i=1}^m bi yi
354 / 423
Now subtract ∑_{j=1}^n cj xj from both sides to get
∑_{j=1}^n xj (∑_{i=1}^m yi aij − cj) + ∑_{i=1}^m yi si = ∑_{i=1}^m bi yi − ∑_{j=1}^n cj xj
Since tj = yaj − cj = ∑_{i=1}^m yi aij − cj, this can be rewritten as
∑_{j=1}^n xj tj + ∑_{i=1}^m yi si = ∑_{i=1}^m bi yi − ∑_{j=1}^n cj xj
355 / 423
By the Strong Duality Theorem, if x and y are optimal solutions to
the primal and dual respectively, then
∑_{i=1}^m bi yi = ∑_{j=1}^n cj xj
and so the above equation becomes
∑_{j=1}^n xj tj + ∑_{i=1}^m yi si = 0
Since all terms are nonnegative, this implies that each term is
zero, that is,
yi(bi − aix) = yisi = 0, i = 1, . . . ,m
(yaj − cj)xj = xjtj = 0, j = 1, . . . , n
356 / 423
On the other hand, if xjtj = 0 for all j and yisi = 0 for all i, then
∑_{j=1}^n cj xj = ∑_{i=1}^m yi bi
and so x and y are optimal by Corollary 10.3.
357 / 423
Example
In Example 10.7 the final tableau for the primal problem is:
BV x1 x2 x3 x4 x5 RHS
x4 0 −2/5 −12/5 1 −8/5 4
x1 1 4/5 4/5 0 1/5 12
z 0 4 14 0 16 960
(Note that x4, x5 take the roles of s1, s2 respectively.)
So (x1, x2, x3) = (12, 0, 0) and (y1, y2) = (0, 16) are optimal
solutions to the primal and dual problems respectively.
We have (s1, s2) = (4, 0) and (t1, t2, t3) = (0, 4, 14). The values of
t1, t2, t3 are exactly the reduced costs for the original variables!
We see that yisi = 0 for i = 1, 2 and tjxj = 0 for j = 1, 2, 3.
That is, if yi ≠ 0 then si = 0, and if si ≠ 0 then yi = 0. If xj ≠ 0
then tj = 0, and if tj ≠ 0 then xj = 0.
358 / 423
Applications of the Complementary Slackness Conditions
The Complementary Slackness Theorem has a number of
applications. It can be used to
• calculate the solution of the dual (respectively primal) when
the solution of the primal (respectively dual) is known,
• verify whether a proposed solution of either the primal or dual
is optimal, and
• design algorithms for solving certain structured problems.
We will give an example for each of the first two applications.
For the third application, if you choose to do the MSc subject
“Network Optimisation” (which is offered in odd years), you will
see beautiful applications of the complementary slackness
conditions in algorithm design.
359 / 423
Example 10.9
The linear program
max_x z = 4x1 + x2
x1 − x2 ≤ 1
5x1 + x2 ≤ 55
−x1 + 2x2 ≤ 3
x1, x2 ≥ 0,
has an optimal solution x∗1 = 5, x∗2 = 4. What is the optimal
solution of the dual?
360 / 423
Example (continued)
The dual linear program is
min_y w = y1 + 55y2 + 3y3
y1 + 5y2 − y3 ≥ 4
−y1 + y2 + 2y3 ≥ 1
y1, y2, y3 ≥ 0.
361 / 423
Example (continued)
The complementary slackness conditions are
y1(1− x1 + x2) = 0
y2(55− 5x1 − x2) = 0
y3(3 + x1 − 2x2) = 0
x1(4− y1 − 5y2 + y3) = 0
x2(1 + y1 − y2 − 2y3) = 0
Notice that these comprise five non-linear equations for the five
variables x1, x2, y1, y2, y3.
362 / 423
Example (continued)
Substituting x∗1 = 5, x∗2 = 4 into the complementary slackness
conditions, we get
0y∗1 = 0
26y∗2 = 0
0y∗3 = 0
5(4− y∗1 − 5y∗2 + y∗3) = 0
4(1 + y∗1 − y∗2 − 2y∗3) = 0
The second equation gives y∗2 = 0 and then the fourth and fifth
equations give y∗1 = 9, y∗3 = 5. Check that this is feasible for the
dual.
The solution to the dual is thus (y∗1, y∗2, y∗3) = (9, 0, 5) at which
point w∗ = 24. Check that this is also equal to z∗.
363 / 423
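As an aside, the small linear system that complementary slackness leaves us with can of course be solved numerically. A sketch in numpy, hard-coding the two binding dual constraints derived above (the data are from this example):

import numpy as np

# With y2 = 0, the conditions x1 > 0 and x2 > 0 force the two dual
# constraints to be binding:  y1 - y3 = 4  and  -y1 + 2*y3 = 1
M = np.array([[ 1.0, -1.0],
              [-1.0,  2.0]])
rhs = np.array([4.0, 1.0])
y1, y3 = np.linalg.solve(M, rhs)
print(y1, y3)                # 9.0 5.0
print(y1 + 55*0.0 + 3*y3)    # dual objective w = 24, matching z* = 24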
Example 10.10
For the linear program
max_x z = 8x1 − 9x2 + 12x3 + 4x4 + 11x5
2x1 − 3x2 + 4x3 + x4 + 3x5 ≤ 1
x1 + 7x2 + 3x3 − 2x4 + x5 ≤ 1
5x1 + 4x2 − 6x3 + 2x4 + 3x5 ≤ 22
x1, . . . , x5 ≥ 0,
it has been proposed that the optimal solution is
x∗1 = 0, x∗2 = 2, x∗3 = 0, x∗4 = 7, x∗5 = 0. Is this correct?
364 / 423
Example (continued)
The dual linear program is
min_y w = y1 + y2 + 22y3
2y1 + y2 + 5y3 ≥ 8
−3y1 + 7y2 + 4y3 ≥ −9
4y1 + 3y2 − 6y3 ≥ 12
y1 − 2y2 + 2y3 ≥ 4
3y1 + y2 + 3y3 ≥ 11
y1, y2, y3 ≥ 0.
365 / 423
Example (continued)
The primal variables x∗2 and x∗4 are positive. So the complementary
slackness conditions give
−9 + 3y∗1 − 7y∗2 − 4y∗3 = 0
4− y∗1 + 2y∗2 − 2y∗3 = 0.
Also, since the second primal constraint is non-binding, we have
y∗2 = 0.
Solving, we see that y∗1 = 34/10 and y∗3 = 3/10.
366 / 423
Example (continued)
We need to check whether (y∗1, y∗2, y∗3) = (34/10, 0, 3/10) is dual
feasible.
Unfortunately
4y∗1 + 3y∗2 − 6y∗3 = 118/10 < 12,
and so the solution to the complementary slackness relations is
not feasible.
Therefore the postulated solution is not optimal for the primal.
367 / 423
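A quick numerical version of this check (an illustrative addition, assuming numpy is available): build y from the complementary slackness equations and test dual feasibility componentwise; the third dual constraint fails, exactly as found above.

import numpy as np

y = np.array([3.4, 0.0, 0.3])            # (34/10, 0, 3/10) from the CS equations
A = np.array([[2.0, -3.0,  4.0,  1.0, 3.0],
              [1.0,  7.0,  3.0, -2.0, 1.0],
              [5.0,  4.0, -6.0,  2.0, 3.0]])
c = np.array([8.0, -9.0, 12.0, 4.0, 11.0])

print(y @ A >= c)   # [ True  True False  True  True]: 11.8 < 12 in constraint 3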
§10.6 – Weak Duality, Strong Duality and Complementary
Slackness Conditions for General LP
So far we have developed the Weak Duality, Strong Duality and
Complementary Slackness Conditions for primal-dual pairs in
standard form.
It is important to understand that all these results are valid for LP
problems in non-standard form (i.e. possibly with = constraints
and variables which are unrestricted in sign).
The statements of these results under the general setting are
exactly the same as what we saw for standard forms.
In the following we just state the Complementary Slackness
Conditions under the general setting.
368 / 423
Theorem 10.6 (The Complementary Slackness Conditions)
Consider
Primal

max_x z = cx
aix ≤ bi, i ∈ M
aix = bi, i ∈ M̄
xj ≥ 0, j ∈ N
xj ∈ R , j ∈ N̄

Dual

min_y w = yb
yaj ≥ cj , j ∈ N
yaj = cj , j ∈ N̄
yi ≥ 0, i ∈ M
yi ∈ R , i ∈ M̄

where M̄ and N̄ denote the complements of the index sets M and N.
Let x be a feasible solution to the primal problem and y a feasible
solution to the dual problem. Then x is optimal to the primal and
y is optimal to the dual if and only if
yi(bi − aix) = 0, for all i
(yaj − cj)xj = 0, for all j
369 / 423
§11 – Sensitivity Analysis
So far we have assumed that the various parameters that describe
a linear programming problem are all given.
However, in real life, this is rarely the case. For example, we might
have only an estimate of the profit that we can make on each of
our products.
In such situations we want to know how the solution to a linear
programming problem will vary if the parameters change.
Sensitivity Analysis is the subject which deals with these issues.
This section is intended to be a very brief introduction to
Sensitivity Analysis.
370 / 423
Ingredients of LP Models:
• A linear objective function
• A system of linear constraints
◦ RHS values
◦ Coefficient matrix (LHS)
◦ Signs (=, ≤, ≥)
Problem 11.1
How does the optimal solution change as some of these elements
change?
It is instructive to classify the changes into two categories:
• structural changes, and
• parametric changes.
371 / 423
Structural Changes
These are
• addition of a new decision variable,
• addition of a new constraint,
• loss of a decision variable, and/or
• removal of a constraint.
How is the solution affected if the linear program and its dimension
grow or shrink? Specific questions that we may be interested in are
• Will adding a new decision variable change the optimal
solution?
• How much does the optimal solution change (how much does
the basis change)?
• Does the removal of a particular constraint change the
solution much?
We will not discuss structural changes in this course.
372 / 423
Parametric Changes
These are
• changes in one or more of the coefficients ci of the objective
function,
• changes in the RHS values bj , or
• changes in the coefficients of the coefficient matrix A = (aij).
How is the solution affected by such perturbations? For example:
• How much can I change the RHS bj before the basis changes?
• How much can I change the cost coefficients ci before the
basis changes?
• What is the percentage change in the objective function value
when either of the first two changes occurs?
373 / 423
We will discuss what to do if there are post-hoc changes in bj or ci.
There are two important aspects of the new solution that we need
to check:
• feasibility, and
• optimality.
So first we need to be able to calculate the new solution.
Simplex Algebra provides the required formulas.
374 / 423
Review: Algebra of the Simplex Method
max z = cx
Ax = b
x ≥ 0
where A = (aij)m×n is assumed to have rank m and
c = (c1, . . . , cn), x = (x1, . . . , xn)^T , b = (b1, . . . , bm)^T .
Suppose we have an optimal basic feasible solution whose basis
matrix is A_B (which must be a non-singular m×m submatrix of
A). Let A_N be the m× (n−m) submatrix of A consisting of the
rest of the columns.
375 / 423
Review: How is the tableau updated?
Tableau (which may be non-canonical):
BV xN xB RHS
– A_N A_B b
z −c_N −c_B 0
Optimal tableau:
BV xN xB RHS
xB A_B^{-1}A_N I A_B^{-1}b
z c_B A_B^{-1}A_N − c_N 0 c_B A_B^{-1}b
So the optimal tableau has RHS column
b̂ = A_B^{-1} b
and reduced costs (of the non-basic variables)
−ĉ_N = c_B A_B^{-1} A_N − c_N .
These tell us how the new RHS column b̂ and the new cost
coefficients ĉ_N change as b and/or c change.
376 / 423
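These two formulas are purely mechanical, so they are easy to experiment with. A hedged numpy sketch with hypothetical data (A_B, A_N, c_B, c_N and b below are made up, not taken from any example in these notes):

import numpy as np

A_B = np.array([[1.0, 0.0],      # basis matrix (non-singular)
                [0.0, 2.0]])
A_N = np.array([[1.0, 2.0],      # non-basic columns
                [3.0, 1.0]])
c_B = np.array([4.0, 3.0])
c_N = np.array([1.0, 2.0])
b   = np.array([6.0, 8.0])

B_inv = np.linalg.inv(A_B)
b_hat = B_inv @ b                       # RHS column of the optimal tableau
neg_c_hat_N = c_B @ B_inv @ A_N - c_N   # reduced costs of the non-basic variables
print(b_hat, neg_c_hat_N)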
§11.1 – Principles
Suppose that we want to know what happens if bj has changed
from our original estimate.
We calculate the new b̂(new) according to the formula
b̂(new) = A_B^{-1} b(new)
on the previous slide.
The optimal solution for the original problem becomes infeasible
for the modified problem if and only if at least one component of
bˆ(new) is negative. If this happens, we may need to perform a
number of actions to find the new solution (if one even exists).
This would involve introducing new artificial variables before
executing the Two-Phase Simplex method.
If the optimal solution for the original problem is feasible for the
new problem, then it is still optimal. How do we know this?
377 / 423
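A minimal sketch of this recomputation, assuming numpy. The basis inverse is borrowed from Example 11.1 later in this section, and b(new) is a hypothetical revised RHS whose second component lies outside the admissible range computed there, so the feasibility check fails:

import numpy as np

B_inv = np.array([[2.0,  4.0, -16.0],
                  [0.0,  4.0,  -8.0],
                  [0.0, -1.0,   3.0]])   # A_B^{-1} (from Example 11.1)
b_new = np.array([48.0, 25.0, 8.0])      # hypothetical change: b2 from 20 to 25

b_hat_new = B_inv @ b_new
print(b_hat_new)                         # [68. 36. -1.]
print((b_hat_new >= 0).all())            # False: the old basis is no longer feasible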
What about changes to ci?
We calculate the new −ĉ_N^(new) (reduced costs) using
−ĉ_N^(new) = c_B^(new) A_B^{-1} A_N − c_N^(new) .
Can this change make our current optimal solution infeasible?
No: feasibility does not depend on the objective coefficients.
The solution remains optimal if and only if all reduced costs are
nonnegative (for a maximisation problem).
If it is no longer optimal, then at least one of the reduced costs will
be negative. We therefore continue with the Simplex Algorithm as
usual.
378 / 423
Summary of the situation when there are changes to b or c
If all the components of the new RHS are nonnegative, then
• the point with the same basic variables as the old optimal
solution is still feasible for the modified problem;
• the new values of the basic variables are simply equal to the
new values of the RHS;
• if there are changes to c also, the resulting point may not
remain optimal. This depends on what happens to the new
reduced costs.
If at least one of the components of the new RHS is negative, then
• the point with the same basic variables as the old optimal
solution is not feasible for the modified problem. Corrective
action must be taken in order to solve the new system or
prove that it is infeasible. This will involve setting up a new
instance of the Two-Phase Simplex method.
379 / 423
If all the components of −ĉ_N satisfy the optimality condition, and
all the components of the new RHS are nonnegative, then
• the basic variables in the old optimal solution are still basic in
the new optimal solution. The new tableau is still optimal and
the solution can be read off.
If at least one of the components of −ĉ_N violates the optimality
condition, and all the components of the new RHS are nonnegative,
then
• the old basic variables need not all remain basic in the new
optimal solution. We need to perform one or more pivot
operations to obtain the new optimal tableau.
380 / 423
Case 1: A negative value occurs in the RHS column after
changing b
Consider the following Simplex tableau satisfying the Optimality
Criterion.
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 5
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
The current optimal solution is x∗ = (10, 20, 0, 0, 5) with z∗ = 100.
381 / 423
Case 1: A negative value occurs in the RHS column after
changing b
Suppose that a change to the original value of b produces the
following new RHS
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 -5
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
382 / 423
Case 1: A negative value occurs in the RHS column after
changing b
We can multiply the second row by −1 in order to attempt to
restore canonical form
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 1 −1 −1 5
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
383 / 423
Case 1: A negative value occurs in the RHS column after
changing b
Note that the system is still NOT in canonical form. It is not
obvious how to proceed using pivot operations alone. In fact, the
new system may not even be feasible.
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 1 −1 −1 5
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
384 / 423
Case 1: A negative value occurs in the RHS column after
changing b
We introduce a new artificial variable y1. We then append a
column for y1 to the Simplex tableau and initialize Phase 1 of the
Two-Phase Simplex method (i.e., we maximise w = −y1). Observe
that y1 replaces x5 as a basic variable. Note that the system is not
yet in canonical form.
BV y1 x1 x2 x3 x4 x5 RHS
x2 0 0 1 −1 2 0 20
y1 1 0 0 1 −1 −1 5
x1 0 1 0 1 −1 0 10
w 1 0 0 0 0 0 0
385 / 423
Case 1: A negative value occurs in the RHS column after
changing b
We convert to canonical form by subtracting the y1 row from the
w row.
BV y1 x1 x2 x3 x4 x5 RHS
x2 0 0 1 −1 2 0 20
y1 1 0 0 1 −1 −1 5
x1 0 1 0 1 −1 0 10
w 0 0 0 −1 1 1 -5
386 / 423
Case 1: A negative value occurs in the RHS column after
changing b
One pivot operation following the Greedy rule and Ratio test gives
the final Phase 1 tableau:
BV y1 x1 x2 x3 x4 x5 RHS
x2 1 0 1 0 1 −1 25
x3 1 0 0 1 −1 −1 5
x1 −1 1 0 0 0 1 5
w 1 0 0 0 0 0 0
387 / 423
Case 1: A negative value occurs in the RHS column after
changing b
We initialise Phase 2 with basic variables x1, x2, x3 and with the
z-coefficients from the first Simplex tableau.
BV x1 x2 x3 x4 x5 RHS
x2 0 1 0 1 −1 25
x3 0 0 1 −1 −1 5
x1 1 0 0 0 1 5
z 0 0 1 2 0 100
388 / 423
Case 1: A negative value occurs in the RHS column after
changing b
Restoring canonical form we get:
BV x1 x2 x3 x4 x5 RHS
x2 0 1 0 1 −1 25
x3 0 0 1 −1 −1 5
x1 1 0 0 0 1 5
z 0 0 0 3 1 95
This tableau satisfies the Optimality criterion. The new optimal
solution is x∗ = (5, 25, 5, 0, 0) with z∗ = 95. Observe that the
optimal value of z decreased by 5 as a result of the change in b,
and that x3 replaced x5 as an optimal basic variable.
389 / 423
Case 2: A negative value does NOT occur in the RHS
column after changing b
Suppose that a change to the original value of b produces the
following new RHS for the same example as before
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 1
x1 1 0 1 −1 0 10
z 0 0 1 2 0 100
In this case the modified problem is still feasible and, because only
b has been modified, the z-row remains unchanged and the
Optimality criterion is still satisfied. The new optimal solution is
x∗ = (10, 20, 0, 0, 1) with z∗ = 100.
390 / 423
Case 3: A change in c produces a z-row that no longer
satisfies the Optimality criterion
Suppose that a change to the original value of c produces the
following new z-row for the same example as before
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 5
x1 1 0 1 −1 0 10
z 0 0 −1 2 0 100
391 / 423
Case 3: A change in c produces a z-row that no longer
satisfies the Optimality criterion
One iteration of the Simplex Algorithm gives the optimal tableau:
BV x1 x2 x3 x4 x5 RHS
x2 1 1 0 1 0 30
x5 1 0 0 0 1 15
x3 1 0 1 −1 0 10
z 1 0 0 1 0 110
Observe that the optimal basic variables have changed and the
new solution is x∗ = (0, 30, 10, 0, 15) with z∗ = 110.
392 / 423
Case 4: A change in c produces a z-row that still satisfies
the Optimality criterion
Suppose that a change to the original value of c produces the
following new z-row for the same example as before (note that a
change in a single component of c can simultaneously change a
coefficient in the z-row AND the value of the objective function in
the bottom right-hand corner. How?)
BV x1 x2 x3 x4 x5 RHS
x2 0 1 −1 2 0 20
x5 0 0 −1 1 1 5
x1 1 0 1 −1 0 10
z 0 0 2 2 0 80
Observe that the Optimality criterion is still satisfied. The optimal
basic variables and their values remain unchanged. The optimal
objective value has changed to z∗ = 80.
393 / 423
§11.2 – Change in the RHS
Sometimes we want to know how much we can vary the
parameters before the solution is affected. That is, we are
interested in the range of perturbation.
Suppose that we change one of the elements of b, say bk, by δ so
that the new b is equal to the old one except that the new value of
bk is equal to bk + δ. That is,
b(new) = b+ δek
where ek is the kth column of the identity matrix.
394 / 423
The new final RHS value is given by
b̂(new) = A_B^{-1} b(new) = A_B^{-1}(b+ δek)
This yields
b̂(new) = A_B^{-1} b+ δ A_B^{-1} ek = A_B^{-1} b+ δ (A_B^{-1})·k
where (A_B^{-1})·k denotes the kth column of A_B^{-1}.
Since b̂(old) = A_B^{-1} b, we have
b̂(new) = b̂(old) + δ (A_B^{-1})·k .
395 / 423
The old basis remains optimal after the change if and only if
b̂(new) ≥ 0, which occurs if and only if
δ (A_B^{-1})·k ≥ −b̂(old)
or equivalently
δ (A_B^{-1})i,k ≥ −b̂(old)_i , i = 1, 2, . . . ,m
where (A_B^{-1})i,k denotes the entry of A_B^{-1} in the ith row and kth
column, and b̂(old)_i is the ith component of b̂(old).
396 / 423
So the old basis remains optimal if and only if
δ ≥ −b̂(old)_i / (A_B^{-1})i,k
for all i such that (A_B^{-1})i,k > 0, and
δ ≤ −b̂(old)_i / (A_B^{-1})i,k
for all i such that (A_B^{-1})i,k < 0. We can write this in a more
formal way as follows:
397 / 423
Let
Pk = { i ∈ {1, ...,m} : (A_B^{-1})i,k > 0 }
and let
Nk = { i ∈ {1, ...,m} : (A_B^{-1})i,k < 0 }.
Then the old basis remains optimal if and only if
max_{i∈Pk} −b̂(old)_i / (A_B^{-1})i,k ≤ δ ≤ min_{i∈Nk} −b̂(old)_i / (A_B^{-1})i,k
398 / 423
Example 11.1
If
b = (48, 20, 8)^T and A_B^{-1} =
[ 2 4 −16 ]
[ 0 4 −8 ]
[ 0 −1 3 ] ,
how much can we change the second component of b without
changing the optimal solution?
399 / 423
Example (cont’d)
b̂(old) = A_B^{-1} b = (48, 16, 4)^T
(A_B^{-1})·2 = (4, 4, −1)^T
Since (A_B^{-1})1,2 = 4 > 0, (A_B^{-1})2,2 = 4 > 0, (A_B^{-1})3,2 = −1 < 0,
max_{i∈P2} −b̂(old)_i / (A_B^{-1})i,2 = max{ −48/4, −16/4 } = −4,
min_{i∈N2} −b̂(old)_i / (A_B^{-1})i,2 = −b̂(old)_3 / (A_B^{-1})3,2 = −4/(−1) = 4.
400 / 423
Example (cont’d)
Thus
−4 ≤ δ ≤ 4.
Therefore the old basis (whatever it is) will remain optimal if and
only if the value of b2 is in the interval [20− 4, 20 + 4] = [16, 24].
We also have an alternative method to find the range of δ...
401 / 423
To determine the critical values of δ, we simply compute
b̂(new) = A_B^{-1} b(new) (= b̂(old) + δ (A_B^{-1})·k),
set all its components to be nonnegative, and then solve the
resulting inequalities.
Equivalently, we can solve
δ (A_B^{-1})·k ≥ −b̂(old)
that is
δ (A_B^{-1})i,k ≥ −b̂(old)_i , i = 1, 2, . . . ,m.
These alternative approaches should produce the same range of δ.
402 / 423
Example (cont’d)
b(old) = b = (48, 20, 8)^T , b(new) = (48, 20 + δ, 8)^T
b̂(new) = A_B^{-1} b(new) = (48 + 4δ, 16 + 4δ, 4− δ)^T
Equivalently,
b̂(new) = b̂(old) + δ (A_B^{-1})·2 = (48, 16, 4)^T + δ (4, 4, −1)^T = (48 + 4δ, 16 + 4δ, 4− δ)^T
403 / 423
Example (cont’d)
Thus the non-negativity criterion for the basic variables to remain
the same is
48 + 4δ ≥ 0 hence δ ≥ −12
16 + 4δ ≥ 0 hence δ ≥ −4
4− δ ≥ 0 hence δ ≤ 4
which is equivalent to
−4 ≤ δ ≤ 4,
and so
16 ≤ b2 ≤ 24.
Thus we obtain the same result that the old basis remains optimal
if and only if b2 is in the interval [16, 24].
404 / 423
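The ratio computation above is mechanical, so here is a short numpy sketch (an illustrative addition, not part of the original notes) that reproduces the range for this example:

import numpy as np

B_inv = np.array([[2.0,  4.0, -16.0],
                  [0.0,  4.0,  -8.0],
                  [0.0, -1.0,   3.0]])
b = np.array([48.0, 20.0, 8.0])
k = 1                                   # perturbing b2 (0-indexed column 1)

b_hat = B_inv @ b                       # [48. 16. 4.]
col = B_inv[:, k]                       # [ 4.  4. -1.]

lower = max(-b_hat[i] / col[i] for i in range(len(b)) if col[i] > 0)
upper = min(-b_hat[i] / col[i] for i in range(len(b)) if col[i] < 0)
print(lower, upper)                     # -4.0 4.0, so 16 <= b2 <= 24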
§11.3 – Changes in the Cost Vector
Suppose that the value of ck changes for some k.
How will this affect the optimal solution to the LP problem?
We can distinguish between two cases, when
• xk is not in the old optimal basis, and when
• xk is in the old optimal basis.
405 / 423
Our reasoning makes extensive use of our expression for the cost
coefficients of the final tableau in terms of the original tableau
−ĉ_N = c_B A_B^{-1} A_N − c_N .
So the kth component of −ĉ_N is given by
(−ĉ_N)k = (c_B A_B^{-1} A_N)k − ck .
406 / 423
Change in the cost coefficient of a variable not in the
optimal basis
If the original cost coefficient of a variable that is not in the old
optimal basis changes from ck to ck + δ, then
(−ĉ_N^(new))k = (c_B A_B^{-1} A_N)k − (ck + δ) = (−ĉ_N^(old))k − δ.
Thus, for a maximisation problem, the basis (and indeed the
optimal solution) remains the same provided that
(−ĉ_N^(old))k − δ ≥ 0,
which is the same as saying that
δ ≤ (−ĉ_N^(old))k
407 / 423
Example 11.2
Suppose that the reduced costs in the final simplex tableau are
given by
(−ĉ_N^(old), −ĉ_B^(old)) = (2, 3, 4, 0, 0, 0)
with x5, x6 and x4 (in order) comprising the final basic variables.
How much can we change the value of c1 without changing the
basis?
408 / 423
Example (cont’d)
First we observe that x1 is not in the basis and that our problem is
a maximisation problem (why?). Note also that k = 1.
Here
(−ĉ_N^(old))1 = 2
and so
δ ≤ (−ĉ_N^(old))1
is the same as
δ ≤ 2.
This means that as long as the new c1 is less than or equal to the
old c1 plus 2 (i.e. c1^(new) ∈ (−∞, c1^(old) + 2]), the optimal solution
will not be affected.
Note that we do not need to know the old value of c1 to reach this
conclusion.
409 / 423
Change in the cost coefficient of a variable in the optimal
basis
If the original cost coefficient of a variable that is in the old
optimal basis changes from ck to ck + δ, then
−ĉ_N^(new) = (c_B + δep) A_B^{-1} A_N − c_N
= (c_B A_B^{-1} A_N − c_N ) + δ ep A_B^{-1} A_N
= −ĉ_N^(old) + δ ep A_B^{-1} A_N
= −ĉ_N^(old) + δ (A_B^{-1} A_N)p·
where xk corresponds to the pth row in the final tableau, ep is the
pth row of the identity matrix, and (A_B^{-1} A_N)p· is the pth row of
A_B^{-1} A_N . (This is because c_N is unchanged and ck is a component
of c_B.)
Therefore, if we have a maximisation problem, then the optimal
solution remains optimal if and only if
(−ĉ_N^(old))j + δ (A_B^{-1} A_N)p,j ≥ 0, for all non-basic j.
410 / 423
Equivalently,
δ ≥ (ĉ_N^(old))j / (A_B^{-1} A_N)p,j , for all non-basic j with (A_B^{-1} A_N)p,j > 0
and
δ ≤ (ĉ_N^(old))j / (A_B^{-1} A_N)p,j , for all non-basic j with (A_B^{-1} A_N)p,j < 0.
Thus the optimal solution remains optimal if and only if
max_j (ĉ_N^(old))j / (A_B^{-1} A_N)p,j ≤ δ ≤ min_j (ĉ_N^(old))j / (A_B^{-1} A_N)p,j ,
where max_j is taken over all non-basic j such that (A_B^{-1} A_N)p,j > 0 and
min_j is over all non-basic j such that (A_B^{-1} A_N)p,j < 0.
411 / 423
A direct analysis
The formula above determines the range that δ can fall in without
affecting the optimal solution.
Alternatively, as before, we can use our expression for −ĉ_N^(new) to
calculate the new cost coefficients in terms of δ and −ĉ_N^(old), and
then proceed to determine the range of δ by solving a system of
inequalities.
Another alternative is to use a direct analysis, which we illustrate
in the next example.
412 / 423
Example 11.2 (cont’d)
As before suppose that the reduced costs in the final simplex
tableau are given by
(−ĉ_N^(old), −ĉ_B^(old)) = (2, 3, 4, 0, 0, 0)
with x5, x6 and x4 (in the order they appear in the tableau)
comprising the final basic variables.
Suppose that the changes occur in c4.
Since x4 corresponds to row 3 in the final tableau, we need to
know the 3rd row (A_B^{-1} A_N)3· of A_B^{-1} A_N .
Suppose that this row is (3,−4, 0, 1, 0, 0). Then the final tableau is
BV x1 x2 x3 x4 x5 x6 RHS
x5 – – – 0 1 0 –
x6 – – – 0 0 1 –
x4 3 −4 0 1 0 0 –
z 2 3 4 0 0 0 –
413 / 423
Example 11.2 (cont’d)
If we add δ to the old c4, we would have instead
BV x1 x2 x3 x4 x5 x6 RHS
x5 – – – 0 1 0 –
x6 – – – 0 0 1 –
x4 3 −4 0 1 0 0 –
z 2 3 4 −δ 0 0 –
Restoring the canonical form of the x4 column, we obtain
BV x1 x2 x3 x4 x5 x6 RHS
x5 – – – 0 1 0 –
x6 – – – 0 0 1 –
x4 3 −4 0 1 0 0 –
z 2 + 3δ 3− 4δ 4 0 0 0 –
414 / 423
Example 11.2 (cont’d)
Thus, for a maximisation problem, to ensure that the current basis
remains optimal, we need
2 + 3δ ≥ 0
and
3− 4δ ≥ 0
so that
−2/3 ≤ δ ≤ 3/4.
Thus the old optimal solution remains optimal if we keep the
change in c4 in the interval [−2/3, 3/4], i.e.
c4^(new) ∈ [c4 − 2/3, c4 + 3/4] (where c4 means c4^(old)).
From the tableau above we can see that if δ < −2/3 then x1 will
enter the basis, and if δ > 3/4 then x2 will enter the basis.
415 / 423
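As with the RHS ranging, this calculation is easy to script. The numpy sketch below (an illustrative addition) reproduces the range [−2/3, 3/4], using the row of A_B^{-1} A_N and the reduced costs assumed in Example 11.2:

import numpy as np

row_p = np.array([3.0, -4.0, 0.0, 1.0, 0.0, 0.0])   # row of x4 in the final tableau
red   = np.array([2.0,  3.0, 4.0, 0.0, 0.0, 0.0])   # z-row reduced costs (basics are 0)
nonbasic = [0, 1, 2]                                 # x1, x2, x3 are non-basic

lower = max(-red[j] / row_p[j] for j in nonbasic if row_p[j] > 0)
upper = min(-red[j] / row_p[j] for j in nonbasic if row_p[j] < 0)
print(lower, upper)                                  # -0.666..., 0.75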
Reminder
Ensure that you read questions in this area carefully.
If you are asked about the range of admissible changes in say b1,
then it is not sufficient to report the admissible values of δ. You
have to translate the range of admissible changes in δ into the
range of admissible changes in b1.
The same requirements apply to changes in cost coefficients.
For example, if b1 = 12 and you have obtained −2 ≤ δ ≤ 3, then
the admissible range of b1 is [12− 2, 12 + 3] = [10, 15].
416 / 423
§11.4 – A Hint of Parametric Programming
So far we have considered only the impact of discrete changes in
the cost coefficients or the RHS values. Often one needs to
understand the impact of continuous systematic changes in these
values to the optimal solution. The study of such problems is
usually called parametric (linear) programming.
We will not discuss parametric programming in detail. Instead we
will use an example to illustrate the main idea for parametric
programs where the cost coefficients contain parameters.
417 / 423
Example 11.3
It is known that
max z = x1 + 2x2
x1 + 3x2 ≤ 8
x1 + x2 ≤ 4
x1, x2 ≥ 0
has the following optimal tableau (which can be obtained by using
the Simplex Method):
BV x1 x2 x3 x4 RHS
x2 0 1 1/2 −1/2 2
x1 1 0 −1/2 3/2 2
z 0 0 1/2 1/2 6
where x3 and x4 are slack variables.
418 / 423
Example 11.3 (cont’d)
Solve the following LP problem, where β is a parameter,
0 ≤ β <∞.
max z(β) = (1 + β)x1 + (2− β)x2
x1 + 3x2 ≤ 8
x1 + x2 ≤ 4
x1, x2 ≥ 0
Observe that
• changes occur in both cost coefficients,
• there is a parameter involved, and
• functional constraints are unchanged.
419 / 423
Example 11.3 (cont’d)
When β = 0 the parametric problem is the same as the
non-parametric problem. Since they have the same functional
constraints, the optimal tableau of the latter gives a basic feasible
solution to the former. However, we need to update the z-row.
Since for the optimal basis we have
z = (1 + β)x1 + (2− β)x2
we have to add −β to the reduced cost of x1 and β to the
reduced cost of x2. This gives the following tableau.
BV x1 x2 x3 x4 RHS
x2 0 1 1/2 −1/2 2
x1 1 0 −1/2 3/2 2
z −β β 1/2 1/2 6
420 / 423
Example 11.3 (cont’d)
Restoring canonical form by using the elementary row operations
R′3 = R3 − βR1 + βR2, we obtain:
BV x1 x2 x3 x4 RHS
x2 0 1 1/2 −1/2 2
x1 1 0 −1/2 3/2 2
z 0 0 1/2− β 1/2 + 2β 6
This tableau is optimal if and only if
1/2− β ≥ 0 and 1/2 + 2β ≥ 0.
Thus the tableau above is optimal if and only if
0 ≤ β ≤ 1/2,
and in this case
x∗(β) = (2, 2), z∗(β) = 6.
421 / 423
Example 11.3 (cont’d)
Since the upper bound 1/2 of β corresponds to x3, x3 should be
the entering variable when β > 1/2. The ratio test shows that we
should pivot on the (i = 1, j = 3)-entry and take x2 out of the
basis. Pivoting on this entry we obtain:
BV x1 x2 x3 x4 RHS
x3 0 2 1 −1 4
x1 1 1 0 1 4
z 0 −1 + 2β 0 1 + β 4 + 4β
This tableau is optimal if and only if
−1 + 2β ≥ 0 and 1 + β ≥ 0,
that is,
1/2 ≤ β.
In this case we have
x∗(β) = (4, 0), z∗(β) = 4 + 4β.
422 / 423
Example 11.3 (cont’d)
Summary:
When 0 ≤ β ≤ 1/2, the optimal solution is
x∗(β) = (2, 2)
and the optimal value is
z∗(β) = 6.
When 1/2 ≤ β <∞, the optimal solution is
x∗(β) = (4, 0)
and the optimal value is
z∗(β) = 4 + 4β.
423 / 423
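To see the two regimes concretely, one can sweep β with an off-the-shelf LP solver. A hedged sketch using scipy.optimize.linprog (assuming scipy is installed; linprog minimises, so the objective is negated):

from scipy.optimize import linprog

A_ub = [[1, 3], [1, 1]]
b_ub = [8, 4]

for beta in (0.0, 0.25, 0.5, 0.75, 2.0):
    # maximise (1+beta) x1 + (2-beta) x2  <=>  minimise its negation
    res = linprog(c=[-(1 + beta), -(2 - beta)], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None), (0, None)])
    print(beta, res.x, -res.fun)

For β < 1/2 the solver returns x = (2, 2) with value 6, and for β > 1/2 it returns x = (4, 0) with value 4 + 4β; at β = 1/2 both vertices are optimal, so either may be reported.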
