Applied Mathematical Programming Lecture Notes

Course: Applied Programming
Institution: University of Bedfordshire



Applied Mathematical Programming Using Algebraic Systems

Class Notes 2017/2018

Table of Contents

Linear Programming (LP) theory
  Graphical illustration of LP
    Normal solution illustration
    Unwanted solutions
      Unbounded solution
      Infeasible solution
      Multiple optima
      Degenerate solution
  Algebraic Representation of LP
    Constraints
    Introducing slack variables
  Matrix Algebra
    Matrix Formulation of the Linear Programming Problem
    Solving LP's by Matrix Algebra
  Primal-Dual Solution Inter-Relationships
    Objective Function Interrelationships
    Constructing Dual Solutions
    Complementary Slackness
    Zero Profits
  Degeneracy and Shadow Prices
  Solutions and Their Interpretation
  Fixing improper models
    Infeasible models
    Unbounded models
LP-Formulations
  Resource allocation
  Transportation
  Mixing
  Multi-input, Multi-output
  Spatial equilibrium (class example from GAMS block session)
  Sequencing
  Storage
  Multi-objective programming
    Lexicographic preferences
    Weighted preferences
  Separable Programming (Non-linear, well-behaved functions)
  Risk programming
  Dynamic linear programming
    Disequilibrium - Known Life
    Disequilibrium - Unknown Life
      Comments
    Equilibrium - Known Life
      Comments
    Equilibrium - Unknown Life
    Overall Comments on Dynamic Linear Programming
  Recursive Programming
    Comments on Recursive Programming
Integer programming
  Fixed Costs
  Fixed Capacity
  Minimum Habitat Size
  Knapsack
    No item repetition
    Item repetition
  Warehouse
  Logical Conditions
    Mutual exclusive products
    Either-Or-Active constraints
    Multiple Active Constraints
  Distinct Variable Values
  Non-linear representation
  Approximation of not well behaved non-linear functions

Linear Programming (LP) theory

A linear program is the mathematical formulation of an optimization problem. It consists of n non-negative decision variables, one free objective variable, one objective function, and m constraints, which restrict and/or interrelate the decision variables.

Graphical illustration of LP

Normal solution illustration

[Figure: feasibility region of the example LP]

Max  2*X1 + 3*X2 = z
s.t. X1 + 2*X2 ≤ 10
     X1, X2 ≥ 0

The constraint boundary can be rewritten as X2 = 5 − 0.5*X1; the feasibility region is the area below this line in the non-negative quadrant.

 The feasibility region of an LP is a convex set. A convex set is a set in which every convex combination of points belonging to the set also belongs to the set.
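As a quick numerical illustration (my own check, not from the notes): for the feasibility region {(x1, x2) ≥ 0 : x1 + 2*x2 ≤ 10} of the example LP, any convex combination of two feasible points is again feasible. Exact fractions avoid floating-point noise at the boundary.

```python
from fractions import Fraction

def feasible(x1, x2):
    """Membership test for the example region {x >= 0 : x1 + 2*x2 <= 10}."""
    return x1 >= 0 and x2 >= 0 and x1 + 2 * x2 <= 10

p, q = (10, 0), (0, 5)                    # two feasible (extreme) points
for k in range(11):
    lam = Fraction(k, 10)                 # lam in [0, 1], exact arithmetic
    x1 = lam * p[0] + (1 - lam) * q[0]
    x2 = lam * p[1] + (1 - lam) * q[1]
    assert feasible(x1, x2)               # the convex combination stays feasible
```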

[Figures: objective function isoclines and the graphical solution. Rewriting the objective gives the isocline X2 = z/3 − (2/3)*X1; isoclines for z = 0, 6, 15, and 20 are parallel lines that shift outward as z grows. The optimum lies where the isocline with the highest feasible z still touches the feasibility region, here at z = 20.]

 The optimal LP solution occurs at so-called extreme points of the convex feasibility region. Extreme points are points that cannot be represented as a convex combination of other points belonging to the convex set.
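The extreme-point property suggests a brute-force method for tiny LPs: evaluate the objective at every extreme point and keep the best. A minimal Python sketch for the example above (the point list is read off the graphical illustration):

```python
def objective(x1, x2):
    """Objective of the example LP: Max z = 2*x1 + 3*x2."""
    return 2 * x1 + 3 * x2

# Extreme points of {(x1, x2) >= 0 : x1 + 2*x2 <= 10}
extreme_points = [(0, 0), (10, 0), (0, 5)]

best = max(extreme_points, key=lambda p: objective(*p))
print(best, objective(*best))  # (10, 0) 20
```

This matches the graphical solution: the isocline through (10, 0) has the highest objective level, z = 20.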

[Figures: basic solutions of the example LP. With one constraint,

Max  Objective
s.t. X1 + 2*X2 + S = 10
     X1, X2, S ≥ 0

each extreme point has at most one non-zero variable, e.g. (X1, X2, S) = (10, 0, 0), (0, 5, 0), or (0, 0, 10). With two constraints,

Max  Objective
s.t. X1 + 2*X2 + S1 = 10
     2*X1 + X2 + S2 = 10
     X1, X2, S1, S2 ≥ 0

each extreme point has at most two non-zero variables, e.g. (X1, X2, S1, S2) = (0, 0, 10, 10), (0, 5, 0, 5), (10/3, 10/3, 0, 0), or (5, 0, 5, 0).]

 Each extreme point has no more non-zero variables than there are constraints.

Unwanted solutions

Unbounded solution

The objective variable can reach plus infinity (maximization) or minus infinity (minimization). This is caused by misspecified or missing constraints. In the example below, there is no constraint to the right, so the objective value can go to infinity as x1 and x2 go to infinity.

[Figure: an unbounded LP. The feasibility region is open to the right, and the objective increases without bound in that direction.]

Infeasible solution

The feasibility region is empty. This is caused by misspecified constraints or an infeasible problem context. Below is an example where one restriction forces the two non-negative decision variables x1 and x2 into area A, while a second restriction forces them into area B. Since the two areas are mutually exclusive, the solution space is empty.

[Figure: an infeasible LP. One constraint restricts (x1, x2) to area A, a second constraint restricts it to area B; the two areas do not intersect.]

Multiple optima

Multiple optima are possible if the objective function isocline with the highest feasible objective level is parallel to a binding constraint. Any feasible point on this binding constraint then represents an optimal solution, so the number of optimal solutions is infinite. In the graphical illustration below, an infinite number of alternative solutions exists between P1 and P2.

[Figure: multiple optima. The isocline with the highest objective level is parallel to a binding constraint; every point on the constraint segment between the extreme points P1 and P2 is optimal.]

Degenerate solution

A degenerate solution is caused by redundant constraints and their implications. The illustration below demonstrates degeneracy graphically; three constraints are shown.

Suppose we have a maximization problem where the optimal solution occurs at the maximum feasible level of x1, namely at point P1.

[Figure: a degenerate LP maximizing x1, with a vertical constraint, a horizontal constraint, and a diagonal constraint; the vertical constraint, the diagonal constraint, and the x1 axis all pass through the optimal point P1.]

By removing the vertical constraint (see below), we would not alter the solution.

[Figure: the same problem with the vertical constraint removed; the optimum still occurs at P1.]

Similarly, one could remove the diagonal constraint (see below) without altering the solution.

[Figure: the same problem with the diagonal constraint removed instead; the optimum again occurs at P1.]

Graphically, it seems that redundant constraints cause no harm. However, degenerate solutions cause zero levels for some basic variables and zero shadow prices for some binding constraints, which may lead to an incorrect interpretation of the solution.


Algebraic Representation of LP

Constraints

Constraints are either
 technical constraints, or
 convenience / accounting constraints.

Constraints can have three different mathematical forms:

a) a1*X ≤ b1
b) a2*X ≥ b2
c) a3*X = b3

where a1, a2, and a3 are coefficients, b1, b2, and b3 are right-hand-side constants, and X is a non-negative variable. Constraint type a) is conventionally used as the standard type. Constraint types b) and c) can easily be converted into type a). In particular, a constraint of type b) becomes one of type a) by multiplying both sides by −1, yielding −a2*X ≤ −b2. Similarly, each constraint of type c) can be replaced by two constraints of type a): a3*X ≤ b3 and −a3*X ≤ −b3.

Introducing slack variables

Suppose we have n non-negative variables X1 through Xn contained in m inequality restrictions (standard type) and one objective function:

Max  c1*X1 + ... + cn*Xn
s.t. a11*X1 + ... + a1n*Xn ≤ b1
     ...
     am1*X1 + ... + amn*Xn ≤ bm
     X1, ..., Xn ≥ 0

This system can be converted to an equality system by adding a slack variable to each less-than-or-equal constraint and an unrestricted objective variable Z to the objective function. After this modification we have a total of n+1+m variables and m+1 equations:

Max  Z
s.t. −Z + c1*X1 + ... + cn*Xn = 0
     a11*X1 + ... + a1n*Xn + S1 = b1
     ...
     am1*X1 + ... + amn*Xn + Sm = bm
     X1, ..., Xn, S1, ..., Sm ≥ 0
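The augmentation is mechanical: append an m-by-m identity block to A for the slacks and give the slacks zero objective coefficients. A small sketch (function name and list representation are my own, not from the notes):

```python
def add_slacks(A, b, c):
    """Convert Max c*X s.t. A*X <= b, X >= 0 into equality form
    by appending one slack column per constraint (identity block)."""
    m = len(A)
    A_aug = [row + [1 if i == j else 0 for j in range(m)]
             for i, row in enumerate(A)]
    c_aug = c + [0] * m          # slacks contribute nothing to the objective
    return A_aug, b, c_aug

# Two-constraint example from the graphical section:
A_aug, b, c_aug = add_slacks([[1, 2], [2, 1]], [10, 10], [2, 3])
# A_aug = [[1, 2, 1, 0], [2, 1, 0, 1]], c_aug = [2, 3, 0, 0]
```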

Matrix Algebra

The text below is adapted from Chapter 3 of the McCarl and Spreen book.

Matrix Formulation of the Linear Programming Problem

The matrix version of the basic LP problem can be expressed as

Max  CX
s.t. AX ≤ b
     X ≥ 0

Here the term CX is maximized, where C is a 1xN vector of profit contributions and X is an Nx1 vector of decision variables. This maximization is subject to inequality constraints involving M resources, so that A is an MxN matrix giving resource use coefficients of the X's, and b is an Mx1 vector of right-hand sides or resource endowments. We further constrain X to be non-negative in all elements.

It is common to convert the LP inequality system to equalities by adding slack variables. These variables account for the difference between the resource endowment (b) and the use of resources by the variables (AX) at no cost to the objective function. Thus, define S = b − AX as the vector of slack variables. Each slack variable is restricted to be non-negative, thereby ensuring that resource use is always less than or equal to the resource endowment. One slack variable is added for each constraint. Rewriting the constraints gives AX + IS = b, where I is an MxM identity matrix and S is an Mx1 vector. The slack variables appear in the objective function with zero coefficients: we add a 1xM vector of zeros to the objective function and constrain the slack variables to be non-negative. The resultant augmented LP is

Max  CX + 0S
s.t. AX + IS = b
     X, S ≥ 0

Throughout the rest of this section we redefine the X vector to contain both the original X's and the slacks. Similarly, the new C vector contains the original C along with zeros for the slacks, and the new A matrix contains the original A matrix along with the identity matrix for the slacks. The resultant problem is

Max  CX
s.t. AX = b
     X ≥ 0

Solving LP's by Matrix Algebra

LP theory reveals that a solution to the LP problem will have a set of potentially non-zero variables equal in number to the number of constraints. Such a solution is called a basic solution, and the associated variables are commonly called basic variables. The other variables are set to zero and are called the nonbasic variables. Once the basic variables have been chosen, the X vector may be partitioned into XB, denoting the vector of basic variables, and XNB, denoting the vector of nonbasic variables. Subsequently, the problem is partitioned to become

Max  CB*XB + CNB*XNB
s.t. B*XB + ANB*XNB = b
     XB, XNB ≥ 0

The matrix B is called the basis matrix, containing the coefficients of the basic variables as they appear in the constraints. ANB contains the coefficients of the nonbasic variables. Similarly, CB and CNB are the objective function coefficients of the basic and nonbasic variables.

Now suppose we address the solution of this problem via the simplex method. The simplex solution approach relies on choosing an initial B matrix and then iteratively making improvements. Thus, we need to identify how the solution changes when we change the B matrix. First, let us look at how the basic solution variable values change. Rewrite the constraint equation as

B*XB = b − ANB*XNB.

Setting the nonbasic variables (XNB) to zero gives B*XB = b. This equation may be solved by premultiplying both sides by the inverse of the basis matrix (assuming non-singularity) to obtain the solution for the basic variables,

B^-1*B*XB = I*XB = B^-1*b,   or   XB = B^-1*b.
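A small numeric check, assuming the two-constraint example from earlier with basis {X1, X2}, so B is the 2x2 matrix of their constraint coefficients. The 2x2 inverse is written out by hand rather than taken from a library:

```python
def solve_2x2(B, b):
    """Solve B @ xB = b for a 2x2 basis matrix via the explicit inverse."""
    (a, c), (d, e) = B          # B = [[a, c], [d, e]]
    det = a * e - c * d
    assert det != 0, "basis matrix must be non-singular"
    # B^-1 = (1/det) * [[e, -c], [-d, a]]
    x1 = (e * b[0] - c * b[1]) / det
    x2 = (-d * b[0] + a * b[1]) / det
    return [x1, x2]

# Basis {X1, X2} of: X1 + 2*X2 + S1 = 10, 2*X1 + X2 + S2 = 10
xB = solve_2x2([[1, 2], [2, 1]], [10, 10])
# xB = [10/3, 10/3], the extreme point where both slacks are zero
```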

We may also examine what happens when the nonbasic variables are changed from zero. Multiplying both sides of the equation including the nonbasic variables by B^-1 gives

XB = B^-1*b − B^-1*ANB*XNB.

This expression gives the values of the basic variables in terms of the basic solution and the nonbasic variables. It is one of the two fundamental equations of LP. Writing the second term in summation form yields

XB = B^-1*b − Σ_{j∈NB} B^-1*aj*Xj

where NB is the set of nonbasic variables and aj is the column vector of nonbasic variable Xj from the original A matrix. This equation shows how the values of the basic variables are altered as the values of the nonbasic variables change. Namely, if all but one (Xη) of the nonbasic variables are left equal to zero, the equation becomes

XB = B^-1*b − B^-1*aη*Xη.

This gives a simultaneous system of equations showing how all of the basic variables are affected by changes in the value of a nonbasic variable. Furthermore, since the basic variables must remain non-negative, the solution must satisfy

XB = B^-1*b − B^-1*aη*Xη ≥ 0.

This permits the derivation of a bound on the maximum amount by which the nonbasic variable Xη can be increased while the basic variables remain non-negative. Namely, Xη may increase until one of the basic variables becomes zero. Suppose the first element of XB to become zero is XBi*. Setting it to zero gives

XBi* = (B^-1*b)i* − (B^-1*aη)i* * Xη = 0,

where ( )i* denotes the i*-th element of the vector. Solving for Xη yields

Xη = (B^-1*b)i* / (B^-1*aη)i*,

the value of Xη which causes the i*-th basic variable to become zero. Since Xη must be non-negative, we need only consider cases in which a basic variable decreases as the nonbasic variable increases; this restricts attention to cases where (B^-1*aη)i is positive. Thus, to preserve non-negativity of all variables, the maximum value of Xη is

Xη = min_i { (B^-1*b)i / (B^-1*aη)i : (B^-1*aη)i > 0 }.

This procedure is called the minimum ratio rule of linear programming. Given the identification of an entering nonbasic variable, the rule gives the maximum value that variable can take on. We also know that if i* is the row where the minimum is attained, then the basic variable in that row becomes zero; that variable can leave the basis, with Xη inserted in its place. Note that if the minimum ratio rule reveals a tie (i.e., the same minimum ratio occurs in more than one row), then more than one basic variable reaches zero at the same time. One of the tied rows is arbitrarily chosen as i*, and the new solution has at least one zero-valued basic variable, i.e., it is degenerate. Also note that if the coefficients (B^-1*aη)i are zero or negative for all i, and increasing the nonbasic variable increases the objective function, this indicates an unbounded solution, since the variable does not decrease the value of any basic variable.

Another question is which nonbasic variable should be increased. Resolution of this question requires consideration of the objective function. The objective functio...

