Exam 2 April 2018, questions and answers
Course: Introduction to Optimization
Institution: University of Waterloo


co250 Introduction to Optimization, Fall 2018 Midterm Exam 1 Solutions [1], Sections 001 and 002

#1 For each j ∈ {1, 2, . . . , n}, let xj denote the number of tons of pasta j to produce per month. Let x ∈ Rn denote the vector variable with jth entry equal to xj for all j ∈ {1, . . . , n}. In matrix-vector notation, the given production planning problem can be modelled by the Linear Program

max   c⊤x
subject to   Ax ≤ b
             x ≥ 0.
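As a sanity check, the matrix-vector model can be sketched in plain Python. The data A, b and c below is hypothetical (two pasta types, two resources), not from the exam.

```python
# Hypothetical instance of max c^T x s.t. Ax <= b, x >= 0.
# A[i][j] = tons of resource i consumed per ton of pasta j,
# b = monthly resource limits, c = profit per ton (all numbers made up).
A = [[1.0, 2.0],
     [3.0, 1.0]]
b = [10.0, 15.0]
c = [4.0, 5.0]

def is_feasible(x, A, b):
    """Check x >= 0 and Ax <= b componentwise."""
    if any(xj < 0 for xj in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

def objective(x, c):
    """Evaluate c^T x."""
    return sum(cj * xj for cj, xj in zip(c, x))

x = [4.0, 3.0]   # a candidate production plan (tons of each pasta)
print(is_feasible(x, A, b), objective(x, c))   # True 31.0
```

Any LP solver would then search over the feasible region for the plan maximizing the objective; the snippet only illustrates how the matrix-vector notation encodes the constraints.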

[1] COPYRIGHT: Reproduction, sharing or online posting of this document is strictly forbidden. ©University of Waterloo, L. Tunçel.

#2 (a) A function f : Rm → R is a linear function of variables y1 , y2 , . . . , ym if f (y) = a⊤ y for some a ∈ Rm .

(b) A function f : Rm → R is an affine function of variables y1 , y2 , . . . , ym if f (y) = a⊤ y + β for some a ∈ Rm and β ∈ R.

(c) A linear constraint for variables y1, y2, . . . , ym is a constraint of one of the following forms:

f(y) ≤ β   or   f(y) ≥ β   or   f(y) = β,

where f : Rm → R is a linear function and β ∈ R.

Comments: Equivalently, we could also say: A function f : Rm → R is a linear function of variables y1, y2, . . . , ym if we can express f as f(y1, y2, . . . , ym) = Σ_{i=1}^m ai yi for some a1, a2, . . . , am ∈ R. A function f : Rm → R is an affine function of variables y1, y2, . . . , ym if we can express f as f(y1, y2, . . . , ym) = g(y1, y2, . . . , ym) + c for some linear function g : Rm → R and some constant c ∈ R.
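The property that separates the two definitions can be illustrated numerically: a linear f satisfies f(s·y + z) = s·f(y) + f(z), while an affine g with a nonzero constant does not. The vectors and scalars below are arbitrary test data.

```python
# Linear vs. affine on arbitrary test data.
a = [2.0, -1.0, 3.0]
beta = 5.0

def f(y):                      # linear: f(y) = a^T y
    return sum(ai * yi for ai, yi in zip(a, y))

def g(y):                      # affine: g(y) = a^T y + beta
    return f(y) + beta

y = [1.0, 2.0, 3.0]
z = [0.5, -1.0, 4.0]
s = 2.0
sy_plus_z = [s * yi + zi for yi, zi in zip(y, z)]

print(f(sy_plus_z) == s * f(y) + f(z))   # True: f is linear
print(g(sy_plus_z) == s * g(y) + g(z))   # False: g is affine but not linear
```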


#3 We add the two constraints x1 + x2 + x3 ≤ 2(y1 + y2 + y3) and y1 + y2 + y3 ≤ 2(x1 + x2 + x3). To show this formulation is correct, we note that the total amount of pasta produced (in kilograms) is exactly x1 + x2 + x3 and the total amount of rice produced (in kilograms) is exactly y1 + y2 + y3. Thus the ratio of total rice produced to total pasta produced is exactly

(y1 + y2 + y3) / (x1 + x2 + x3),

and the constraint we are asked to model can be written as

1/2 ≤ (y1 + y2 + y3) / (x1 + x2 + x3) ≤ 2.

This is nonlinear, but we can fix this by multiplying through by the denominator when (x1 + x2 + x3) > 0, obtaining the constraints

(1/2)(x1 + x2 + x3) ≤ y1 + y2 + y3 ≤ 2(x1 + x2 + x3).

So far, we established that our model is correct as long as (x1 + x2 + x3) > 0. When (x1 + x2 + x3) = 0, our linear constraints imply y1 = y2 = y3 = 0 (as desired, and as discussed in lectures under the title "Ratios, Mixtures, and Percentages"). Therefore, our mathematical model as an LP is correct.
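A quick numerical spot-check of this linearization (the sample points below are made up, all nonnegative as in the production setting):

```python
# Check that the linearized constraints (1/2)(x1+x2+x3) <= y1+y2+y3 <= 2(x1+x2+x3)
# agree with the ratio condition 1/2 <= (y1+y2+y3)/(x1+x2+x3) <= 2 when the pasta
# total is positive, and force zero rice when the pasta total is zero.
def linearized_ok(x, y):
    sx, sy = sum(x), sum(y)
    return 0.5 * sx <= sy <= 2 * sx

def ratio_ok(x, y):
    sx, sy = sum(x), sum(y)
    if sx == 0:
        return sy == 0          # intended meaning when no pasta is produced
    return 0.5 <= sy / sx <= 2

samples = [
    ([1, 2, 3], [2, 2, 2]),    # ratio 1: both hold
    ([1, 1, 1], [4, 2, 1]),    # ratio 7/3: both fail
    ([0, 0, 0], [0, 0, 0]),    # zero pasta, zero rice: both hold
    ([0, 0, 0], [1, 0, 0]),    # zero pasta, positive rice: both fail
]
print(all(linearized_ok(x, y) == ratio_ok(x, y) for x, y in samples))   # True
```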

Comments: Some students did not handle the case (x1 + x2 + x3 ) = 0 properly. This seems to be the result of not attending or following lectures.


#4 (a) This is an IP because the objective function is a linear function in variables y1, . . . , ym, the constraints are linear constraints in variables y1, . . . , ym, x1, . . . , xn, and variable y1 is restricted to take integer values.

(b) This is an NLP because variable xi is multiplied by variable xj in each constraint and so the constraints are nonlinear.

(c) This is an NLP because the constraints contain an ______ operator and are therefore nonlinear.

#5 (a) The statement is true. Suppose without loss of generality that the LP is in standard inequality form

max c⊤x subject to Ax ≤ b, x ≥ 0.

We will show that if the feasible region is bounded then the LP must be bounded. If the feasible region is bounded then there exists M > 0 such that −M 1l ≤ x ≤ M 1l for all feasible x. So, for any feasible x,

c⊤x ≤ |c⊤x| = |Σ_{i=1}^n ci xi| ≤ Σ_{i=1}^n |ci||xi| ≤ Σ_{i=1}^n |ci||M| = |M| Σ_{i=1}^n |ci|.

It follows that the LP problem is not unbounded, since any feasible solution has objective value at most |M| Σ_{i=1}^n |ci|.
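The bound in part (a) can be spot-checked on random data; the vector c and the box size M below are arbitrary.

```python
# Spot-check: if -M <= x_i <= M for each i, then c^T x <= |M| * sum_i |c_i|.
import random

random.seed(0)
n, M = 4, 3.0
c = [random.uniform(-2, 2) for _ in range(n)]
bound = abs(M) * sum(abs(ci) for ci in c)

ok = True
for _ in range(1000):
    x = [random.uniform(-M, M) for _ in range(n)]      # a point in the box
    ok = ok and sum(ci * xi for ci, xi in zip(c, x)) <= bound + 1e-12
print(ok)   # True
```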

(b) The statement is true. Note that for c = 0, any feasible solution to the LP is optimal. Since {x ∈ Rn : Ax = b, x ≥ 0} is unbounded, it is also nonempty and so there exists x̂ such that Ax̂ = b, x̂ ≥ 0. So, for c = 0, x̂ is a feasible and therefore an optimal solution to the LP.

(c) The statement is true. Denote the matrix A and the right-hand side b. Label the columns of A from 1 to 12. Consider B = {1, 3, 11, 12}. Then A_B is invertible (it has determinant 2) and the unique solution to A_B x_B = b is x̄_B. Also x̄_N = 0. Therefore, x̄ is a basic solution for basis B.

Comments: In part (c), we show that there is a basis B of A such that x̄ is the bfs determined by B. One way to look at the ingredients of such an argument is:
• check that Ax̄ = b (therefore x̄ is a solution of Ax = b),
• check that {Aj : x̄j ≠ 0} is linearly independent,
• check that rank(A) = m.
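These three checks can be sketched in code. Since the exam's matrix A, vector b and point x̄ are not reproduced here, the small data below is a hypothetical stand-in.

```python
# Hypothetical stand-in: m = 2 rows, three columns, xbar supported on columns 0, 1.
A = [[1, 0, 1],
     [0, 1, 1]]
b = [3, 2]
xbar = [3, 2, 0]

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

# check 1: A xbar = b, so xbar solves Ax = b
check1 = matvec(A, xbar) == b

# check 2: the columns A_j with xbar_j != 0 are linearly independent
support = [j for j, xj in enumerate(xbar) if xj != 0]
cols = [[row[j] for row in A] for j in support]
det = cols[0][0] * cols[1][1] - cols[0][1] * cols[1][0]   # 2x2 determinant
check2 = det != 0

# check 3: rank(A) = m; the two independent columns above already give rank 2
check3 = check2

print(check1, check2, check3)   # True True True
```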


#6 (a) We pick f : R → R given by f(x) = x(1 − x) and the resulting constraints are xj(1 − xj) = 0 ∀j ∈ {1, 2, . . . , n}.

(b) We pick f : R → R given by f(x) = x²(1 − x)² and the resulting constraints are xj²(1 − xj)² ≤ 0 ∀j ∈ {1, 2, . . . , n}.

(c) We pick f : R → R given by f(x) = sin(πx) and the resulting constraint is sin(πx1) = 0.

(d) We pick f : R → R given by f(x) = sin²(πx) and the resulting constraint is sin²(πx1) ≤ 0.

Comments: There are many other options; for example, for part (d), we could have picked f : R → R given by f(x) = 1 + cos(π + 2πx) and the resulting constraint is 1 + cos(π + 2πx1) ≤ 0.
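A quick spot-check that each chosen function vanishes (or is nonpositive) exactly where intended:

```python
# x(1-x) = 0 exactly on {0, 1}; x^2 (1-x)^2 <= 0 exactly on {0, 1};
# sin(pi x) = 0 exactly on the integers.
import math

def f_a(x): return x * (1 - x)
def f_b(x): return x**2 * (1 - x)**2
def f_c(x): return math.sin(math.pi * x)

print(f_a(0) == 0 and f_a(1) == 0 and f_a(0.5) != 0)           # True
print(f_b(0) <= 0 and f_b(1) <= 0 and f_b(0.5) > 0)            # True
# sin(pi * k) vanishes for integer k (up to floating-point error)
print(all(abs(f_c(k)) < 1e-9 for k in range(-3, 4)) and abs(f_c(0.5)) > 0.9)   # True
```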


#7 (a) This is not an IP, so we reformulate it as one:

max   c⊤x
subject to   xj = 2zj − 1   ∀j ∈ {1, . . . , n}
             0 ≤ zj ≤ 1   ∀j ∈ {1, . . . , n}
             zj ∈ Z   ∀j ∈ {1, . . . , n}.

(b) Add the constraint d⊤x ≤ n − 1.

Comments: Note that the optimization problem in part (a) is not an IP. As we explained in class, the only constraints, other than the linear constraints, that are allowed in IPs are:
• xj ∈ Z (or in vector version x ∈ Zn),
• "xj is integer" or "xj integer",
• xj ∈ {0, 1} (or in vector version x ∈ {0, 1}n).
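The substitution in part (a) in miniature: integer z with 0 ≤ z ≤ 1 is exactly z ∈ {0, 1}, and x = 2z − 1 maps this set exactly onto {−1, 1}.

```python
# Integer points of [0, 1] and their images under x = 2z - 1.
feasible_z = [z for z in range(-2, 4) if 0 <= z <= 1]   # -> [0, 1]
images = sorted(2 * z - 1 for z in feasible_z)          # -> [-1, 1]
print(feasible_z, images)   # [0, 1] [-1, 1]
```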


#8 (a) If we were allowed to add m constraints, we would simply add [−A]x ≤ [−b] to the given constraints. To add only one more linear constraint, we combine these m constraints into one, by summing them together. So, the one constraint to add is −1l⊤Ax ≤ −1l⊤b. Let s := b − Ax. Then our new constraint says Σ_{i=1}^m si ≤ 0. Since Ax ≤ b iff s ≥ 0, we conclude

Ax = b   ⟺   Ax ≤ b and −1l⊤Ax ≤ −1l⊤b.

(b)

From graph H, construct graph G = (V, E) by adjoining a clique C = (W, D) of size |U| and adding all edges between U and W. (Recall: a clique of size k has k vertices, and every pair of its vertices is joined by an edge.) In other words, our new graph G has vertex set V = U ∪ W and edge set E = F ∪ D ∪ {xy : x ∈ U, y ∈ W}. Assign costs c ∈ RE to the edges of G as follows: each edge f ∈ F keeps its original cost wf from H, and all other edges have cost 0. Provide as input to the min-cost perfect matching algorithm our graph G and the described cost vector c. Note that the algorithm will always output a perfect matching and never output "infeasible", since matching each vertex of U to a distinct vertex of W yields a perfect matching in G. Let M ⊆ E be the output of our min-cost perfect matching algorithm on G. We claim that M′ := M ∩ F is a minimum cost matching for H.

We now argue this construction works. It is clear M′ is a matching in H. Suppose for a contradiction that M′ is not a minimum cost matching in H. Then there must exist some matching M″ of H such that Σ_{e∈M″} we < Σ_{e∈M′} we. Let A be the vertices in H that are not matched by M″. We can extend M″ to a perfect matching in G by matching A to a set B ⊆ W of |A| nodes of C and then matching up the remaining nodes of W \ B among themselves. This can indeed be done because |U ∪ W| = 2|U| is even, so we have an even number of nodes left to match up in our extension. Call this new matching N. Note that Σ_{e∈N} ce = Σ_{e∈M″} we < Σ_{e∈M′} we = Σ_{e∈M} ce. Thus N is a perfect matching in G that is cheaper than M; this contradicts the fact that M was a minimum cost perfect matching of G. We conclude that M′ is a minimum cost matching in H.
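The reduction in part (b) can be spot-checked by brute force on a tiny graph. The graph H and weights w below are made up, and exhaustive enumeration stands in for the min-cost perfect matching algorithm.

```python
# Brute-force check: build G from a tiny hypothetical H = (U, F) by adjoining a
# zero-cost clique on W with all U-W edges, then compare the min-cost perfect
# matching in G (restricted to F) against the min-cost matching in H.
from itertools import combinations

U = [0, 1, 2, 3]
w = {(0, 1): -2, (1, 2): -1, (2, 3): -3, (0, 3): 1}   # negative costs: non-trivial

W = [4, 5, 6, 7]                      # clique of size |U|
E = dict(w)
for a, b in combinations(W, 2):
    E[(a, b)] = 0                     # clique edges, cost 0
for a in U:
    for b in W:
        E[(a, b)] = 0                 # all U-W edges, cost 0

def matchings(edges):
    """Yield all matchings (lists of pairwise vertex-disjoint edges)."""
    edges = list(edges)
    def rec(i, used, cur):
        if i == len(edges):
            yield cur
            return
        yield from rec(i + 1, used, cur)          # skip edge i
        a, b = edges[i]
        if a not in used and b not in used:       # take edge i if disjoint
            yield from rec(i + 1, used | {a, b}, cur + [edges[i]])
    yield from rec(0, frozenset(), [])

def cost(m, c):
    return sum(c[e] for e in m)

best_H = min(matchings(w), key=lambda m: cost(m, w))
perfect = [m for m in matchings(E) if len(m) == (len(U) + len(W)) // 2]
best_G = min(perfect, key=lambda m: cost(m, E))
restricted = [e for e in best_G if e in w]        # M' := M ∩ F

print(cost(restricted, w) == cost(best_H, w))     # True: the reduction preserves cost
```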


Comments: Below we give a much more detailed explanation of the proof for part (a). We can let P := {x ∈ Rn : Ax = b, x ≥ 0} and let Q := {x ∈ Rn : Ax ≤ b, −1l⊤Ax ≤ −1l⊤b, x ≥ 0}. We will show that P = Q, and so our formulation in SIF is correct.

P ⊆ Q: Let x̂ ∈ P. Then x̂ clearly satisfies Ax̂ ≤ b and x̂ ≥ 0, and −1l⊤Ax̂ = −1l⊤b, so the constraint −1l⊤Ax̂ ≤ −1l⊤b holds as well; hence x̂ ∈ Q.

Q ⊆ P: Let x̃ ∈ Q. Then x̃ clearly satisfies x̃ ≥ 0. Suppose for a contradiction that Ax̃ ≠ b. Since we know Ax̃ ≤ b, there exists k ∈ [m] such that Σ_{j=1}^n akj x̃j < bk. Assume without loss of generality that k = 1. Then

1l⊤Ax̃ = Σ_{i=1}^m ( Σ_{j=1}^n aij x̃j )
       = Σ_{j=1}^n a1j x̃j + Σ_{i=2}^m ( Σ_{j=1}^n aij x̃j )
       < b1 + Σ_{i=2}^m ( Σ_{j=1}^n aij x̃j )      (by assumption)
       ≤ b1 + b2 + · · · + bm                      (since Ax̃ ≤ b)
       = 1l⊤b.

Multiplying by −1, we get −1l⊤Ax̃ > −1l⊤b, contradicting the fact that x̃ ∈ Q. We conclude that Ax̃ = b and therefore x̃ ∈ P.
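The equivalence P = Q can also be spot-checked numerically; the matrix A, vector b and test points below are hypothetical.

```python
# Check that Ax <= b together with -1^T A x <= -1^T b (i.e. 1^T A x >= 1^T b)
# holds exactly when Ax = b.
A = [[1, 2],
     [2, 1]]
b = [5, 4]

def Ax(x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def in_Q(x):
    """Membership in Q (ignoring x >= 0, which both test points satisfy)."""
    y = Ax(x)
    return all(yi <= bi for yi, bi in zip(y, b)) and sum(y) >= sum(b)

x_eq = [1, 2]    # Ax = [5, 4] = b, so x_eq is in P
x_lt = [1, 1]    # Ax = [3, 3] < b componentwise, so x_lt is not in P
print(in_Q(x_eq), in_Q(x_lt))   # True False
```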
