Lecture 1 handout - Lecture notes 1

Course: Optimization Methods
Institution: Massachusetts Institute of Technology


6.255/15.093 Optimization Methods Lecture 1: Introduction to Linear Optimization


Structure of Class

Administrivia

- Staff
- Course website
- Recitations
- Textbook
- Important dates



Course overview

- Linear Optimization (LO): Lec. 1-9
- Network Flows: Lec. 10-11
- Midterm exam
- Discrete Optimization: Lec. 12-15
- Dynamic Optimization: Lec. 16
- Nonlinear Optimization (NLO): Lec. 17-21
- Convex and Semidefinite Optimization: Lec. 22-25
- Final exam


Requirements


- Homework Sets: 30%
- Midterm Exam: 30%
- Final Exam: 40%
- Class Participation: Bonus points
- Use of MATLAB, Julia, Python, etc. for solving optimization problems


Main expectations

- Understand the essential features of the different classes of optimization methods presented.
- Identify the most suitable optimization approach for a given problem.
- Appreciate the interplay of geometric, algebraic, and computational aspects.


Policy on individual work

Your assignments and write-ups must represent your own individual work.

- You may discuss HW problems with other students.
- Do not copy (or allow others to copy) your work.
- You cannot consult or submit work from previous years.
- You should write solutions on your own.

Any violation of this policy is a serious offense, with suitable consequences (e.g., grade reduction, delay of graduation, or expulsion).


Lecture Outline


- What is Optimization?
- History of Optimization
- Where does LO Arise?
- Examples of Formulations


History of Optimization


Fermat, 1638; Newton, 1670: minimize f(x) over a scalar x by setting

    df(x)/dx = 0.

Euler, 1755: minimize f(x1, ..., xn) by setting

    ∇f(x) = 0.



Lagrange, 1797: constrained minimization

    min  f(x1, ..., xn)
    s.t. gk(x1, ..., xn) = 0,   k = 1, ..., m.

Euler, Lagrange: problems in infinite dimensions, the calculus of variations.


Nonlinear Optimization


    min  f(x1, ..., xn)
    s.t. g1(x1, ..., xn) ≤ 0
         ...
         gm(x1, ..., xn) ≤ 0.


What is Linear Optimization?

Formulation

Linear Optimization I

The objective and the constraints are linear:

    minimize   3x1 + x2
    subject to x1 + 2x2 ≥ 2
               2x1 + x2 ≥ 3
               x1 ≥ 0, x2 ≥ 0
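A small LP like this can be solved by enumerating the vertices of the feasible region. The following pure-Python sketch (an illustration, not part of the lecture; the function name is made up) intersects constraint boundaries pairwise by Cramer's rule and keeps the feasible intersection points:

```python
from itertools import combinations

def solve_lp_2d(obj, cons):
    """Minimize obj[0]*x1 + obj[1]*x2 over {a*x1 + b*x2 >= r : (a, b, r) in cons}
    by enumerating feasible vertices (assumes the optimum is finite)."""
    def intersect(c1, c2):
        a1, b1, r1 = c1
        a2, b2, r2 = c2
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            return None                 # parallel boundary lines
        return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

    def feasible(p):
        return all(a * p[0] + b * p[1] >= r - 1e-9 for a, b, r in cons)

    verts = [p for c1, c2 in combinations(cons, 2)
             if (p := intersect(c1, c2)) is not None and feasible(p)]
    best = min(verts, key=lambda p: obj[0] * p[0] + obj[1] * p[1])
    return best, obj[0] * best[0] + obj[1] * best[1]

# The example above: min 3x1 + x2, s.t. x1 + 2x2 >= 2, 2x1 + x2 >= 3, x >= 0.
point, value = solve_lp_2d((3, 1), [(1, 2, 2), (2, 1, 3), (1, 0, 0), (0, 1, 0)])
print(point, value)   # optimum at (0, 3) with value 3
```

This brute-force approach works only for two variables; it previews the idea, made precise later in the course, that an LP's optimum is attained at a vertex of the feasible polyhedron.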



Linear Optimization II

In matrix form:

    minimize   c′x
    subject to Ax ≥ b
               x ≥ 0

where

    c = (3, 1)′,   x = (x1, x2)′,   b = (2, 3)′,   A = [ 1  2 ]
                                                       [ 2  1 ]

History of LO

The pre-algorithmic period

- Fourier, 1826: method for solving systems of linear inequalities.
- de la Vallée Poussin: simplex-like method for objective functions with absolute values.
- Kantorovich, Koopmans, 1930s: formulations and solution methods.
- von Neumann, 1928: game theory, duality.
- Farkas, Minkowski, Carathéodory, 1870-1930: foundations.


The modern period

- George Dantzig, 1947: the simplex method.
- 1950s: applications.
- 1960s: large-scale optimization.
- 1970s: complexity theory.
- Khachiyan, 1979: the ellipsoid algorithm.
- Karmarkar, 1984: interior point algorithms.


Where do LOPs Arise?

Wide applicability

All over the place! Some examples:

- Transportation
- Telecommunications
- Air traffic control
- Crew scheduling
- Manufacturing
- Digital circuit design
- Typesetting (TeX, LaTeX)
- Machine learning
- Medicine
- Finance


Transportation - setup

Optimal Transportation

- m plants, n warehouses
- si: supply of the ith plant, i = 1, ..., m
- dj: demand of the jth warehouse, j = 1, ..., n
- cij: cost of transportation i → j


Transportation - formulation

Let xij be the number of units to send from i to j.

    min  Σ_{i=1..m} Σ_{j=1..n} cij xij
    s.t. Σ_{i=1..m} xij = dj,   j = 1, ..., n
         Σ_{j=1..n} xij = si,   i = 1, ..., m
         xij ≥ 0
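The constraint structure is easy to assemble programmatically. A minimal sketch for a 2-plant, 3-warehouse instance, flattening xij row-major so any LP solver can consume it (the supplies, demands, and costs below are made-up illustrative data, not from the lecture):

```python
m, n = 2, 3
s = [30.0, 70.0]                  # supplies s_i (hypothetical data)
d = [20.0, 30.0, 50.0]            # demands d_j; balanced: sum(s) == sum(d)
c = [[8.0, 6.0, 10.0],
     [9.0, 12.0, 13.0]]           # unit costs c_ij (hypothetical data)

# Flatten x_ij row-major: the variable index of x_ij is i*n + j.
A_eq, b_eq = [], []
for j in range(n):                # demand rows: sum over i of x_ij = d_j
    row = [0.0] * (m * n)
    for i in range(m):
        row[i * n + j] = 1.0
    A_eq.append(row)
    b_eq.append(d[j])
for i in range(m):                # supply rows: sum over j of x_ij = s_i
    row = [0.0] * (m * n)
    for j in range(n):
        row[i * n + j] = 1.0
    A_eq.append(row)
    b_eq.append(s[i])
cost = [c[i][j] for i in range(m) for j in range(n)]
```

Note the structure this exposes: each variable xij appears in exactly two equality rows (one demand row, one supply row), which is what makes transportation problems a special, efficiently solvable class of LPs.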


Sorting through LO


Given n numbers c1, c2, ..., cn, the order statistics c(1), c(2), ..., c(n) satisfy c(1) ≤ c(2) ≤ ... ≤ c(n). Use LO to find Σ_{i=1..k} c(i), the sum of the k smallest numbers:

    min  Σ_{i=1..n} ci xi
    s.t. Σ_{i=1..n} xi = k
         0 ≤ xi ≤ 1,   i = 1, ..., n
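This LP always has an optimal solution at a 0/1 vertex: set xi = 1 on the k smallest ci. A quick sketch (the function name is illustrative):

```python
def k_smallest_sum(c, k):
    """Optimal value of: min sum(c_i x_i) s.t. sum(x_i) = k, 0 <= x_i <= 1.
    The LP has an integral optimal vertex: x_i = 1 on the k smallest c_i."""
    x = [0.0] * len(c)
    for i in sorted(range(len(c)), key=lambda i: c[i])[:k]:
        x[i] = 1.0
    return x, sum(ci * xi for ci, xi in zip(c, x))

x, val = k_smallest_sum([5, 1, 4, 2, 3], 2)
print(x, val)   # x = [0, 1, 0, 1, 0], value 1 + 2 = 3.0
```

The point of the slide is the converse direction: the LP's optimal value recovers the order-statistic sum without sorting explicitly, connecting a combinatorial task to linear optimization.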


Investment under taxation

- You have purchased si shares of stock i at price qi, i = 1, ..., n.
- The current price of stock i is pi.
- You expect that the price of stock i one year from now will be ri.
- You pay a capital-gains tax at the rate of 30% on any capital gains at the time of the sale.
- You want to raise C amount of cash after taxes.
- You pay 1% in transaction costs.



Example: You sell 1,000 shares at $50 per share, having bought them at $30 per share. The net cash is

    50 × 1,000 − 0.30 × (50 − 30) × 1,000 − 0.01 × 50 × 1,000 = $43,500.
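The arithmetic above can be packaged as a small helper (the function name is hypothetical; the 30% tax and 1% fee rates are from the slide):

```python
def net_cash(shares, buy_price, sell_price, tax_rate=0.30, fee_rate=0.01):
    """Cash raised after capital-gains tax and transaction costs,
    as in the slide's example (assumes sell_price >= buy_price)."""
    proceeds = sell_price * shares
    tax = tax_rate * (sell_price - buy_price) * shares   # tax on the gain only
    fee = fee_rate * proceeds                            # fee on the full sale
    return proceeds - tax - fee

print(net_cash(1_000, 30, 50))   # 50,000 - 6,000 - 500 = 43,500
```

Note the two charges apply to different bases: the tax hits only the gain (p − q), while the transaction fee hits the full proceeds p. This is exactly why both terms appear separately in the LP constraint on the next slide.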


LO Formulation:

    max  Σ_{i=1..n} ri (si − xi)
    s.t. Σ_{i=1..n} pi xi − 0.30 Σ_{i=1..n} (pi − qi) xi − 0.01 Σ_{i=1..n} pi xi ≥ C
         0 ≤ xi ≤ si


Optimal Investment

- Five investment choices: A, B, C, D, E.
- A, C, and D are available in 2019.
- B will be available in 2020.
- E will be available in 2021.
- Cash earns 6% per year.
- No borrowing allowed.
- $1,000,000 available in 2019.


Cash Flow per Dollar Invested

    Year      A         B         C         D         E
    2019    −1.00      0        −1.00     −1.00      0
    2020    +0.30     −1.00     +1.10      0         0
    2021    +1.00     +0.30      0         0        −1.00
    2022     0        +1.00      0        +1.75     +1.40

    LIMIT  $500,000   None     $500,000   None     $750,000


Decision Variables

A, ..., E: amounts invested, in $ millions; Casht: cash held in period t, t = 1, 2, 3.

    max  1.06 Cash3 + 1.00 B + 1.75 D + 1.40 E
    s.t. A + C + D + Cash1 ≤ 1
         Cash2 + B ≤ 0.3 A + 1.1 C + 1.06 Cash1
         Cash3 + 1.0 E ≤ 1.0 A + 0.3 B + 1.06 Cash2
         A ≤ 0.5, C ≤ 0.5, E ≤ 0.75
         A, ..., E ≥ 0, Casht ≥ 0.

Solution

Optimal Solution:

    A = 0.5M,  B = 0,  C = 0,  D = 0.5M,  E = 0.659M,
    Cash1 = 0,  Cash2 = 0.15M,  Cash3 = 0

Optimal Objective: 1.7976M
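The reported optimum can be checked by plugging it back into the constraints and the objective (a consistency check, not a solver):

```python
# Verify the reported optimal solution of the investment LP (all amounts
# in $ millions; constraints as in the formulation on the previous slide).
A, B, C, D, E = 0.5, 0.0, 0.0, 0.5, 0.659
cash1, cash2, cash3 = 0.0, 0.15, 0.0
eps = 1e-9

assert A + C + D + cash1 <= 1 + eps                          # 2019 budget
assert cash2 + B <= 0.3 * A + 1.1 * C + 1.06 * cash1 + eps   # 2020 cash flow
assert cash3 + E <= 1.0 * A + 0.3 * B + 1.06 * cash2 + eps   # 2021 cash flow
assert A <= 0.5 and C <= 0.5 and E <= 0.75                   # investment limits

objective = 1.06 * cash3 + 1.00 * B + 1.75 * D + 1.40 * E
print(objective)   # 1.7976
```

Note that the 2019 and 2021 constraints hold with equality: all money is put to work, and E is funded exactly by A's 2021 payout plus the grown 2020 cash. Binding constraints at the optimum are a recurring theme in LO.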


Manufacturing

- n products, m raw materials
- cj: profit of product j
- bi: available units of material i
- aij: number of units of material i that product j needs in order to be produced

Formulation

Let xj be the amount of product j produced.

    max  Σ_{j=1..n} cj xj
    s.t. a11 x1 + ... + a1n xn ≤ b1
         ...
         am1 x1 + ... + amn xn ≤ bm
         xj ≥ 0,   j = 1, ..., n
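A candidate production plan can be checked against this formulation directly. A small pure-Python sketch with made-up data (the helper name and the instance are illustrative, not from the lecture):

```python
def plan_profit(a, b, c, x, eps=1e-9):
    """Profit of production plan x, or None if it violates a material limit.
    a[i][j] = units of material i needed per unit of product j."""
    m, n = len(a), len(c)
    for i in range(m):
        if sum(a[i][j] * x[j] for j in range(n)) > b[i] + eps:
            return None               # material i over-used: plan infeasible
    return sum(c[j] * x[j] for j in range(n))

# Hypothetical instance: 2 materials, 2 products.
a = [[1.0, 2.0], [3.0, 1.0]]
b = [10.0, 15.0]
c = [4.0, 5.0]
print(plan_profit(a, b, c, [2.0, 3.0]))   # feasible: profit 23.0
print(plan_profit(a, b, c, [9.0, 1.0]))   # infeasible: None
```

Checking a plan is trivial; the LP's job is to search over all feasible plans for the one with maximum profit.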


Separating data sets

Data classification

- Task: given a set of measurements ("features") of an object (e.g., color, weight, etc.), determine its type.
- We have labeled samples {a1, a2, ..., an} and {b1, b2, ..., bm}, where ai, bj ∈ R^d (training set).
- Find a (possibly nonlinear) classifier to distinguish the sets.

Classification via LO

For simplicity, consider the case d = 2 and a linear classifier

    c0 + c1 x1 + c2 x2.

Decision variables: c0, c1, c2. A perfect classifier must satisfy the inequalities:

    c0 + c1 ai1 + c2 ai2 ≥ 1,    i = 1, ..., n
    c0 + c1 bj1 + c2 bj2 ≤ −1,   j = 1, ..., m
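These inequalities are linear in the decision variables (c0, c1, c2), so finding a perfect classifier is an LP feasibility problem. A minimal check on toy data (the points and the candidate classifier below are made up for illustration):

```python
def perfectly_separates(c0, c1, c2, a_pts, b_pts):
    """True if the linear classifier satisfies the margin inequalities:
    c0 + c1*x1 + c2*x2 >= 1 on a_pts and <= -1 on b_pts."""
    return (all(c0 + c1 * x1 + c2 * x2 >= 1 for x1, x2 in a_pts) and
            all(c0 + c1 * x1 + c2 * x2 <= -1 for x1, x2 in b_pts))

a_pts = [(2.0, 2.0), (3.0, 1.0)]    # toy samples from the first class
b_pts = [(0.0, 0.0), (-1.0, 1.0)]   # toy samples from the second class
print(perfectly_separates(-1.0, 1.0, 1.0, a_pts, b_pts))   # True
```

The margins ±1 (rather than just > 0 and < 0) make the feasible set closed and rule out the trivial classifier c0 = c1 = c2 = 0; any strictly separating line can be rescaled to meet them.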


Scheduling

- A hospital wants to make a weekly nightshift schedule for its nurses.
- dj: demand for nurses on day j, j = 1, ..., 7.
- Every nurse works 5 days in a row.
- Goal: hire the minimum number of nurses.
- Decision variables xj: number of nurses starting their week on day j.


Scheduling - Formulation

    min  Σ_{j=1..7} xj
    s.t. x1                + x4 + x5 + x6 + x7 ≥ d1
         x1 + x2                + x5 + x6 + x7 ≥ d2
         x1 + x2 + x3                + x6 + x7 ≥ d3
         x1 + x2 + x3 + x4                + x7 ≥ d4
         x1 + x2 + x3 + x4 + x5                ≥ d5
              x2 + x3 + x4 + x5 + x6           ≥ d6
                   x3 + x4 + x5 + x6 + x7      ≥ d7
         xj ≥ 0
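The left-hand sides have a cyclic structure: the nurses on duty on day j are those who started on day j or on one of the previous four days, wrapping around the week. The 7×7 coverage matrix can be built programmatically (a sketch; the function name is illustrative):

```python
def nurse_coverage_matrix():
    """A[j][t] = 1 if a nurse starting on day t+1 is on duty on day j+1
    (each nurse works 5 consecutive days, cyclically over the week)."""
    A = [[0] * 7 for _ in range(7)]
    for j in range(7):              # day of the week (0-indexed)
        for back in range(5):       # started today or up to 4 days ago
            A[j][(j - back) % 7] = 1
    return A

A = nurse_coverage_matrix()
# Day 1's row matches the first constraint: x1 + x4 + x5 + x6 + x7 >= d1.
print(A[0])   # [1, 0, 0, 1, 1, 1, 1]
```

Each row and each column of the matrix has exactly five ones: every day is covered by five start-days, and every start-day covers five days. Note the LP may return fractional xj; the integer version of this problem appears in the discrete optimization lectures.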


How to formulate LO?

Key Messages

1. Define decision variables clearly.
2. Write the constraints and the objective function.
3. No fully systematic methods are available.

What is a good LO formulation? One with a small number of variables and constraints, and a sparse constraint matrix A.
