
MATH 353: ODE AND PDE NOTES I (revised)

παιδείας ἀρχὴ ὀνομάτων ἐπίσκεψις (learning begins with what words mean)

Stephanos Venakides

February 8, 2018

Contents

1 SOME CALCULUS CONCEPTS
  1.1 Infinitesimals and derivatives
  1.2 Partial fractions
  1.3 Integration by parts
  1.4 Taylor series and their convergence

2 CONCEPTS FROM LINEAR ALGEBRA
  2.1 Scalars
  2.2 Vectors and vector spaces
  2.3 Operators
  2.4 Linear expressions and linear equations
  2.5 Eigenvalues and eigenvectors of a linear operator

3 ORDINARY DIFFERENTIAL EQUATIONS (ODE) OF FIRST ORDER
  3.1 What is an ordinary differential equation (or ODE for short)?
  3.2 What is one looking for when "solving a first order ODE"?
  3.3 Is the solution of a first order ODE generally given by an explicit formula? Solvable ODE
  3.4 Separable ODE
  3.5 Linear ODE
  3.6 Exact ODE
    3.6.1 Integrating factor for achieving exactness
  3.7 Homogeneous ODE (do not confuse with "linear homogeneous")
  3.8 Method of substitution
  3.9 Mathematical Modelling with Differential Equations
  3.10 The initial value problem (IVP) for an ODE
  3.11 Approximate solutions of an IVP
  3.12 Main theorem
  3.13 Bifurcation

4 AUTONOMOUS FIRST ORDER ODE SYSTEMS

5 LINEAR SECOND ORDER ODE
  5.1 Review of complex numbers and hyperbolic functions
  5.2 Linear homogeneous 2nd order ODE with constant coefficients
    5.2.1 Mass-spring system
    5.2.2 Forced Harmonic Oscillator and Resonance
  5.3 Linear operators
  5.4 Linear equations and the principle of superposition of solutions
  5.5 What is a linear second order ODE
  5.6 Second order Linear ODE with Variable Coefficients
  5.7 Higher order linear ODE

6 BOUNDARY VALUE PROBLEMS FOR LINEAR SECOND ORDER ODE
  6.1 Periodic boundary condition
  6.2 Summary

7 SERIES SOLUTIONS OF ODE
  7.1 Sequences, series and convergence
  7.2 Using Power Series to solve ODE
    7.2.1 Ordinary points
    7.2.2 Introduction to regular singular points
  7.3 Asymptotic series

8 INITIAL VALUE PROBLEMS: THE LAPLACE TRANSFORM
  8.1 Definition and properties
  8.2 Use of the transform

9 VECTOR SPACE BASICS
  9.1 Basic definitions
  9.2 The inner product of two vectors
  9.3 Subspaces and their orthogonal complement

10 REVIEW OF MATRICES
  10.1 Matrix Operations
    10.1.1 Observations on the product AB = C
  10.2 Four basic subspaces and their relation
  10.3 The equation Mx = b
    10.3.1 Square n × n matrices. Problem: Ax = b, find x
  10.4 The eigenvalue problem for a square matrix L
  10.5 Review of determinants

11 LINEAR SUPERPOSITION IN AN INFINITE DIMENSIONAL VECTOR SPACE
  11.1 Linear superposition of infinitely many functions or with respect to a continuous parameter
    11.1.1 An example of an integral operator as the solution operator of an evolution process (optional)

1 SOME CALCULUS CONCEPTS

1.1 Infinitesimals and derivatives

The modern definition of the derivative of a function F(x) at a point x0 is

F′(x0) = lim_{x→x0} [F(x) − F(x0)] / (x − x0) = lim_{x→x0} ΔF/Δx,
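As a quick numerical illustration (added here; the function F(x) = sin x and the point x0 = 1 are arbitrary choices), the difference quotient ΔF/Δx settles toward the exact derivative cos(1) as the run Δx shrinks:

```python
import math

# Difference quotients for F(x) = sin(x) at x0 = 1 approach F'(1) = cos(1)
# as the run dx shrinks (illustrative choice of function and point).
F, x0 = math.sin, 1.0
for dx in [1e-1, 1e-3, 1e-5]:
    quotient = (F(x0 + dx) - F(x0)) / dx   # rise over run
    print(dx, quotient, abs(quotient - math.cos(x0)))
```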

where Δx = x − x0 is the run of the independent variable x and ΔF = F(x) − F(x0) is the rise of the function F. It is intuitively tempting to by-pass the concept of the limit by assuming that

• the rise and the run can be taken to be infinitesimally small, which means having smaller magnitude than any nonzero number. To distinguish the finite from the corresponding infinitesimal quantities, one then uses the notation dF for the (infinitesimal) rise and dx for the (infinitesimal) run. One then can

• express the derivative as a straight quotient of two infinitesimals,

F′(x) = dF/dx,

with no limit involved. This simplification is quite useful for many calculations, yet strictly speaking it is not correct (infinitesimal quantities really do not exist, at least in the naive form introduced here). Nevertheless, the simplification is helpful as long as one does not overstep its limitations. Thus, millions of students learn that the derivative (i.e. the slope) equals the ratio of rise over run,

slope = rise/run.

This equation defines what "slope" means; in other words, it gives the ratio "rise over run" a name. Once, however, the concept of slope is understood, it is much more helpful to give the relation as a product,

rise = slope × run,    dF = F′ dx,

because, in this way, the relation is generalized to multivariate functions. Take for example the function of two variables F(x, y). Its run is the vector (dx, dy).

• The slope in the x direction, i.e. for run (dx, 0), is the partial derivative Fx = ∂F/∂x, and the corresponding rise, according to the previous formula, is the product Fx dx.

• The slope in the y direction, i.e. for run (0, dy), is the partial derivative Fy = ∂F/∂y. Again, according to the previous formula, the corresponding rise is the product Fy dy.

• The overall rise of F is the sum of the rises in the two directions,

dF = Fx dx + Fy dy.

One can verify this in the case of a function whose graph is a plane, F(x, y) = ax + by + c, using exact (not infinitesimal) run and rise. The fact that the formula applies to all differentiable functions is because the plane tangent to the graph of a differentiable function at a point (x, y) approximates the graph near this point better than any other plane going through (x, y).
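A brief numerical check (an added illustration; F(x, y) = x²y and the base point (1, 2) are arbitrary choices) shows that Fx dx + Fy dy tracks the actual change in F ever more closely as the run shrinks:

```python
# Check dF ~ Fx dx + Fy dy for F(x, y) = x**2 * y at (x, y) = (1.0, 2.0).
# Fx = 2*x*y and Fy = x**2, computed by hand for this illustrative F.
x, y = 1.0, 2.0
Fx, Fy = 2*x*y, x**2

for h in [1e-1, 1e-2, 1e-3]:
    dx, dy = h, -0.5*h                       # an arbitrary direction of the run
    actual = (x + dx)**2 * (y + dy) - x**2 * y
    predicted = Fx*dx + Fy*dy
    print(h, actual, predicted)              # agreement improves as h shrinks
```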

• We can write the formula for the rise in a way that mirrors the formula for a single variable, i.e. rise = slope × run, only now both the slope (the gradient ∇F = (Fx, Fy)) and the run dx = (dx, dy) are vectors, and their product is the dot product:

dF = Fx dx + Fy dy = (Fx, Fy) · (dx, dy) = ∇F · dx.

How is this formula for the rise useful to us? Imagine that you need to find a relation between the variables x and y, given that the relation

dy/dx = (x² + y)/(y² − x)

holds (this is a differential equation). By cross-multiplying, one can re-express this as

(x² + y) dx + (x − y²) dy = 0.

One can then make the crucial observation that x² + y is the partial derivative with respect to x of the function F(x, y) = (1/3)(x³ − y³) + xy, and x − y² is the partial derivative with respect to y of the same function. Thus, the differential equation is exactly

Fx dx + Fy dy = 0.

According to the formula for the (infinitesimal) rise of F, this implies dF = 0, thus F is a constant. The sought relation is

(1/3)(x³ − y³) + xy = constant.
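As an added sanity check (a sketch using sympy, not part of the original notes), one can confirm that this F reproduces both coefficient functions, and that the standard exactness test My = Nx holds for M = x² + y and N = x − y²:

```python
import sympy as sp

# Verify that F(x, y) = (1/3)(x**3 - y**3) + x*y generates the exact equation
# (x**2 + y) dx + (x - y**2) dy = 0, i.e. Fx = x**2 + y and Fy = x - y**2.
x, y = sp.symbols('x y')
F = sp.Rational(1, 3)*(x**3 - y**3) + x*y

print(sp.diff(F, x))                    # x**2 + y
print(sp.diff(F, y))                    # -y**2 + x

# Exactness test: dM/dy == dN/dx for M = x**2 + y, N = x - y**2.
M, N = x**2 + y, x - y**2
print(sp.diff(M, y) == sp.diff(N, x))   # True (both equal 1)
```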

1.2 Partial fractions

Do a careful review.

1.3 Integration by parts

Do a careful review.

1.4 Taylor series and their convergence

Do a careful review.

2 CONCEPTS FROM LINEAR ALGEBRA

2.1 Scalars

For this course, the set of scalars will be either the set of real numbers R or the set of complex numbers C. When we use the word “scalar”, it must be clear (either said or implied) which of the two sets we have in mind.

2.2 Vectors and vector spaces

Important take-home message: In addition to the well known vectors of linear algebra, polynomials p(x) are vectors, functions f(x) are vectors, and so are polynomials and functions of more variables. Even matrices may be thought of as vectors. On the other hand, triangles are not vectors, and fire-engines are not vectors. So, in the end, what is a vector?

Imagine a set of elements that can be added to each other and can be multiplied by a scalar. The usual properties of addition and multiplication (associative, commutative, distributive) must hold. Also, for every element f of the set, 1f = f. Imagine, further, that the set is closed under these two operations. This means that (1) the sum of any two elements of the set is an element of the set and (2) any element of the set times a scalar is an element of the set. A set that has these properties is a vector space and its elements are vectors. If the scalars in question are real, we talk of a real vector space; if the scalars are complex, we talk of a complex vector space. Thus,

• R², R³, ..., Rⁿ are real vector spaces.

• The set of polynomials of x of degree less than a given integer is a real vector space if the coefficients are real; it is a complex vector space if the coefficients are complex.

• The set of real-valued functions f(x), where a ≤ x ≤ b, is a real vector space (a short code sketch after this list illustrates the closure operations).

• The set of all continuous, real-valued functions defined on the interval a ≤ x ≤ b for which f(a) = 0 and f(b) = 0 hold is a real vector space (why?). On the other hand, if f(a) = 0 is replaced by f(a) = 1, the set is not a vector space (why?).

• The set of power series Σ_{n=0}^∞ a_n xⁿ with finitely many nonzero complex coefficients a_n is a complex vector space.

• The set of formal (meaning possibly non-convergent) power series Σ_{n=0}^∞ a_n xⁿ, with a_n ∈ C, is a complex vector space.

• The set of pairs (x, y) in R², with y > 0, is not a vector space (why?).

• The set of vectors of Rⁿ that have magnitude less than unity is not a vector space (why?).
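The following minimal sketch (an added illustration, not from the notes) treats functions as vectors: pointwise addition and scalar multiplication return real-valued functions on the same interval, which is exactly the closure requirement.

```python
# Functions as vectors: pointwise addition and scalar multiplication stay
# inside the set of real-valued functions, which is the closure requirement.
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(c, f):
    return lambda x: c * f(x)

f = lambda x: x**2
g = lambda x: 3*x + 1

h = add(scale(2.0, f), scale(-1.0, g))   # the superposition 2f - g
print(h(1.5))                            # 2*(1.5)**2 - (3*1.5 + 1) = -1.0
```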

A linear superposition or linear combination of two vectors f1 and f2 is the vector c1 f1 + c2 f2, where c1 and c2 are arbitrary scalars. If f1 and f2 are linearly independent (one is not a scalar multiple of the other), the set of all linear superpositions of the vectors f1 and f2 is a two-parameter family of vectors. In general, c1 f1 + c2 f2 + ⋯ + cn fn is a linear superposition of the n vectors f1, f2, ..., fn. Here the index n can equal any natural number 1, 2, 3, ... (thus a single vector is also considered a linear superposition). If these vectors are linearly independent (none is a linear superposition of the others), their linear superposition constitutes an n-parameter family of vectors (explain!). If they are linearly dependent, the set of all their linear superpositions constitutes a k-parameter family of vectors, where k < n (explain!).

Two linear superpositions of vectors from a vector space are identical to each other if the coefficients of the same vector in the two superpositions are equal to each other. Otherwise, they are distinct. For example, the superpositions 2f1 + 3f2 + f5 and f2 + 5f4 are certainly distinct. The superposition in which all coefficients equal zero is called the trivial superposition.

The concept of linear superposition of vectors allows us to define the important concepts of span of a set of vectors, basis of a vector space, and dimension of a vector space:

• Consider the set A = {f1, f2, ..., fn} of vectors of some vector space, where n is some natural number. We refer to the set S of all linear superpositions of the vectors of A as the span of A, or as the span of the vectors f1, f2, ..., fn. We also say that the set of vectors A spans the set S. We say that the spanning occurs uniquely if two distinct linear superpositions of A result in two distinct vectors of S. In this case, the spanning set A cannot contain the zero vector (why?). Also, in this case (by tautology), if c1 f1 + c2 f2 + ⋯ + cn fn = c1′ f1 + c2′ f2 + ⋯ + cn′ fn, then c1 = c1′, c2 = c2′, ..., cn = cn′. The spanning is unique if and only if the vectors of the set A are linearly independent (why?). The span of A is itself a vector space (a subspace of the original vector space) (why?).

• A set of vectors that (1) spans a vector space and (2) does so uniquely is called a basis of the vector space. By the previous paragraph, a basis is a linearly independent set of vectors. It is a theorem of linear algebra that if a vector space has a basis of finitely many vectors, then all its bases have the same number of vectors. This number is called the dimension of the vector space, and such vector spaces are called finite-dimensional. We will encounter them as solution sets of linear homogeneous ODE (ordinary differential equations). We will work with infinite-dimensional vector spaces in our study of linear PDE (partial differential equations).

• We will encounter the following important challenge: Given a function f(x) and a basis with an infinite number of basis functions {ψn(x)}_{n=0}^∞, calculate the scalar coefficients cn in the expansion of the function as the linear superposition of the basis functions,

f(x) = c0 ψ0(x) + c1 ψ1(x) + ⋯ + cn ψn(x) + ⋯ .

We will utilize the dot product of two functions, defined later in the course in similarity to the dot product introduced in linear algebra, and we will proceed as follows. We treat the case in which the basis is an orthogonal basis, i.e. every basis function is orthogonal to every other basis function, or equivalently, the inner product of any two distinct basis functions is equal to zero. Our goal is to find a formula that gives the value of ck for an arbitrary index k. In order to achieve this, we first dot both sides with the basis function ψk,

f · ψk = (c0 ψ0 + c1 ψ1 + ⋯ + cn ψn + ⋯) · ψk.

Applying the distributive law we obtain

f · ψk = c0 (ψ0 · ψk) + c1 (ψ1 · ψk) + ⋯ + cn (ψn · ψk) + ⋯ .

Since the basis is orthogonal, all the dot products except ψk · ψk equal zero. Thus,

f · ψk = ck (ψk · ψk)

and the desired formula is simply

ck = (f · ψk) / (ψk · ψk).

Notice that the role of the index n in the series for f is similar to the role of the integration variable in a definite integral. It is summed over and it does not appear in the answer. We call it a dummy index in similarity to the integration variable in a definite integral, which is referred to as a dummy variable.
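To make the formula concrete, here is a small numerical sketch (an added illustration; the sine basis ψn(x) = sin(nπx) on [0, 1] and the Riemann-sum dot product are stand-ins for the inner product defined later in the course):

```python
import numpy as np

# Coefficients c_k = (f . psi_k) / (psi_k . psi_k) for the orthogonal basis
# psi_n(x) = sin(n*pi*x) on [0, 1], with a discrete dot product standing in
# for the integral inner product.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def dot(f_vals, g_vals):
    return np.sum(f_vals * g_vals) * dx      # Riemann-sum inner product

f = x * (1.0 - x)                            # the function to expand
c = []
for k in range(1, 6):
    psi_k = np.sin(k * np.pi * x)
    c.append(dot(f, psi_k) / dot(psi_k, psi_k))

# The partial sum of the expansion should approximate f:
approx = sum(ck * np.sin((k + 1) * np.pi * x) for k, ck in enumerate(c))
print(np.max(np.abs(approx - f)))            # small reconstruction error
```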

2.3 Operators

An operator is an input-output box in which each input produces exactly one output. The set of all inputs is the domain of the operator. The set of outputs (there is some input in the domain that results in each output) is its range. Two distinct inputs may result in the same output; an operator is thus not necessarily invertible.

An operator L is linear if

1. Its inputs and its outputs are vectors, not necessarily from the same vector space.

2. (a) Operating on the sum of two vectors, i.e. L(f1 + f2), gives the same result as operating on each vector separately and summing the two outputs, i.e. Lf1 + Lf2. Thus, the condition

L(f1 + f2) = Lf1 + Lf2.

(b) Operating on a vector multiplied by a scalar, i.e. L(cf), gives the same result as operating on the vector first and then multiplying by the scalar, i.e. cLf. Thus, the condition

L(cf) = cLf.

The two conditions together stipulate that applying the operator satisfies the distributive law,

L(c1 f1 + c2 f2) = c1 Lf1 + c2 Lf2.

Examples of linear operators:

1. A matrix M that operates on column vectors through matrix multiplication. Input is a vector v, output is Mv. The operator is linear since, by matrix multiplication,

M(c1 v1 + c2 v2) = c1 Mv1 + c2 Mv2.

2. The operator d/dx of taking the derivative of a function of the variable x:

d/dx (f + g) = df/dx + dg/dx,    d/dx (cf) = c df/dx.

3. The second order differential operator L = d²/dx² + p(x) d/dx + q(x), where

Lf = f″ + p(x) f′ + q(x) f.
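A symbolic spot check of the distributive law for the third example (an added sketch; the coefficient functions p, q and the test functions are arbitrary choices):

```python
import sympy as sp

# Check linearity of L = d^2/dx^2 + p(x) d/dx + q(x) on symbolic functions.
x = sp.symbols('x')
p = sp.sin(x)        # illustrative coefficient functions
q = x**2

def L(f):
    return sp.diff(f, x, 2) + p * sp.diff(f, x) + q * f

f1, f2 = sp.exp(x), sp.cos(3*x)
c1, c2 = sp.symbols('c1 c2')

lhs = L(c1*f1 + c2*f2)
rhs = c1*L(f1) + c2*L(f2)
print(sp.simplify(lhs - rhs))   # 0: L satisfies the distributive law
```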

2.4 Linear expressions and linear equations

• An expression is linear in the variable y if the expression can be brought to the form Ly


Similar Free PDFs