6.7: Inner Product Spaces
All of the geometric properties we discussed for IRn (length, distance, orthogonality) came to us from the dot product.
Is there something like the dot product, but for other types of vector spaces? Yes: it's called an inner product.
Definition: Let V be a vector space, let u, v, w be vectors in V, and let c be any scalar. Then an inner product is a function from V × V to IR, denoted by ⟨u, v⟩, with the following properties:
1. ⟨u, v⟩ = ⟨v, u⟩
2. ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩
3. ⟨cu, v⟩ = c⟨u, v⟩
4. ⟨u, u⟩ ≥ 0, and ⟨u, u⟩ = 0 iff u = 0.
Example 1
Let u, v be vectors in IR2 , and define
⟨u, v⟩ = 4u1 v1 + 5u2 v2
(We might think of this as a “weighted dot product”). Show that this is an inner product.
Example 2
Let t0 , t1 , . . . , tn be n + 1 distinct numbers. For vectors p, q ∈ Pn , define the inner product by first evaluating p, q at the n + 1 points (so that we have two vectors p⃗, ⃗q in IRn+1 ). Then define:
⟨p, q⟩ = p⃗ · ⃗q
Then items 1-3 are straightforward. What about (4)? Well, first note that
⟨p, p⟩ = (p(t0 ))2 + (p(t1 ))2 + . . . + (p(tn ))2 ≥ 0
Note that if ⟨p, p⟩ = 0, then p(ti ) = 0 for i = 0, 1, . . . , n. A polynomial of degree ≤ n that is zero at n + 1 points must be the zero function.
Example 3: C[a, b]
Let f, g ∈ C[a, b]. Show that the following function defines an inner product:
⟨f, g⟩ = ∫_a^b f(t)g(t) dt
Items 1-3 again are easy to show. What about (4)? In particular, think about this: if the integral of f²(t) is zero, does that mean that f(t) = 0? (Yes, as long as f is continuous.)
Geometry of an Inner Product Space
Once a vector space is given an inner product, then we can define length, distance and angle just as before:
∥f ∥ = √⟨f, f ⟩
dist(f, g) = ∥f − g∥
And finally, we define θ, the angle between f and g as the angle satisfying:
cos(θ) = ⟨f, g⟩ / (∥f ∥∥g∥)
We then say that f, g are orthogonal (with respect to the given inner product) if ⟨f, g⟩ = 0.
Two Inequalities
There are also two important inequalities that inner products must satisfy:
The Cauchy-Schwarz inequality:
|⟨u, v⟩| ≤ ∥u∥ ∥v∥
(You can see how this might stem directly from our definition of θ)
The triangle inequality:
∥f + g∥ ≤ ∥f ∥ + ∥g∥
For a proof, recall that

∥f + g∥² = ⟨f + g, f + g⟩ = ⟨f, f ⟩ + 2⟨f, g⟩ + ⟨g, g⟩

Then use the Cauchy-Schwarz inequality on the middle term and factor the result as (∥f ∥ + ∥g∥)².
Applications
Now we can perform projections. For example, given the inner product on C[0, 1]

⟨f, g⟩ = ∫_0^1 f(t)g(t) dt

how would we project f(t) = t onto g(t) = 1 + t²?

Proj_g (f) = (⟨f, g⟩ / ⟨g, g⟩) g(t)

so we would need to compute:

⟨f, g⟩ = ∫_0^1 t(1 + t²) dt = 3/4

⟨g, g⟩ = ∫_0^1 (1 + t²)² dt = 28/15

so that, simplifying, we get

Proj_g (f) = (3/4)/(28/15) (1 + t²) = (45/112)(1 + t²)
A Basis for Functions
We have defined an analytic function to be any function that is equal to its Taylor series (based at x0 ).
This means that the set of functions
1, (x − x0 ), (x − x0 )2 , (x − x0 )3 , · · ·
will form a basis for the space of such functions. Similarly, the set of monomials forms a basis for C[−1, 1],
and if we further set the inner product as:
⟨f, g⟩ = ∫_{−1}^{1} f(t)g(t) dt
then we can construct a set of orthogonal polynomials. So let’s do that using Gram-Schmidt (without
normalization):
P0 (x) = 1

Then

P1 (x) = x − Proj_1 (x) = x − (⟨x, 1⟩/⟨1, 1⟩) 1 = x − 0 = x

Now,

P2 (x) = x² − Proj_x (x²) − Proj_1 (x²) = x² − (⟨x², x⟩/⟨x, x⟩) x − (⟨x², 1⟩/⟨1, 1⟩) 1

where

⟨1, 1⟩ = 2    ⟨x, x²⟩ = 0    ⟨x², 1⟩ = 2/3

so that

P2 (x) = x² − 1/3
and so on...
The set of these polynomials is called the Legendre polynomials.
As a side note, the Laguerre polynomials are a set of polynomials that are orthogonal on [0, ∞) with the
inner product
⟨f, g⟩ = ∫_0^∞ f(t)g(t)e^{−t} dt