# Reasoning behind second partial derivative test

For those of you who want to see why the second partial derivative test works, I cover a sketch of a proof here.

## Background

In the last article, I gave the statement of the second partial derivative test, but I only gave a loose intuition for why it's true. This article is for those who want to dig a bit more into the math, but it is not strictly necessary if you just want to apply the second partial derivative test.

## What we're building to

- To test whether a stable point of a multivariable function is a local minimum/maximum, take a look at the quadratic approximation of the function at that point. It is easier to analyze whether this quadratic approximation has a maximum/minimum.
- For two-variable functions, this boils down to studying expressions that look like this: $a x^2 + 2bxy + c y^2$. These are known as **quadratic forms**. The rule for when a quadratic form is always positive or always negative translates directly to the second partial derivative test.

## Single variable case via quadratic approximation

First, I'd like to walk through the formal reasoning behind why the *single-variable* second derivative test works. By formal, I mean turning the idea of concavity into more of an airtight argument.

In single-variable calculus, when $f'(a) = 0$ for some function $f$ and some input $a$, here's what the second derivative test looks like:

- $f$ has a local maximum at $a$ if $f''(a) < 0$
- $f$ has a local minimum at $a$ if $f''(a) > 0$
- If $f''(a) = 0$, the second derivative alone cannot determine whether $f$ has a maximum, minimum or inflection point at $a$.

To think about why this test works, start by approximating the function with a Taylor polynomial out to the quadratic term, also known as a quadratic approximation:

$$f(x) \approx f(a) + f'(a)(x - a) + \frac{1}{2} f''(a)(x - a)^2$$

Since $f'(a) = 0$, this quadratic approximation simplifies like this:

$$f(x) \approx f(a) + \frac{1}{2} f''(a)(x - a)^2$$

Notice, $(x-a{)}^{2}\ge 0$ for all possible $x$ since squares are always positive or zero. That simple fact tells us everything we need to know! Why?

It means that when $f''(a) > 0$, we can read our approximation like this:

$$f(x) \approx f(a) + \underbrace{\tfrac{1}{2} f''(a)(x - a)^2}_{\text{always} \,\ge\, 0}$$

Therefore $x = a$ is a **local minimum** of our approximation. In fact, it is a global minimum, but we only care about the fact that it is a local minimum. When the quadratic approximation of a function has a local minimum at the point of approximation, the function itself must also have a local minimum there. I'll say more on this in the last section, but for now the intuition should be clear, since the function and its approximation "hug" one another around the point of approximation.

Similarly, if $f''(a) < 0$, we can read the approximation as

$$f(x) \approx f(a) + \underbrace{\tfrac{1}{2} f''(a)(x - a)^2}_{\text{always} \,\le\, 0}$$

In this case, the approximation has a **local maximum** at $x = a$, indicating that the function itself also has a local maximum there.

When $f''(a) = 0$, our quadratic approximation always equals the constant $f(a)$, meaning our function is in some sense too flat to be analyzed by the second derivative alone.

**What to take away from this**:

When $f'(a) = 0$, studying whether $f$ has a local maximum or minimum at $a$ comes down to whether the quadratic term of the Taylor approximation $\frac{1}{2} f''(a)(x - a)^2$ is always positive or always negative.
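As a concrete (and hedged) illustration of this takeaway, here is a small Python sketch of the single-variable test using finite-difference second derivatives; the function names, sample functions, and tolerance are my own choices, not from the article:

```python
def second_derivative(f, a, h=1e-5):
    # Central finite-difference estimate of f''(a).
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2

def classify_stable_point(f, a, tol=1e-6):
    # Single-variable second derivative test at a stable point a (where f'(a) = 0).
    s = second_derivative(f, a)
    if s > tol:
        return "local minimum"
    if s < -tol:
        return "local maximum"
    return "test inconclusive"

print(classify_stable_point(lambda x: x**2, 0.0))   # local minimum
print(classify_stable_point(lambda x: -x**2, 0.0))  # local maximum
print(classify_stable_point(lambda x: x**3, 0.0))   # test inconclusive
```

The inconclusive case matches the example $f(x) = x^3$, where $f''(0) = 0$ and the graph has an inflection point rather than an extremum.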

## Two variable case, visual warmup

Now suppose you have a function $f(x, y)$ with two inputs and one output, and you find a stable point. That is, a point $(x_0, y_0)$ where both its partial derivatives are $0$,

$$f_x(x_0, y_0) = 0 \qquad f_y(x_0, y_0) = 0$$

which is more succinctly written as

$$\nabla f(x_0, y_0) = \mathbf{0}$$

In order to determine whether this is a local maximum, local minimum, or neither, we look to its quadratic approximation. Let's start with a visual preview of what we want to do:

- $f$ will have a local minimum at a stable point $(x_0, y_0)$ if the quadratic approximation at that point is a concave-up paraboloid.
- $f$ will have a local maximum there if the quadratic approximation is a concave-down paraboloid.
- If the quadratic approximation is saddle-shaped, $f$ has neither a maximum nor a minimum, but a saddle point.
- If the quadratic approximation is flat in one or all directions, we do not have enough information to make conclusions about $f$.

## Analyzing the quadratic approximation

The formula for the quadratic approximation of $f$, in vector form, looks like this:

$$Q_f(\mathbf{x}) = f(\mathbf{x}_0) + \nabla f(\mathbf{x}_0) \cdot (\mathbf{x} - \mathbf{x}_0) + \frac{1}{2} (\mathbf{x} - \mathbf{x}_0)^{\top} \mathbf{H}_f(\mathbf{x}_0) (\mathbf{x} - \mathbf{x}_0)$$

Since we care about points where the gradient is zero, we can get rid of that gradient term:

$$Q_f(\mathbf{x}) = f(\mathbf{x}_0) + \frac{1}{2} (\mathbf{x} - \mathbf{x}_0)^{\top} \mathbf{H}_f(\mathbf{x}_0) (\mathbf{x} - \mathbf{x}_0)$$

To see this spelled out for the two-variable case, let's expand out the Hessian term:

$$Q_f(x, y) = f(x_0, y_0) + \frac{1}{2} f_{xx}(x_0, y_0)(x - x_0)^2 + f_{xy}(x_0, y_0)(x - x_0)(y - y_0) + \frac{1}{2} f_{yy}(x_0, y_0)(y - y_0)^2$$

(Note, if this approximation or any of the notation feels shaky or unfamiliar, consider reviewing the article on quadratic approximations).

As I showed with the single-variable case, the strategy is to study whether the quadratic term of this approximation is always positive or always negative.

Right now, this term is a lot to write down, but we can distill its essence by studying expressions of the following form:

$$a x^2 + 2bxy + c y^2$$

Such expressions are often fancifully called "**quadratic forms**".

- The word "quadratic" indicates that the terms are of order two, meaning they involve the product of two variables.
- The word "form" always threw me off here, and it makes the idea of a quadratic form sound more complicated than it really is. Mathematicians say "quadratic form" instead of "quadratic expression" to emphasize that *all* terms are of order $2$, and there are no linear or constant terms mucking up the expression. A phrase like "purely quadratic expression" would have been much too reasonable and understandable to adopt.

To make the notation for quadratic forms easier to generalize into higher dimensions, they are often written with respect to a symmetric matrix $M$:

$$\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = a x^2 + 2bxy + c y^2$$

**Here is the crucial question**:

- How can we tell whether the expression $a x^2 + 2bxy + c y^2$ is always positive, always negative, or neither, just by analyzing the constants $a$, $b$, and $c$?

## Analyzing quadratic forms

If we plug in a constant value $y_0$ for $y$, we get some single-variable quadratic function:

$$a x^2 + 2b y_0 x + c y_0^2$$

The graph of this function is a parabola, and it will only cross the $x$ -axis if this quadratic function has real roots.

Otherwise, it either stays entirely positive or entirely negative, depending on the sign of ${a}$ .

We can apply the quadratic formula to this expression to see whether its roots are real or complex.

- The leading term is $a$.
- The linear term is $2b y_0$.
- The constant term is $c y_0^2$.

Applying the quadratic formula looks like this:

$$x = \frac{-2b y_0 \pm \sqrt{(2b y_0)^2 - 4 a c y_0^2}}{2a} = \frac{y_0 \left( -b \pm \sqrt{b^2 - ac} \right)}{a}$$

If $y_0 = 0$, the quadratic has a double root at $x = 0$, meaning the parabola barely kisses the $x$-axis at that point. Otherwise, whether or not these roots are real depends *only* on the sign of the expression $b^2 - ac$.

- If $b^2 - ac \ge 0$, there are real roots, so the graph of $a x^2 + 2b x y_0 + c y_0^2$ crosses the $x$-axis.
- Otherwise, if $b^2 - ac < 0$, there are no real roots, so the graph of $a x^2 + 2b x y_0 + c y_0^2$ either stays entirely positive or entirely negative.
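These two cases can be checked numerically. Below is a hedged Python sketch (the function name `roots_in_x` is mine) that applies the quadratic formula with complex arithmetic so both cases are handled uniformly:

```python
import cmath

def roots_in_x(a, b, c, y0):
    # Roots of a*x^2 + (2*b*y0)*x + (c*y0**2) = 0 via the quadratic formula.
    # The discriminant equals 4*y0**2 * (b**2 - a*c), so for y0 != 0 its sign
    # is the sign of b**2 - a*c.
    disc = (2 * b * y0) ** 2 - 4 * a * (c * y0**2)
    sqrt_disc = cmath.sqrt(disc)
    return ((-2 * b * y0 + sqrt_disc) / (2 * a),
            (-2 * b * y0 - sqrt_disc) / (2 * a))

# b^2 - ac = 9 - 5 = 4 > 0: real roots, so the parabola crosses the x-axis.
print(roots_in_x(1, 3, 5, 1.0))
# b^2 - ac = 4 - 6 = -2 < 0: complex roots, so the parabola never crosses it.
print(roots_in_x(2, 2, 3, 1.0))
```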

For example, consider the case

$$a = 1 \qquad b = 3 \qquad c = 5$$

In this case, ${{b}}^{2}-{a}{c}={{3}}^{2}-({1})({5})=4>0$ , so the graph of $f(x)={x}^{2}+6x{y}_{0}+5{y}_{0}^{2}$ always crosses the $x$ -axis. Here is a video showing how that graph moves around as we let the value of ${y}_{0}$ slowly change.

This corresponds with the fact that the graph of $f(x,y)={x}^{2}+6xy+5{y}^{2}$ can be both positive and negative.
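A quick sanity check of this in Python (the sample points are my own choices, not from the article):

```python
def q(x, y):
    # The quadratic form from the example above: a=1, b=3, c=5.
    return x**2 + 6 * x * y + 5 * y**2

print(q(1, 0))   # 1, positive
print(q(2, -1))  # -3, negative
```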

In contrast, consider the case

$$a = 2 \qquad b = 2 \qquad c = 3$$

Now, ${{b}}^{2}-{a}{c}={{2}}^{2}-({2})({3})=-2<0$ . This means the graph of $f(x)=2{x}^{2}+4x{y}_{0}+3{y}_{0}^{2}$ never crosses the $x$ -axis, although it kisses it if the constant ${y}_{0}$ is zero. Here is a video showing how that graph changes as we let the constant ${y}_{0}$ vary:

This corresponds with the fact that the multivariable function $f(x,y)=2{x}^{2}+4xy+3{y}^{2}$ is always positive.
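Again, a quick numerical check, this time sampling the form over a grid of points around the origin (the grid size is my own choice):

```python
def q(x, y):
    # The quadratic form from the example above: a=2, b=2, c=3.
    return 2 * x**2 + 4 * x * y + 3 * y**2

# Sample a grid around the origin; every nonzero point should give q > 0.
samples = [(i / 10, j / 10) for i in range(-20, 21) for j in range(-20, 21)]
assert all(q(x, y) > 0 for (x, y) in samples if (x, y) != (0.0, 0.0))
print("q(x, y) > 0 at every sampled nonzero point")
```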

## Rule for the sign of quadratic forms

As if to confuse students who are familiar with the quadratic formula, rules regarding quadratic forms are often phrased with respect to $ac - b^2$ instead of $b^2 - ac$. Since one is the negative of the other, this requires switching when you say $\ge 0$ and when you say $\le 0$. The reason mathematicians prefer $ac - b^2$ is that it is the determinant of the matrix describing the quadratic form:

$$\det \begin{bmatrix} a & b \\ b & c \end{bmatrix} = ac - b^2$$

As a reminder, this is how the quadratic form looks using the matrix:

$$\begin{bmatrix} x & y \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = a x^2 + 2bxy + c y^2$$

Tying this convention together with what we found in the previous section, we write **the rule for the sign of a quadratic form** as follows:

- If $ac - b^2 < 0$, the quadratic form can attain both positive and negative values, and it's possible for it to equal $0$ at values other than $(x, y) = (0, 0)$.
- If $ac - b^2 > 0$, the form is either always positive or always negative depending on the sign of $a$, but in either case it only equals $0$ at $(x, y) = (0, 0)$.
  - If $a > 0$, the form is always positive, so $(0, 0)$ is a global minimum point of the form.
  - If $a < 0$, the form is always negative, so $(0, 0)$ is a global maximum point of the form.
- If $ac - b^2 = 0$, the form will again either be always positive or always negative, but now it's possible for it to equal $0$ at values other than $(x, y) = (0, 0)$.
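This rule translates into a small Python function (a sketch; the return-string labels are my own):

```python
def classify_quadratic_form(a, b, c):
    # Sign behavior of a*x^2 + 2*b*x*y + c*y^2, read off from
    # the determinant a*c - b^2 of the matrix [[a, b], [b, c]].
    det = a * c - b * b
    if det < 0:
        return "attains both positive and negative values"
    if det > 0:
        return "always positive" if a > 0 else "always negative"
    return "one sign, but can be 0 away from the origin"

print(classify_quadratic_form(1, 3, 5))  # attains both positive and negative values
print(classify_quadratic_form(2, 2, 3))  # always positive
```

The two calls reproduce the examples from the previous section: $(a, b, c) = (1, 3, 5)$ gives $ac - b^2 = -4 < 0$, while $(2, 2, 3)$ gives $ac - b^2 = 2 > 0$ with $a > 0$.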

#### Some terminology:

When $a x^2 + 2bxy + c y^2 > 0$ for all $(x, y)$ other than $(x, y) = (0, 0)$, the quadratic form and the matrix associated with it are both called **positive definite**.

When $a x^2 + 2bxy + c y^2 < 0$ for all $(x, y)$ other than $(x, y) = (0, 0)$, they are both called **negative definite**.

If you replace the $>$ and $<$ with $\ge$ and $\le$, the corresponding properties are **positive semi-definite** and **negative semi-definite**.

## Applying this to $Q_f$

Okay, zooming back out to where we started, let's write down our quadratic approximation again:

$$Q_f(x, y) = f(x_0, y_0) + \frac{1}{2} f_{xx}(x_0, y_0)(x - x_0)^2 + f_{xy}(x_0, y_0)(x - x_0)(y - y_0) + \frac{1}{2} f_{yy}(x_0, y_0)(y - y_0)^2$$

The quadratic portion of ${Q}_{f}$ is written with respect to $(x-{x}_{0})$ and $(y-{y}_{0})$ instead of simply $x$ and $y$ , so everywhere where the rule for the sign of quadratic forms references the point $(0,0)$ , we apply it instead to the point $({x}_{0},{y}_{0})$ .

As with the single-variable case, when the quadratic approximation ${Q}_{f}$ has a local maximum (or minimum) at $({x}_{0},{y}_{0})$ , it means $f$ has a local maximum (or minimum) at that point. This means

**we can translate the rule for the sign of a quadratic form directly to get the second derivative test**: Suppose $\nabla f(x_0, y_0) = \mathbf{0}$. Then:

- If $f_{xx}(x_0, y_0)\, f_{yy}(x_0, y_0) - \left( f_{xy}(x_0, y_0) \right)^2 < 0$, $f$ has neither a minimum nor a maximum at $(x_0, y_0)$, but instead has a saddle point.
- If $f_{xx}(x_0, y_0)\, f_{yy}(x_0, y_0) - \left( f_{xy}(x_0, y_0) \right)^2 > 0$, $f$ definitely has either a maximum or minimum at $(x_0, y_0)$, and we must look at the sign of $f_{xx}(x_0, y_0)$ to figure out which one it is.
  - If $f_{xx}(x_0, y_0) > 0$, $f$ has a local minimum.
  - If $f_{xx}(x_0, y_0) < 0$, $f$ has a local maximum.
- If $f_{xx}(x_0, y_0)\, f_{yy}(x_0, y_0) - \left( f_{xy}(x_0, y_0) \right)^2 = 0$, the second derivatives alone cannot tell us whether $f$ has a local minimum or maximum.
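Putting it all together, here is a hedged Python sketch of the full test, using finite-difference estimates of the second partials; the example functions, names, and tolerance are my own choices:

```python
def second_partials(f, x0, y0, h=1e-4):
    # Central finite-difference estimates of f_xx, f_yy, and f_xy at (x0, y0).
    fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
    fyy = (f(x0, y0 + h) - 2 * f(x0, y0) + f(x0, y0 - h)) / h**2
    fxy = (f(x0 + h, y0 + h) - f(x0 + h, y0 - h)
           - f(x0 - h, y0 + h) + f(x0 - h, y0 - h)) / (4 * h**2)
    return fxx, fyy, fxy

def second_partial_test(f, x0, y0, tol=1e-6):
    # Second partial derivative test at a stable point (x0, y0).
    fxx, fyy, fxy = second_partials(f, x0, y0)
    disc = fxx * fyy - fxy**2
    if disc < -tol:
        return "saddle point"
    if disc > tol:
        return "local minimum" if fxx > 0 else "local maximum"
    return "test inconclusive"

print(second_partial_test(lambda x, y: x**2 + y**2, 0.0, 0.0))  # local minimum
print(second_partial_test(lambda x, y: x**2 - y**2, 0.0, 0.0))  # saddle point
```

Both sample functions have a stable point at the origin: the bowl $x^2 + y^2$ triggers the positive-discriminant, $f_{xx} > 0$ branch, while the saddle $x^2 - y^2$ triggers the negative-discriminant branch.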

## Our current tools are lacking

Everything presented here *almost* constitutes a full proof, except for one final step.

Intuitively, it might make sense that when a quadratic approximation bends and curves in a certain way, the function should bend and curve in that same way near the point of approximation. But how do we formalize this beyond intuition?

Unfortunately, we will not do that here. Making arguments about derivatives fully rigorous requires using real analysis, the theoretical backbone of calculus.

Furthermore, you might be wondering how this generalizes to functions with more than two inputs. There is a notion of quadratic forms with multiple variables, but phrasing the rule for when such forms are always positive or always negative uses various ideas from linear algebra.

## Summary

- To test whether a stable point of a multivariable function is a local minimum/maximum, take a look at the quadratic approximation of the function at that point. It is easier to analyze whether this quadratic approximation has a maximum/minimum.
- For two-variable functions, this boils down to studying expressions that look like this: $a x^2 + 2bxy + c y^2$. These are known as **quadratic forms**. The rule for when a quadratic form is always positive or always negative translates directly to the second partial derivative test.
