
# Second partial derivative test

Learn how to test whether a function with two inputs has a local maximum or minimum.

## Background

Also, if you are a little rusty on the second derivative test from single-variable calculus, you might want to quickly review it, since it provides a good comparison for the second *partial* derivative test.

## The statement of the second partial derivative test

If you are looking for the local maxima/minima of a two-variable function $f(x, y)$, the first step is to find input points $(x_0, y_0)$ where the gradient is the $\mathbf{0}$ vector.

These are points where the tangent plane to the graph of $f$ is flat.

The **second partial derivative test** tells us how to verify whether such a stable point is a local maximum, local minimum, or a saddle point. Specifically, you start by computing this quantity:

$$H = f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - \left(f_{xy}(x_0, y_0)\right)^2$$

Then the second partial derivative test goes as follows:

- If $H < 0$, then $(x_0, y_0)$ is a saddle point.
- If $H > 0$, then $(x_0, y_0)$ is either a maximum or a minimum point, and you ask one more question:
  - If $f_{xx}(x_0, y_0) < 0$, then $(x_0, y_0)$ is a local maximum point.
  - If $f_{xx}(x_0, y_0) > 0$, then $(x_0, y_0)$ is a local minimum point.

  (You could also use $f_{yy}(x_0, y_0)$ instead of $f_{xx}(x_0, y_0)$; it actually doesn't matter.)
- If $H = 0$, we do not have enough information to tell.
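As a quick sanity check, the decision procedure above can be written as a small function. This is a minimal sketch (the function and parameter names are my own, not from the article); it assumes the second partial derivatives have already been evaluated at the critical point:

```python
# Decision procedure of the second partial derivative test, given the
# values of the second partials at a critical point (names are mine).
def second_partial_test(fxx, fyy, fxy):
    H = fxx * fyy - fxy ** 2
    if H < 0:
        return "saddle point"
    if H > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    return "inconclusive"

print(second_partial_test(2, -2, 0))  # saddle point
```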

## Loose intuition

Focus first on this term:

$$f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0)$$

You can think of it as cleverly encoding whether or not the concavity of $f$ 's graph is the same in both the $x$ and $y$ directions.

For example, look at the function

$$f(x, y) = x^2 - y^2$$

This function has a saddle point at $(x, y) = (0, 0)$. The second partial derivative with respect to $x$ is a positive constant:

$$f_{xx}(x, y) = 2$$

In particular, $f_{xx}(0, 0) = 2 > 0$, and the fact that this is positive means $f(x, y)$ looks like it has upward concavity as we travel in the $x$-direction. On the other hand, the second partial derivative with respect to $y$ is a negative constant:

$$f_{yy}(x, y) = -2$$

This indicates downward concavity as we travel in the $y$-direction. This mismatch means we must have a saddle point, and it is encoded in the product of the two second partial derivatives:

$$f_{xx}(0, 0)\,f_{yy}(0, 0) = (2)(-2) = -4 < 0$$

Since $f_{xy}(0, 0)^2$ can never be negative, subtracting it only makes the full expression more negative.
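The concavity mismatch for $f(x, y) = x^2 - y^2$ can also be verified numerically. Below is a minimal sketch using central finite differences to approximate the second partials at the origin (the helper names `fxx`, `fyy`, `fxy` are my own):

```python
# Approximate the second partials of f(x, y) = x**2 - y**2 at (0, 0)
# with central finite differences (helper names are mine).
def fxx(f, x, y, h=1e-4):
    return (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / (h * h)

def fyy(f, x, y, h=1e-4):
    return (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / (h * h)

def fxy(f, x, y, h=1e-4):
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

f = lambda x, y: x ** 2 - y ** 2
H = fxx(f, 0, 0) * fyy(f, 0, 0) - fxy(f, 0, 0) ** 2
print(H)  # roughly (2)(-2) - 0**2 = -4, so a saddle point
```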

On the other hand, when the signs of $f_{xx}(x_0, y_0)$ and $f_{yy}(x_0, y_0)$ are either both positive or both negative, the $x$ and $y$ directions agree about what the concavity of $f$ should be. In either of these cases, the term $f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0)$ will be positive.

**But this is not enough!**

## The $f_{xy}^2$ term

Consider the function

$$f(x, y) = x^2 + y^2 + pxy$$

where $p$ is some constant.

**Concept check**: With this definition of $f$, compute its second partial derivatives at the origin: $f_{xx}(0, 0) = 2$, $f_{yy}(0, 0) = 2$, and $f_{xy}(0, 0) = p$.

Because the second derivatives $f_{xx}(0, 0)$ and $f_{yy}(0, 0)$ are both positive, the graph will appear concave up as we travel in either the pure $x$ direction or the pure $y$ direction (no matter what $p$ is).

However, something interesting happens as we let the constant $p$ vary from $1$ to $3$, then back to $1$: the graph develops a saddle point at the origin for larger values of $p$, then loses it again.

What's going on here? How can the graph have a saddle point even though it is concave up in both the $x$ and $y$ directions? The short answer is that other directions matter too, and in this case, they are captured by the term ${p}xy$ .

For example, isolate this $xy$ term and consider the graph of $g(x, y) = xy$. It has a saddle point at $(0, 0)$. This is not because the $x$ and $y$ directions disagree about concavity, but instead because the concavity appears positive along the diagonal direction $\left[\begin{array}{c}1\\ 1\end{array}\right]$ and negative in the direction $\left[\begin{array}{c}-1\\ 1\end{array}\right]$.
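This diagonal behavior can be checked by restricting $g$ to a line through the origin: along direction $(a, b)$, the restriction is $g(ta, tb) = ab\,t^2$, whose second derivative at $t = 0$ is $2ab$. Here is a minimal numerical sketch (the helper name is my own):

```python
# Restrict g(x, y) = x*y to the line t -> (t*a, t*b) and estimate the
# second derivative of the restriction at t = 0 (helper name is mine).
def concavity_along(a, b, t=1e-4):
    g = lambda x, y: x * y
    return (g(t * a, t * b) - 2 * g(0, 0) + g(-t * a, -t * b)) / (t * t)

print(concavity_along(1, 1))   # about  2: concave up along the diagonal
print(concavity_along(-1, 1))  # about -2: concave down along the other diagonal
```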

Let's see what the second partial derivative test tells us about the function $f(x, y) = x^2 + y^2 + pxy$. Using the values for the second derivatives computed above, here's what we get:

$$H = f_{xx}(0, 0)\,f_{yy}(0, 0) - f_{xy}(0, 0)^2 = (2)(2) - p^2 = 4 - p^2$$

When $p>2$ , this is negative, so $f$ has a saddle point. When $p<2$ , it is positive, so $f$ has a local minimum.
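Plugging a few values of $p$ into $H = 4 - p^2$ confirms the transition at $p = 2$. A quick sketch (the function name is my own):

```python
# For f(x, y) = x**2 + y**2 + p*x*y the second partials at the origin are
# fxx = 2, fyy = 2, fxy = p, so H = 2*2 - p**2 (function name is mine).
def H_at_origin(p):
    return 2 * 2 - p ** 2

for p in (1, 2, 3):
    print(p, H_at_origin(p))  # p=1: H=3 (min), p=2: H=0 (inconclusive), p=3: H=-5 (saddle)
```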

**You can think of the quantity** $f_{xy}(x_0, y_0)$ as measuring how much the function $f$ looks like the graph of $g(x, y) = xy$ near the point $(x_0, y_0)$.

Considering how many directions have to agree with each other, it is actually quite surprising that we only need to consider three values: $f_{xx}(0, 0)$, $f_{yy}(0, 0)$, and $f_{xy}(0, 0)$.

The next article gives more detailed reasoning behind the second partial derivative test.

## Summary

- Once you find a point where the gradient of a multivariable function is the zero vector, meaning the tangent plane of the graph is flat at this point, the second partial derivative test is a way to tell if that point is a local maximum, local minimum, or a saddle point.
- The key term of the second partial derivative test is this:

  $$H = f_{xx}(x_0, y_0)\,f_{yy}(x_0, y_0) - f_{xy}(x_0, y_0)^2$$

- If $H > 0$, the function definitely has a local maximum/minimum at the point $(x_0, y_0)$.
  - If $f_{xx}(x_0, y_0) > 0$, it is a minimum.
  - If $f_{xx}(x_0, y_0) < 0$, it is a maximum.
- If $H < 0$, the function definitely has a saddle point at $(x_0, y_0)$.
- If $H = 0$, there is not enough information to tell.
