$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\EE}{\mathbb{E}} \newcommand{\HH}{\mathbb{H}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\length}{\operatorname{length}} \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\erf}{\operatorname{erf}} \newcommand{\ihat}{\hat{\imath}} \newcommand{\jhat}{\hat{\jmath}} \newcommand{\khat}{\hat{k}} \newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}} \newcommand{\smat}[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)} $$

19  Divergence and Curl

Vector fields are complicated objects, as they can vary in both magnitude and direction from point to point. We now wish to use calculus to get a better understanding of them: specifically, to use derivatives to understand how a vector field can affect objects it pushes on.

When we look at a vector field we can qualitatively distinguish different types of behavior going on: for example, in the field below there are regions where it looks like the field is swirling, regions where the field seems to flow at a constant speed and direction, and yet other regions where it looks to be expanding, or flowing away from a point.

It’s helpful sometimes to think about the behavior of a vector field by imagining it representing a fluid flow: then we can try to quantify the different ways the fluid can be changing: is it spreading out, or is it spinning? Below we formalize these notions using partial derivatives, arriving at the concepts of divergence and curl.

19.1 Divergence

The divergence of a vector field is a scalar quantity which measures how much the vector field is spreading out or contracting at a point. Imagine a small cloud of particles being blown around by the vector field. If that cloud expands over time, the divergence of the vector field is positive where the cloud is. If its volume contracts, the divergence is negative, and if it stays the same volume the divergence is zero. Here are some motivating examples:

Vector fields with positive divergence (left) and negative divergence (right).

A vector field with zero divergence.

How do we come up with an equation that could measure this, for a vector field \(\vec{F}=\langle P, Q\rangle\)? One idea is to look at a small square around a point, and try to understand how much fluid is flowing in or out of that square. If there’s a net outflow the fluid is expanding, a net inflow means it’s contracting, and equal flow in and out means there’s no divergence. To compute this, we measure the net flow through the two vertical faces as the difference of the horizontal components of the field there, and the net flow through the two horizontal faces as the difference of the vertical components found there.


The total flow is then the sum of these, and in the limit as the size of the box decreases to zero, these differences become derivatives, giving

\[\langle P,Q\rangle \mapsto \frac{\partial P}{\partial x}+\frac{\partial Q}{\partial y}\]

This quantity is the divergence, and can be given an easy-to-remember notation in terms of \(\nabla =\langle \partial_x,\partial_y\rangle\). We are taking the \(x\) derivative of the first component and the \(y\) derivative of the second and then adding the results: this looks an awful lot like a dot product!

Definition 19.1 (Divergence) The divergence of a vector field \(\vec{F}=\langle P,Q\rangle\) is \[\nabla \cdot \vec{F}=\langle \partial_x,\partial_y\rangle\cdot\langle P,Q\rangle=\partial_x P+\partial_y Q\]

Computing the divergence is no more difficult than computing partial derivatives:

Example 19.1 Compute the divergence of the following vector fields:

  • \(\langle x,y\rangle\)
  • \(\langle y, x\rangle\)
  • \(\langle x^2,xy\rangle\)
  • \(\langle x^2+3xy-2,4x^3-6xy^3-4y\rangle\)
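These can all be checked by hand, or symbolically; here is a quick sketch using the sympy library (an assumption on my part: any computer algebra system would do) that computes each divergence directly from the definition \(\partial_x P + \partial_y Q\).

```python
import sympy as sp

x, y = sp.symbols('x y')

def divergence(P, Q):
    # divergence of the 2D field <P, Q> is P_x + Q_y
    return sp.simplify(sp.diff(P, x) + sp.diff(Q, y))

fields = [
    (x, y),                                        # <x, y>
    (y, x),                                        # <y, x>
    (x**2, x*y),                                   # <x^2, xy>
    (x**2 + 3*x*y - 2, 4*x**3 - 6*x*y**3 - 4*y),   # last bullet
]

for P, Q in fields:
    # prints the divergence of each field in turn
    print(divergence(P, Q))
```

The first field has constant divergence 2 (it spreads out everywhere at the same rate), while the second is divergence-free.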

Now with a formula in hand, we can make sure we understand cases that might not match our initial intuition. Importantly, for a vector field to have divergence it doesn’t have to look like it’s spreading out: it just has to have more net outflow than inflow.

The vector field on the left has positive divergence: more fluid leaves each small box than enters.

The vector field on the right has zero divergence: even though the vectors are spreading out they are getting shorter, so the net flow through each small box is zero.

Finally, it is worth noting that the definition of divergence generalizes directly to higher dimensions: in 3D, for \(\vec{F}=\langle P,Q,R\rangle\), we have \[\nabla\cdot \vec{F}=\partial_x P + \partial_y Q + \partial_z R\]

and in \(n\) dimensions, if \(\vec{F}=\langle F_1,F_2,\ldots,F_n\rangle\) then \[\nabla\cdot F =\sum_{i=1}^n \partial_{x_i}F_i\]
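The \(n\)-dimensional formula is just a sum of partials, so it is easy to implement in general; a minimal sketch (again using sympy, with a sample field I made up for illustration):

```python
import sympy as sp

def divergence(F, variables):
    # n-dimensional divergence: the sum of dF_i/dx_i
    return sp.simplify(sum(sp.diff(Fi, xi) for Fi, xi in zip(F, variables)))

x, y, z = sp.symbols('x y z')

# a sample 3D field <xz, y^2, xyz>
div = divergence([x*z, y**2, x*y*z], [x, y, z])
print(div)  # the divergence: x*y + 2*y + z (up to term ordering)
```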

19.2 Curl

Let’s return to imagining little balls flowing around in a vector field again. We’ve found a way to quantify whether they spread out from each other, and our next goal is to capture their rotational motion. Here it’s helpful to think of just a single ball at a time flowing along with the fluid. For a 2-dimensional vector field we can imagine three possible cases: the flow could cause the ball to start to spin counterclockwise, spin clockwise, or glide along the flow without spinning. We will call these positive curl, negative curl and zero curl respectively.

Vector fields with positive curl (left) and negative curl (right)

A vector field with zero curl.

While seeing a vector field ‘look like it’s spinning’ can sometimes be a good indicator of curl, it’s important to remember this isn’t exactly what we are looking for. Curl is about the local properties of a vector field: whether it causes a small object to spin, not whether the field itself goes around in circles. Indeed, consider the following vector field

\[\vec{F}=\frac{y}{x^2+y^2}\hat{i}+\frac{-x}{x^2+y^2}\hat{j}\]

The vectors of this field all point around the origin in circles, and if you drop a ball in the field the ball will travel in a circle. But it will not rotate about its own axis: the slightly stronger push on the side nearer the origin is exactly balanced by the turning of the flow, so the contributions cancel. That is, even though the vectors go around in a circle, this field has zero curl (everywhere it is defined; note it is undefined at the origin).

A vector field that, on the other hand, does cause something to spin even though nothing travels in a circle is \(\vec{F}=\langle y,0\rangle\). Here a ball dropped in the fluid will move along a horizontal line, but the vector pushing on its top will be a different length than the vector pushing on its bottom, which induces rotation. (For me it’s helpful to think of nearby vectors as two friends pushing on each of my shoulders: if they push in the same direction with the same strength I won’t spin, but if one pushes harder than the other I will!)

With a clear qualitative picture in mind, we need to get quantitative about this. What sort of derivative measures rotational motion? Here, we find a use for our old friend the cross product:

Definition 19.2 (2 Dimensional Curl) If \(\vec{F}= P\hat{i}+Q\hat{j}\) is a 2 dimensional vector field, its curl is the scalar function \[\nabla\times \vec{F}=\left|\begin{matrix}\partial_x &\partial_y\\ P&Q\end{matrix}\right|=Q_x-P_y\]

Example 19.2 Compute the curl of the following vector fields:

  • \(\langle x,y\rangle\)
  • \(\langle y, x\rangle\)
  • \(\langle x^2,xy\rangle\)
  • \(\langle x^2+3xy-2,4x^3-6xy^3-4y\rangle\)
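As with divergence, these can be checked symbolically; a quick sketch with sympy (an assumed tool), computing \(Q_x - P_y\) for each field:

```python
import sympy as sp

x, y = sp.symbols('x y')

def curl2d(P, Q):
    # 2D curl of <P, Q> is Q_x - P_y
    return sp.simplify(sp.diff(Q, x) - sp.diff(P, y))

print(curl2d(x, y))       # 0: the radial field does not spin anything
print(curl2d(y, x))       # 0
print(curl2d(x**2, x*y))  # y
print(curl2d(x**2 + 3*x*y - 2, 4*x**3 - 6*x*y**3 - 4*y))
```

Note the first two fields are curl-free even though the second had zero divergence as well: divergence and curl measure independent behaviors.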

This definition also generalizes to 3 dimensions, but not as easily as divergence. Recall that the cross product behaved quite differently between two and three dimensions: in 2D it was simply a number (the signed area of the parallelogram spanned by the input vectors) whereas in 3 dimensions it was a vector (whose length was that area, and whose direction was orthogonal to the two inputs). Likewise, the three dimensional curl is no longer a scalar but a vector: its magnitude gives the rate of rotation and its direction gives the axis.

Definition 19.3 (3 Dimensional Curl) If \(\vec{F}=\langle P,Q,R\rangle\) is a 3-dimensional vector field, its curl is given by the vector field \[\nabla\times\vec{F}=\left|\begin{matrix}\hat{i}&\hat{j}&\hat{k}\\ \partial_x &\partial_y &\partial_z\\ P& Q&R\end{matrix}\right|\]

This definition actually fits together nicely with our 2D definition: given a two dimensional vector field \(\vec{F}=\langle P,Q\rangle\), you can place it in the xy plane in three dimensions by constructing the vector field \(\langle P(x,y),Q(x,y),0\rangle\). As is familiar from our study of equations, the lack of \(z\) in this vector field means the vectors are horizontal, and the same on every plane parallel to the xy plane. Taking the curl of this yields

\[\nabla\times\langle P,Q,0\rangle = \left|\begin{matrix}\hat{i}&\hat{j}&\hat{k}\\ \partial_x &\partial_y &\partial_z\\ P&Q&0\\\end{matrix}\right|=(Q_x-P_y)\hat{k}=(\nabla \times \vec{F})\hat{k}\]

That is, repeating a 2D vector field vertically gives a vector field whose curl points in the \(z\) direction (this is the axis of rotation for a horizontal vector field, so that makes sense!) and its magnitude is simply our familiar 2-dimensional curl \(Q_x-P_y\).
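We can confirm this identity symbolically for arbitrary components \(P(x,y)\) and \(Q(x,y)\); a sketch with sympy, using undetermined functions and expanding the curl determinant component by component:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
P = sp.Function('P')(x, y)  # arbitrary components with no z-dependence
Q = sp.Function('Q')(x, y)
R = 0                       # third component is zero

# expanding the determinant: curl <P,Q,R> = <R_y - Q_z, P_z - R_x, Q_x - P_y>
curl = [sp.diff(R, y) - sp.diff(Q, z),
        sp.diff(P, z) - sp.diff(R, x),
        sp.diff(Q, x) - sp.diff(P, y)]

print(curl)  # first two components vanish; third is Q_x - P_y
```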

Beyond three dimensions, curl becomes a much more complicated object to describe: already in four dimensional space the curl turns out to be a six dimensional vector! To be able to understand the curl in dimensions greater than three requires the more sophisticated language of differential forms, and is beyond the scope of our course. However, luckily most interesting applications of vector fields to the everyday world around us occur only in 2 and 3 dimensions, and so we will find plenty to discuss staying within this restricted realm.

19.3 Potentials and Antiderivatives

We now have three types of derivative that relate scalar and vector fields:

  • The Gradient \(\nabla f\), which takes a scalar field and outputs a vector field.
  • The Divergence \(\nabla\cdot \vec{F}\) which takes a vector field and outputs a scalar field.
  • The Curl \(\nabla\times\vec{F}\), which takes a vector field and outputs a scalar field (2D) or a vector field (3D).

We are on our way to studying integrals of vector fields, so it probably comes as no surprise that we might be interested in antiderivatives. Recall from calculus 1 that a function \(g\) is an antiderivative of \(f\) if \(\frac{d}{dx}g = f\). Here we will take a little time to contemplate finding antiderivatives for gradient, divergence, and curl. First, a bit of terminology: following physics we traditionally do not call these antiderivatives but rather potentials (as they arise in physics when describing potential energy, and the potentials of the electromagnetic field).

Note to anyone feeling a bit overwhelmed with everything towards the end of a semester: the only one of these we are actually going to need to do often is to find an antiderivative for the gradient; so if you understand the first case well you should be fine.

19.3.1 The Gradient & Conservative Vector Fields

Let’s start with the gradient:

Definition 19.4 (Potential (for Gradient)) A function \(f\) is a potential, or antiderivative, for the vector field \(\vec{F}\) if \(\nabla f = \vec{F}\).

For example, since we know \(\nabla(x^2+y^2)=\langle 2x,2y\rangle\), we can say that the vector field \(\vec{F}=\langle 2x,2y\rangle\) comes from the potential function \(f=x^2+y^2\). Just like antiderivatives of ordinary functions, potentials are not unique: the function \(x^2+y^2+5\) is also a potential for the same vector field \(\vec{F}\).

How can we find potentials more systematically? For a vector field \(\vec{F}=\langle P,Q\rangle\) to be the gradient \(\nabla f = \langle f_x,f_y\rangle\) of some potential \(f\), we would need \(P=f_x\) and \(Q=f_y\). These do not directly specify \(f\) but rather its \(x\) and \(y\) partial derivatives: this suggests we can attempt to solve for \(f\) via integration. It’s easiest to see in an example.

Example 19.3 Find a potential for the vector field \(\langle 2xy, x^2\rangle\)

If there is such an \(f(x,y)\) then \(f_x=2xy\) and \(f_y= x^2\). Integrating, \[f=\int f_x\, dx = \int 2xy \, dx = x^2y+C\] \[f=\int f_y\, dy = \int x^2\, dy = x^2y+K\]

Thus, in the one case we have found \(f=x^2y\) (perhaps plus a constant), and in the other we also find \(f=x^2y\) (perhaps plus a constant). Thus \(f=x^2y\) is a potential for \(F\).

Sometimes we have to be a bit more careful with the constants:

Example 19.4 Find a potential for the vector field \(\vec{F}=\langle 2xy+1,x^2+4y^3 \rangle\)

Applying the same trick we integrate \(f_x\) and \(f_y\) to find \[f = \int 2xy+1\, dx = x^2y + x + C \] \[f=\int x^2+ 4y^3\,dy = x^2y+y^4+K\]

Here at first our two expressions for \(f\) seem not to agree. But thinking a bit harder, we recall that \(C\) is a constant of integration from a \(dx\) integral, so \(C\) can actually depend on \(y\); similarly \(K\) is a constant of integration from a \(dy\) integral, so it could have \(x\)’s in it! With this realization, we see the solutions actually are consistent, with \(C=y^4\) and \(K=x\), giving the potential

\[f=x^2y+x+y^4\]
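It’s always worth checking a candidate potential by differentiating. Here is a quick symbolic check of the example above with sympy (an assumed tool): the gradient of our \(f\) should recover \(\vec{F}=\langle 2xy+1, x^2+4y^3\rangle\).

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2*y + x + y**4  # candidate potential from the example

grad = [sp.diff(f, x), sp.diff(f, y)]
print(grad)  # [2*x*y + 1, x**2 + 4*y**3], matching the components of F
```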

So far, finding potentials seems to fit well within our general framework of “to do something in multivariable calculus, do a Calculus I problem multiple times”: instead of finding just one integral, we now have both \(x\) and \(y\) derivatives, so we take both an \(x\) and a \(y\) integral and compare the results. But there is an important new subtlety in the multivariable case: not all vector fields have an antiderivative! (Compare this to single variable calculus, where every continuous function has an antiderivative, even if you struggle to write it down.)

The reason for this is actually pretty intuitive, so long as we recall the geometrical meaning of the gradient, which points in the direction of steepest increase of \(f\). Consider the vector field below, where all the vectors turn around in a big circle.

If there were a potential \(f\) for this vector field, then these arrows would point in the direction of steepest increase of \(f\). So, following them around a circle, you should find the value of \(f\) getting bigger and bigger (imagine hiking on the graph of \(f\): you’re going uphill the whole time). This leads to trouble after a full circle! At that point you are back at your original location (and so at your original elevation), yet you got there by walking continuously uphill! Such things may exist in art (such as this sketch of an impossible staircase by Escher) but cannot exist in reality. Thus, our circular vector field cannot have come from a potential.

Let’s try this example mathematically, and see where things go wrong. A circular vector field is \(\vec{F}=\langle y, -x\rangle\), and if we were to imagine that \(\vec{F}=\nabla f\) this would imply \(f_x = y\) and \(f_y=-x\). Trying to solve for \(f\) by partial integration gives

\[f=\int y\, dx = xy + C(y) \hspace{1cm} f=\int -x\, dy = -xy + K(x)\]

Thus \(f\) has to equal \(xy\) (possibly plus something involving only \(y\)) and simultaneously has to equal \(-xy\) (possibly plus something involving only \(x\)). No such function exists, since \(xy\) and \(-xy\) differ by \(2xy\), which depends on both variables; so \(f\) cannot exist. This confirms that, in contrast to Calc I, having an antiderivative in multivariable calculus is a special property and is not guaranteed. We call such special fields conservative.

Definition 19.5 (Conservative Vector Fields) A vector field \(\vec{F}\) is conservative if it is the gradient of some function \(f\), that is if \(\vec{F}\) has a potential, or \[\vec{F}=\nabla f\]

It would be nice to have a means of determining when a vector field has an antiderivative, and when it does not. Luckily there is an easy calculation to do so.

Theorem 19.1 (Curl of the Gradient is Zero) If \(f(x,y)\) is a function of two variables, then \[\nabla\times\nabla f = 0\]

We check this with a quick calculation: \(\nabla f = \langle f_x,f_y\rangle\) and then \[\nabla\times\nabla f = \nabla\times \langle f_x,f_y\rangle = \left|\begin{matrix}\partial_x & \partial_y \\ f_x & f_y\end{matrix}\right|\]

\[= \partial_x f_y -\partial_y f_x = f_{yx}-f_{xy}=0\]

where the final equality holds because the order of mixed partial derivatives does not matter!

Exercise 19.1 Check this still holds true in three dimensions, for \(f=f(x,y,z)\). \[\nabla\times\nabla f = \langle 0,0,0\rangle\]

This is useful to us because it gives us a definite check for when a potential cannot possibly exist: if \(\nabla\times \vec{F}\neq 0\) then there is no chance that \(\vec{F}=\nabla f\) for some \(f\), as the curl would have to be zero! The converse also holds, so long as the vector field is defined everywhere.
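For instance, this check immediately rules out the circular field \(\langle y, -x\rangle\) from above; a quick sympy computation:

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q = y, -x  # the circular field <y, -x>

curl = sp.diff(Q, x) - sp.diff(P, y)
print(curl)  # -2: nonzero, so <y, -x> cannot have a potential
```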

Theorem 19.2 (Existence of a Potential) Let \(\vec{F}\) be a vector field defined (and differentiable) everywhere on \(\RR^2\) or \(\RR^3\). Then it is possible to find a potential for \(\vec{F}\) if and only if \(\nabla \times\vec{F}=0\).

One must be careful in applying this theorem, however: it’s crucial that the vector field actually be defined everywhere. Check for yourself that the rotating vector field below has zero curl, even though we can see it is not a gradient (as it goes in a circle!). This does not contradict our theorem, because this vector field is not defined everywhere: it has a division-by-zero problem at the origin!

\[\vec{F}=\left\langle\frac{y}{x^2+y^2},\frac{-x}{x^2+y^2}\right\rangle\]
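Here is a symbolic check that the field \(\langle y/(x^2+y^2),\, -x/(x^2+y^2)\rangle\), the standard zero-curl-but-not-conservative example, really does have vanishing curl away from the origin:

```python
import sympy as sp

x, y = sp.symbols('x y')
r2 = x**2 + y**2
P, Q = y/r2, -x/r2  # undefined at the origin!

curl = sp.simplify(sp.diff(Q, x) - sp.diff(P, y))
print(curl)  # 0: zero curl, yet the field has no potential
```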

19.3.2 OPTIONAL: Undoing Divergence

Let’s turn to investigate a similar question for the divergence: this type of derivative takes a vector field to a scalar field, so the question we should ask is: given a scalar field \(f\), does there exist a vector field \(\vec{F}\) with \(\nabla \cdot \vec{F} = f\)? Such a vector field would be an antiderivative with respect to divergence. This does not seem to have a standard name, but one could call it a divergence potential.

Definition 19.6 (Potential for Divergence) Given a scalar field \(f\), a divergence potential is a vector field \(F\) such that \(\nabla \cdot F = f\).

Again we begin with a simple example: say \(f(x,y)=x+\sin(y)\): can we produce a potential \(\vec{F}\) for this with respect to divergence? If there were such a \(\vec{F}=\langle P,Q\rangle\), this would imply \(\nabla \cdot\vec{F}=P_x+Q_y=x+\sin(y)\), so we have one equation to solve for two unknowns. Such a thing is usually easy to solve: for instance here we could set \(P_x=x\) and \(Q_y=\sin(y)\), giving \(P=x^2/2\) and \(Q=-\cos(y)\), so \(\vec{F}=\langle x^2/2,-\cos y\rangle\). Or we could take \(P_x=x+\sin(y)\) and \(Q_y=0\), yielding another solution \(\vec{F}=\langle x^2/2+x\sin(y),0\rangle\); such vector fields are highly non-unique.

Indeed, this second trick shows one way we can always find a vector field whose divergence is \(f\), no matter what \(f\) is. If we set \(P=\int f(x,y)\,dx\) and \(Q=0\) then

\[\nabla \cdot \langle P,Q\rangle = \frac{\partial}{\partial x}\int f(x,y)dx + \frac{\partial}{\partial y}0 = f+0=f\]
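Here is the trick carried out symbolically for the earlier example \(f = x+\sin(y)\), sketched with sympy (an assumed tool):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x + sp.sin(y)  # the target divergence

P = sp.integrate(f, x)  # P = x**2/2 + x*sin(y)
Q = 0                   # the second component can simply be zero

div = sp.simplify(sp.diff(P, x) + sp.diff(Q, y))
print(div)  # x + sin(y), recovering f
```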

In stark contrast to the case of the gradient, it’s very easy to find an antiderivative for divergence! The non-uniqueness here is also pretty interesting: we found two vector fields above whose divergence is \(x+\sin(y)\), but the two behaved very differently: the ambiguity in the antiderivative isn’t just a single \(+C\) anymore! Investigating this further would take us too far afield so we will not, but for anyone interested, this is just the start of a deep mathematical theory involving real analysis and topology that has proven very useful in fundamental physics.

19.3.3 OPTIONAL: The Curl and Vector Potentials

Last but certainly not least, we can consider the same type of question for curl. This one does have a standard name due to its use in physics, and is simply called the Vector Potential (though note this term could have equally well applied to the divergence, so one may wish to call it the Curl Potential or Vector Potential for Curl to avoid confusion).

Definition 19.7 (Vector Potential (For Curl)) In two dimensions, given a scalar field \(g\), a vector field \(\vec{F}\) is a vector potential for \(g\) if \(\nabla\times F = g\), so \(F\) is an antiderivative of \(g\) for the curl derivative.

In three dimensions, since the curl returns a vector field, we have to consider antiderivatives of vector fields instead: given a vector field \(\vec{G}\), a vector field \(\vec{F}\) is a vector potential if \(\nabla \times \vec{F} = \vec{G}\).

The existence of a vector potential in this case depends on the dimension. For 2D vector fields (where the curl is a scalar), one can always find a vector potential using the same trick we did for the divergence.

Example 19.5 Find a vector potential \(F\) for the scalar field \(x^2y\).

A potential would be a vector field \(F=\langle P,Q\rangle\) where \(\nabla\times F = Q_x-P_y=x^2y\). We have just one equation to satisfy here, so it’s easy to produce a solution: one choice is to set \(Q_x=x^2y\) and \(P_y=0\). Then \(Q=\int x^2y\, dx = x^3y/3\) and \(P=0\), giving \[F=\langle 0, x^3y/3\rangle\]
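And a quick symbolic check of this solution by differentiating, with sympy: the 2D curl of \(\langle 0, x^3y/3\rangle\) should recover \(x^2y\).

```python
import sympy as sp

x, y = sp.symbols('x y')
P, Q = 0, x**3*y/3  # candidate vector potential from the example

curl = sp.diff(Q, x) - sp.diff(P, y)
print(curl)  # x**2*y
```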

In 3D, we have to contend with curl being a vector, which gives a system of three equations that need to simultaneously be solved:

Exercise 19.2 Find a vector potential \(F\) for the vector field \(\vec{G}=\langle y, z, x\rangle\).

It’s easy to imagine that this may no longer always be possible, and indeed it’s not: a computation with divergence and curl provides an obstruction:

Theorem 19.3 Let \(\vec{F}\) be a 3D vector field. Then \(\nabla\cdot\nabla\times F = 0\)

Exercise 19.3 Check this, for an arbitrary vector field \(\vec{F}=\langle P,Q,R\rangle\).

We can use this just like the identity for curl and the gradient, to give a strict constraint on when a vector field \(G\) cannot have a vector potential.

Theorem 19.4 Let \(\vec{G}\) be a 3D vector field. If \(\nabla\cdot \vec{G}\neq 0\), then it is impossible to find a vector potential \(\vec{F}\) with \(\vec{G}=\nabla\times \vec{F}\).