$$ \newcommand{\RR}{\mathbb{R}} \newcommand{\QQ}{\mathbb{Q}} \newcommand{\CC}{\mathbb{C}} \newcommand{\NN}{\mathbb{N}} \newcommand{\ZZ}{\mathbb{Z}} \newcommand{\EE}{\mathbb{E}} \newcommand{\HH}{\mathbb{H}} \renewcommand{\SS}{\mathbb{S}} \newcommand{\DD}{\mathbb{D}} \newcommand{\pp}{^{\prime\prime}} \newcommand{\p}{^\prime} \newcommand{\proj}{\operatorname{proj}} \newcommand{\area}{\operatorname{area}} \newcommand{\len}{\operatorname{length}} \newcommand{\acc}{\operatorname{acc}} \newcommand{\ang}{\sphericalangle} \newcommand{\map}{\mathrm{map}} \newcommand{\SO}{\operatorname{SO}} \newcommand{\dist}{\operatorname{dist}} \newcommand{\length}{\operatorname{length}} \newcommand{\uppersum}[1]{{\textstyle\sum^+_{#1}}} \newcommand{\lowersum}[1]{{\textstyle\sum^-_{#1}}} \newcommand{\upperint}[1]{{\textstyle\smallint^+_{#1}}} \newcommand{\lowerint}[1]{{\textstyle\smallint^-_{#1}}} \newcommand{\rsum}[1]{{\textstyle\sum_{#1}}} \newcommand{\partitions}[1]{\mathcal{P}_{#1}} \newcommand{\erf}{\operatorname{erf}} \newcommand{\pmat}[1]{\begin{pmatrix}#1\end{pmatrix}} \newcommand{\smat}[1]{\left(\begin{smallmatrix}#1\end{smallmatrix}\right)} $$

9  Zooming Out

9.1 One Variable Integration

The quintessential ‘zoom out’ technique in mathematics is integration. It allows us to add up, or integrate together, a continuum of infinitesimally small changes into a single finite change. While its definition is in terms of a limit (a Riemann sum, as we reviewed in the chapter on the Fundamental Strategy), the true power of calculus is that we do not need to compute this limit, but instead can antidifferentiate!

Remark 9.1. In calculus classes we often write integration over an interval \([a,b]\) by putting the bounds on the top and bottom of the integral sign, like \(\int_a^b\). You are welcome to continue using this notation; however, I will sometimes opt to put the entire interval in the subscript, as \(\int_{[a,b]}\). This fits better with notation for double integrals like \(\iint_R\) and other generalizations, where the domain usually appears as a subscript.

Theorem 9.1 (The Fundamental Theorem of Calculus) Let \(f\) be a continuous function on \([a,b]\), and \(F\) be an antiderivative of \(f\) - that is, a function such that \(F^\prime(x)=f(x)\). Then we may integrate \(f\) using this antiderivative:

\[\int_{[a,b]}f(x)dx=F(b)-F(a)\]

Because of this, we will use the indefinite integral \(\int f\,dx\) as notation for the collection of antiderivatives of \(f\). In this class we’ll assume familiarity with the one-dimensional integral as seen in a Calculus I and II course. That means we’ll be free to use antidifferentiation, \(u\)-substitution, integration by parts, etc. where helpful.
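We can even sanity-check the Fundamental Theorem on a computer: a Riemann sum should agree with \(F(b)-F(a)\). Here is a small Python sketch; the particular choices \(f=\cos\), \(F=\sin\) on \([0,1]\) are just illustrative assumptions, and any function with a known antiderivative would do.

```python
# Check the Fundamental Theorem of Calculus numerically:
# a midpoint Riemann sum for f = cos on [0, 1] should match
# F(b) - F(a) = sin(1) - sin(0) for the antiderivative F = sin.
import math

def riemann_sum(f, a, b, n=100_000):
    """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

approx = riemann_sum(math.cos, 0, 1)
exact = math.sin(1) - math.sin(0)
print(approx, exact)  # both close to 0.84147
```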

Exercise 9.1 Compute the following integrals, as a refresher of your calculus skills:

\[\int \sin(2q-3)dq\hspace{1cm}\int \frac{x}{x+1}dx\] \[\int y^2 e^{y^3}dy\hspace{1cm}\int t^2e^tdt\]

Besides calculation, theoretical properties of the integral will also be useful in helping us prove things. Two of the fundamental properties of the integral are below.

Proposition 9.1 (Subdividing Intervals) If \(f\) is an integrable function on the interval \([a,b]\) and \(c\) is some point inside the interval (that is, \(a<c<b\)), then \[\int_{[a,b]}fdx=\int_{[a,c]}fdx+\int_{[c,b]}fdx\]
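This is easy to see numerically: chopping the interval anywhere and summing the pieces reproduces the whole. A quick Python sketch, with the arbitrary illustrative choices \(f(x)=x^2\) on \([0,2]\), split at \(c=0.7\):

```python
# Numerical illustration of subdividing intervals: the integral over
# [0, 2] should equal the sum of the integrals over [0, 0.7] and [0.7, 2].
def midpoint_integral(f, a, b, n=100_000):
    """Midpoint Riemann sum of f on [a, b] with n equal subintervals."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**2
whole = midpoint_integral(f, 0, 2)
pieces = midpoint_integral(f, 0, 0.7) + midpoint_integral(f, 0.7, 2)
print(whole, pieces)  # both close to 8/3 = 2.6666...
```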

When we interpret the integral as area, this proposition is one of the Greek area axioms - but it is now not an assumption but rather something we can prove! There’s one other property of the integral that is rather straightforward from its interpretation as area: an integral of a function that has some positive area, and no negative area to cancel it out, must be positive!

Proposition 9.2 (Integrating Positive Functions) Let \(f\) be a continuous function, and \([a,b]\) an interval.

  • If \(f(x)\geq 0\) for all \(x\) in \([a,b]\), then \(\int_a^b f(x)dx\geq 0\).
  • If \(f(x)>0\) for all \(x\) in \([a,b]\), then \(\int_a^b f(x)dx>0\).

As a consequence of this, if we have a continuous function \(f\) which is nonnegative on an interval, and which we know to be positive at some point, then the integral of that function must be positive. This will prove useful to us, so we’ll separate it off as a corollary:

Corollary 9.1 If \(f\) is continuous and nonnegative on \([a,b]\), and \(f\) is nonzero at some point, then \[\int_a^b f(x)dx>0\]

Proof. Say \(f\) is nonnegative on an interval \([a,b]\), and is nonzero (so, necessarily positive) at some point \(c\). Then since \(f\) is continuous there is some small interval \([l,r]\) around \(c\) on which \(f\) is positive, where we shrink \([l,r]\) if necessary so that it lies inside \([a,b]\). We can then break our original interval into three pieces: \[[a,b]=[a,l]\cup [l,r]\cup [r,b]\] By Proposition 9.1, we can break the integral over \([a,b]\) into a sum of integrals over each of these three intervals: \[\int_{[a,b]}fdx = \int_{[a,l]}fdx+\int_{[l,r]}fdx+\int_{[r,b]}fdx\] The first and last of these are nonnegative by Proposition 9.2, since \(f\) is nonnegative on the whole interval. But the middle one is strictly positive, as \(f\) is positive on the entire interval \([l,r]\). Thus the overall integral is a sum of a positive number and two others which are either positive or zero: the result is positive! Hence, \[\int_{[a,b]}fdx>0\]

9.2 Multi-Variable Integration

If integrals are a means of ‘zooming out’ along a line, how do we zoom out in the plane? We need a higher-dimensional analog of the integral: the double integral.

Definition 9.1 (Double Integral Riemann Sum) Let \(f\) be a function defined on a rectangle \(R=[a,b]\times[c,d]\). Partition \(R\) into small subrectangles \(R_{ij}\), each of area \(\Delta A\), and choose a sample point \((x_i,y_j)\) in each. The corresponding Riemann sum is \[\sum_{i,j}f(x_i,y_j)\Delta A\] and the double integral \(\iint_R f\,dA\) is the limit of these sums as the subrectangles shrink to points, whenever this limit exists.

Are we going to need a whole new theory of calculus for this? Two dimensional Riemann sums, two dimensional integrals, and a two dimensional fundamental theorem? Happily no! It turns out much of two-dimensional integration can be summed up by saying “do one dimensional integration, but twice”.

Proposition 9.3 (Fubini’s Theorem) An integral of a continuous function over a rectangle \(I\times J\) in the plane can be computed as two one-dimensional integrals, one for the \(x\) variable and one for the \(y\):

\[\int_{I\times J} f(x,y)dA = \int_I\left(\int_J f(x,y) dy\right)dx\]
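We can watch Fubini’s theorem happen numerically: a genuine two-dimensional Riemann sum over a grid agrees with the iterated one-variable computation. In the sketch below the function \(f(x,y)=x^2y\) on \([0,1]\times[0,2]\) is an arbitrary illustrative choice, with exact value \(\frac{1}{3}\cdot 2=\frac{2}{3}\).

```python
# A 2D midpoint Riemann sum over a grid, versus the iterated
# one-variable-at-a-time computation, for f(x, y) = x^2 * y.
def double_riemann(f, a, b, c, d, n=400):
    """Midpoint Riemann sum over the rectangle [a,b] x [c,d]."""
    dx, dy = (b - a) / n, (d - c) / n
    return sum(
        f(a + (i + 0.5) * dx, c + (j + 0.5) * dy)
        for i in range(n) for j in range(n)
    ) * dx * dy

def midpoint_integral(f, a, b, n=2000):
    """Midpoint Riemann sum of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x, y: x**2 * y
grid = double_riemann(f, 0, 1, 0, 2)
iterated = midpoint_integral(lambda x: midpoint_integral(lambda y: f(x, y), 0, 2), 0, 1)
print(grid, iterated)  # both close to 2/3
```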

Thus, there is nothing more to the theory of double integrals than doing a single-variable integral twice! It’s easiest to see via example:

Example 9.1 (Iterated Integrals) Let \(R=[0,2]\times [0,3]\) be a rectangle in the \(x,y\) plane. To compute the integral \(\iint_R xy+1\, dA\), we write this as an integral for \(x\) from \(0\) to \(2\) and an integral of \(y\) from \(0\) to \(3\):

\[\int_{[0,2]}\left(\int_{[0,3]}xy+1 dy\right)dx\] We now compute the inside integral (with respect to \(y\)) first: \[\int_{[0,3]}xy+1 dy=x\frac{y^2}{2}+y\Bigg|_{y=0}^{y=3}=\frac{9}{2}x+3\]

Then, we integrate this with respect to \(x\): \[\int_{[0,2]}\frac{9}{2}x+3 dx= \frac{9}{4}x^2+3x\Bigg|_{x=0}^{x=2}=15\]
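A quick numeric check of this example: a midpoint Riemann sum over the rectangle \([0,2]\times[0,3]\) should approach the value \(15\) we just computed.

```python
# Brute-force check of the iterated integral of x*y + 1 over [0,2] x [0,3].
def double_riemann(f, a, b, c, d, n=300):
    """Midpoint Riemann sum over the rectangle [a,b] x [c,d]."""
    dx, dy = (b - a) / n, (d - c) / n
    return sum(
        f(a + (i + 0.5) * dx, c + (j + 0.5) * dy)
        for i in range(n) for j in range(n)
    ) * dx * dy

value = double_riemann(lambda x, y: x * y + 1, 0, 2, 0, 3)
print(value)  # close to 15
```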

It’s even possible for the bounds of the inner integral to contain the variable of the outer integral:

Example 9.2 Compute the iterated integral below: \[\int_0^1\int_{x-3}^{x^2} x(2y+1)dydx\] We begin with the inner integral, which is \(dy\), so the \(x\) is (temporarily) a constant: \[\int_{x-3}^{x^2}x(2y+1)dy=x\left(y^2+y\right)\Bigg|_{x-3}^{x^2}\] \[= x\left((x^2)^2+(x^2)\right)-x\left((x-3)^2+(x-3)\right)\]

\[ =x^5+5x^2-6x \]

Now we’ve finished the inner integral, and we need to proceed to the next one: \[\int_0^1 x^5+5x^2-6x\,dx = \frac{x^6}{6}+\frac{5}{3}x^3-3x^2\Bigg|_0^1\] \[=\frac{1}{6}+\frac{5}{3}-3=-\frac{7}{6}\]
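As always, we can sanity-check the algebra by brute force, computing the inner \(dy\)-integral numerically at each value of \(x\) and then the outer \(dx\)-integral (a rough Python sketch, with no symbolic algebra):

```python
# Numeric check of the iterated integral of x*(2y + 1) for y from x - 3
# to x^2 and x from 0 to 1, via nested midpoint Riemann sums.
def midpoint_integral(f, a, b, n=2000):
    """Midpoint Riemann sum of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

inner = lambda x: midpoint_integral(lambda y: x * (2 * y + 1), x - 3, x**2)
value = midpoint_integral(inner, 0, 1)
print(value)
```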

Exercise 9.2 (Iterated Integrals) For practice, compute the following iterated integrals.

9.3 Power Series

Besides integration, the other zoom-out type technique we saw time and again in introductory calculus was the construction of a power series from the derivatives of a function. Power series constructed this way are often called Taylor Series.

Remark 9.2. Named after Brook Taylor, who introduced them in 1715. However, many such series were known earlier, appearing in the works of Isaac Newton in the 1600s, and of Madhava in the 1300s.

Definition 9.2 (Power Series: Taylor’s Version) A power series is an infinite series of the form \(\sum_{n=0}^\infty a_nx^n\) for some constants \(a_n\). If \(f(x)\) is a function, the Taylor series for \(f\) is a power series that represents the function \(f(x)\) in terms of its derivatives at \(x=0\):

\[\begin{align*} f(x)&=f(0)+f^\prime(0)x+f^{\prime\prime}(0)\frac{x^2}{2}+f^{\prime\prime\prime}(0)\frac{x^3}{3!}+\cdots\\ &=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n \end{align*}\]

Example 9.3 (Power Series for \(e^x\)) Because the derivative of \(e^x\) is itself, and \(e^0=1\), every derivative of \(e^x\) at \(x=0\) is equal to \(1\), and its power series is \[e^x=\sum_{n=0}^\infty \frac{1}{n!}x^n\]
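We can see this series in action numerically: its partial sums approach \(e^x\) quite quickly. A small sketch, with the evaluation point \(x=1.5\) chosen arbitrarily:

```python
# Partial sums of the series sum x^n / n! approach e^x.
import math

def exp_series(x, terms=30):
    """Partial sum of the power series for e^x."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(1.5), math.exp(1.5))  # both close to 4.4816...
```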

One of the reasons that power series are such a powerful tool in calculus is the ability to do math with them: we can treat them like any other function, composing them with other functions, differentiating them, and integrating them!

Example 9.4 Given that the power series for \(\frac{1}{1-x}\) is \(\sum x^n\), we can find the power series for \(1/(1-2x^2)\) by substituting \(2x^2\) for \(x\):

\[\frac{1}{1-2x^2}=\sum_{n=0}^\infty (2x^2)^n=\sum_{n=0}^\infty 2^nx^{2n}\]
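Checking the substitution numerically is straightforward: partial sums of \(\sum (2x^2)^n\) should approach the closed form \(\frac{1}{1-2x^2}\), at least at points where \(|2x^2|<1\). The evaluation point \(x=0.3\) below is an arbitrary illustrative choice.

```python
# Compare partial sums of sum (2x^2)^n with the closed form 1/(1 - 2x^2)
# at x = 0.3, where 2x^2 = 0.18 is well inside the region of convergence.
x = 0.3
partial = sum((2 * x**2)**n for n in range(60))
closed = 1 / (1 - 2 * x**2)
print(partial, closed)  # both close to 1.2195...
```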

Proposition 9.4 (Calculus With Power Series) Given a power series \(f(x)=\sum_{n}a_nx^n\) we can differentiate and integrate the series term-by-term: \[f^\prime(x)=\sum_{n=0}^\infty a_n(x^n)^\prime=\sum_{n=1}^\infty na_n x^{n-1}\] \[\int fdx = \sum_{n=0}^\infty a_n \left(\int x^n dx\right)=\sum_{n=0}^\infty \frac{a_n}{n+1}x^{n+1}\]

Example 9.5 (The power series for \(\arctan(x)\)) Given the power series \(\frac{1}{1-x}=\sum x^n\), we can create the power series for \(\frac{1}{1+x^2}\) by substituting \(-x^2\) for \(x\):

\[\frac{1}{1+x^2}=\sum_{n=0}^\infty (-x^2)^n=\sum_{n=0}^\infty (-1)^n x^{2n}\]

Now, since \(\frac{1}{1+x^2}\) is the derivative of \(\arctan(x)\), we need only antidifferentiate this series term by term to find the Taylor series for the arctangent: \[\begin{align*}\arctan(x)&=\int\frac{1}{1+x^2}dx\\ &= \int\sum_{n=0}^\infty (-1)^n x^{2n}\,dx\\ &=\sum_{n=0}^\infty (-1)^n \int x^{2n}dx\\ &=\sum_{n=0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}\\ &=x-\frac{x^3}{3}+\frac{x^5}{5}-\frac{x^7}{7}\cdots \end{align*}\]
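The partial sums of this series really do converge to the arctangent, which we can watch happen numerically. A small sketch, with the evaluation point \(x=0.5\) chosen arbitrarily inside the interval of convergence:

```python
# Partial sums of the arctangent series versus math.atan, at x = 0.5.
import math

def arctan_series(x, terms=40):
    """Partial sum of the Taylor series for arctan(x)."""
    return sum((-1)**n * x**(2 * n + 1) / (2 * n + 1) for n in range(terms))

print(arctan_series(0.5), math.atan(0.5))  # both close to 0.46364...
```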

Exercise 9.3 Find power series for the following functions.

All of these techniques make power series a very useful tool indeed. But of course those of you who remember Calculus 2 well know that we have so far left out an important and subtle piece of the story: when do power series work at all? Series don’t always converge, and to tell when they do we have a variety of different convergence tests to help us out. Happily, all the series we will come across are power series, where checking convergence is straightforward.

Theorem 9.2 (Radius of Convergence) If \(f(x)=\sum a_n x^n\) is a power series, let \[\alpha = \lim_{n\to\infty} \left|\frac{a_{n+1}}{a_n}\right|\] assuming this limit exists. Then \(f\) converges by the ratio test at \(x\) if \(|\alpha x|<1\), or \(|x|<\frac{1}{\alpha}\).

Remark 9.3. Warning: not all functions have power series, and those that do are called analytic. Happily, all functions we will encounter in this course are analytic, so we can push this concern to the back of our minds.

This value \(R=\frac{1}{\alpha}\) is called the radius of convergence. Many of the series that will be of use to us in this class (sine, cosine, and their hyperbolic counterparts) converge on the entire real line, and we will not have to worry about such things.
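For instance, the exponential series has \(a_n=1/n!\), so the coefficient ratios are \(|a_{n+1}/a_n|=1/(n+1)\), which tend to \(0\); thus \(\alpha=0\) and the radius of convergence is infinite. A short sketch of this computation:

```python
# Coefficient ratios for the series of e^x: a_n = 1/n!, so
# |a_{n+1}/a_n| = n!/(n+1)! = 1/(n+1), shrinking toward 0 as n grows.
# Hence alpha = 0 and the series converges for every real x.
import math

ratios = [(1 / math.factorial(n + 1)) / (1 / math.factorial(n)) for n in range(1, 20)]
print(ratios[0], ratios[-1])  # 0.5 ... 0.05
```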