Section 25.3 Toward the Riemann Hypothesis
Riemann was after bigger fish. He didn't just want an error term. He wanted an exact formula for \(\pi(x)\), one that could be computed, by hand or by machine (if such a machine came along), as closely as one pleased. And this is where \(\zeta(s)\) becomes important, because of the Euler product formula: \[\sum_{n=1}^{\infty} \frac{1}{n^s}=\prod_{p}\frac{1}{1-p^{-s}}\]
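To get a feel for this identity, here is a small numerical check of my own (not part of the text): a partial sum of the series and a partial product over primes, both at \(s=2\), head toward the same value as more terms and primes are included.

```python
# A numerical sanity check of the Euler product (a sketch, not from the text):
# compare a partial sum of zeta(s) with a partial product over primes, at s = 2.

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2.0
partial_sum = sum(1 / n ** s for n in range(1, 100001))
partial_product = 1.0
for p in primes_up_to(1000):
    partial_product *= 1 / (1 - p ** (-s))

print(partial_sum)      # roughly 1.64492, approaching pi^2/6 = zeta(2)
print(partial_product)  # close to the same value; the product also tends to zeta(2)
```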
Somehow \(\zeta\) does encode everything we want to know about prime numbers. And Riemann's paper, “On the Number of Primes Less Than a Given Magnitude”, is the place where this magic really does happen; seeing just how it happens is our goal as we close the course.
We'll begin by plotting \(\zeta\), to see what's going on.
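(The plot in the text is interactive; the following is a minimal static sketch of the same idea, assuming the mpmath and matplotlib libraries are available.)

```python
# Static sketch of a plot of zeta(s) for real s, skipping the pole at s = 1
# (assumes mpmath and matplotlib; not the book's interactive cell).
import mpmath
import matplotlib.pyplot as plt

left = [x / 100 for x in range(-500, 95)]     # s from -5 up to just below 1
right = [x / 100 for x in range(105, 1001)]   # s from just above 1 up to 10

plt.plot(left, [float(mpmath.zeta(s)) for s in left], color="blue")
plt.plot(right, [float(mpmath.zeta(s)) for s in right], color="blue")
plt.xlabel("s")
plt.ylabel("zeta(s)")
plt.show()
```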
As you can see, \(\zeta(s)\) doesn't seem to hit zero very often. Maybe for negative \(s\)...
Subsection 25.3.1 Zeta beyond the series
Wait a minute! What is this plot? Shouldn't \(\zeta\) diverge if you put negative numbers in for \(s\)? After all, then we'd get things like \[\sum_{n=1}^\infty n\] for \(s=-1\), and somehow I don't think that converges.
In fact, it turns out that we can evaluate \(\zeta(s)\) for nearly any complex number \(s\) we desire. The graphic above color-codes where each complex number lands by matching it to the color in the second graphic.
The important point isn't the picture itself, but that there is a picture. Yes, \(\zeta\) can be defined for (nearly) any complex number as input.
One way to see this is by looking at each term \(\frac{1}{n^s}\) in \(\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}\). If we let \(s=\sigma+it\) (a long-standing convention, instead of \(x+iy\)), we can rewrite \[n^{-s}=e^{-s\ln(n)}=e^{-(\sigma+it)\ln(n)}=e^{-\sigma\ln(n)}e^{-it\ln(n)}=n^{-\sigma}\left(\cos(t\ln(n))-i\sin(t\ln(n))\right)\] The last step comes from something you may remember from calculus, and that is very easy to prove with Taylor series: \[e^{ix}=\cos(x)+i\sin(x)\; .\]
So at least if \(\sigma>1\), since \(\cos\) and \(\sin\) always have absolute value at most one, taking the real and imaginary parts separately gives two series whose convergence is controlled by \(n^{-\sigma}\), just as in the real case: \[\zeta(s)=\sum_{n=1}^\infty\frac{\cos(t\ln(n))}{n^{\sigma}}-i\sum_{n=1}^\infty\frac{\sin(t\ln(n))}{n^{\sigma}}\; .\]
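As a quick numerical check (my own, not in the text, and assuming the mpmath library), the partial sums of these two real series do match a direct evaluation of \(\zeta(\sigma+it)\) when \(\sigma>1\):

```python
# Check the real/imaginary split of zeta(sigma + i*t) against a direct
# evaluation, for sigma > 1 (a sketch; assumes mpmath).
import math
import mpmath

sigma, t = 2.0, 3.0
N = 200000  # number of terms in each partial sum

real_part = sum(math.cos(t * math.log(n)) / n ** sigma for n in range(1, N + 1))
imag_part = -sum(math.sin(t * math.log(n)) / n ** sigma for n in range(1, N + 1))

print(complex(real_part, imag_part))               # partial sums of the split series
print(complex(mpmath.zeta(mpmath.mpc(sigma, t))))  # direct evaluation of zeta(2 + 3i)
```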
That doesn't explain the part of the complex plane on the left of the picture above; all I will say is that it is possible to extend \(\zeta\) there, by a process called analytic continuation, and Riemann did it. (In fact, Riemann is also largely responsible for much of advanced complex analysis.) As an example, \(\zeta(-1) = -\frac{1}{12}\), which is very close to saying that \[1+2+3+4+5+6+7+8+9+10+\cdots=-\frac{1}{12}\; .\]
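(If you have the mpmath library handy, you can see the extended value directly; this check is mine, not the text's.)

```python
# The analytically continued zeta function at s = -1 (assumes mpmath).
import mpmath

print(mpmath.zeta(-1))   # -0.0833333333333333
print(-1 / 12)           # the same value, -1/12, for comparison
```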
Subsection 25.3.2 Zeta on some lines
Let's get a sense for what the \(\zeta\) function looks like. First, a three-dimensional plot of its absolute value for \(\sigma\) between 0 and 1 (which will turn out to be all that is important for our purposes).
To get a better idea of what happens, we look at the absolute value of \(\zeta\) for different inputs. Here, we look at \(\zeta(\sigma+it)\), where \(\sigma\) is the real part, chosen by you, and then we plot \(t\) out as far as requested. Opposite that is the line which we are viewing on the complex plane.
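(Here is a static sketch of this kind of plot, assuming mpmath and matplotlib; the particular values of \(\sigma\) and the range of \(t\) are just my choices.)

```python
# Plot |zeta(sigma + i*t)| along a few vertical lines in the critical strip
# (a sketch; assumes mpmath and matplotlib).
import mpmath
import matplotlib.pyplot as plt

ts = [k / 10 for k in range(0, 401)]  # t from 0 to 40
for sigma in (0.25, 0.5, 0.75):
    values = [abs(complex(mpmath.zeta(mpmath.mpc(sigma, t)))) for t in ts]
    plt.plot(ts, values, label=f"sigma = {sigma}")

plt.xlabel("t")
plt.ylabel("|zeta(sigma + i t)|")
plt.legend()
plt.show()
```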
You'll notice that the only places the function has absolute value zero (which means the only places it hits zero) are when \(\sigma=1/2\).
Another (very famous) image is that of the parametric graph of each vertical line in the complex plane as mapped to the complex plane. You can think of this as where an infinitely thin slice of the complex plane is “wrapped” to.
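(Here is a rough static version for the line \(\sigma=1/2\), assuming mpmath and matplotlib; other vertical lines can be drawn the same way.)

```python
# Parametric image of the vertical line sigma = 1/2 under zeta
# (a static sketch of the famous picture; assumes mpmath and matplotlib).
import mpmath
import matplotlib.pyplot as plt

sigma = 0.5
ts = [k / 20 for k in range(0, 701)]  # t from 0 to 35
points = [complex(mpmath.zeta(mpmath.mpc(sigma, t))) for t in ts]

plt.plot([z.real for z in points], [z.imag for z in points])
plt.scatter([0], [0], color="red")  # mark the origin
plt.xlabel("Re(zeta)")
plt.ylabel("Im(zeta)")
plt.show()
```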
The reason this image is so famous is that the only time it seems to hit the origin at all is precisely at \(\sigma=1/2\). There, it hits it lots of times. Everywhere else it just misses, somehow.
That is not one hundred percent true, because \(\zeta\) is also zero at the negative even integers, but those (so-called trivial) zeros are well understood; the zeros with \(t\neq 0\) are the mysterious part. And so we have the
Riemann Hypothesis \[\text{ All the zeros of }\zeta(s)=\zeta(\sigma+it)\text{ where }t\neq 0\text{ are on }\sigma=1/2\; .\]
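One can at least verify the first few of these zeros numerically (a sketch assuming the mpmath library, whose zetazero function returns the \(n\)th zero above the real axis):

```python
# The first few nontrivial zeros of zeta, each with real part 1/2
# (assumes mpmath; zetazero(n) returns the nth zero with t > 0).
import mpmath

for n in range(1, 6):
    print(mpmath.zetazero(n))
# The first is approximately 0.5 + 14.1347251417347j.
```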
The importance of this problem is evidenced by it having been selected as one of the seven Millennium Prize problems by the Clay Math Institute (each holding a million-dollar award), as well as having no fewer than three recent popular books devoted to it (including the best one from our mathematical perspective, Prime Obsession by John Derbyshire).