We are now ready to prove four application facts using these tools. Some have proofs of other kinds, but number theory combined with calculus really provides a unified framework for a huge number of problems.
- The probability that a random integer lattice point is visible from the origin is \(\frac{6}{\pi^2}\).
- The Dirichlet series for \(f(n)=|\mu(n)|\) is \(\zeta(s)/\zeta(2s)\).
- The average value of \(\phi(n)\) is \(\frac{3n}{\pi^2}\).
- The “prime harmonic series” sum \[\qquad \sum_{n=1}^{\infty}\frac{1}{p_n}\] diverges, with \(p_n\) the \(n\)th prime.
Subsection 24.6.1 Random integer lattice points
The following graphic indicates what it means for a point to be visible from the origin: is there a lattice point directly between it and the origin or not? Note that the estimated probabilities will vary wildly. Especially at a prime distance, one should expect the computed probability to be higher than the theoretical one; why?
It should be pretty clear from the picture that if \(\gcd(x,y)=d>1\), then \(\left(\frac{x}{d},\frac{y}{d}\right)\) is right on the line of sight from the origin to \((x,y)\), so that \((x,y)\) is blocked off. So the following fact is the same thing as asking for the probability that two randomly chosen integers are relatively prime.
Proposition 24.6.1
The probability that a random integer lattice point is visible from the origin is \(\frac{6}{\pi^2}\).
Proof
We will prove the statement about coprime random integers, or at least we will prove as much of it as we are ready for at this point.
First, we know that \(\gcd(x,y)=1\) is true precisely if \(x\) and \(y\) are never simultaneously congruent to zero modulo \(p\), for any prime \(p\).
For any given prime \(p\), the chance that two random integers are both congruent to zero modulo \(p\) is \[\left(\frac{1}{p}\right)\left(\frac{1}{p}\right)\, .\] This works because the two probabilities really are independent, since \(p\) is fixed.
Hence the probability that at least one won't be divisible by \(p\) is \[1-\left(\frac{1}{p}\right)\left(\frac{1}{p}\right)=1-\frac{1}{p^2}=1-p^{-2}\, .\]
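For instance, with \(p=3\) the chance that both members of a random pair are divisible by \(3\) is \(\frac{1}{3}\cdot\frac{1}{3}=\frac{1}{9}\), so the chance that at least one is not divisible by \(3\) is \(1-\frac{1}{9}=\frac{8}{9}=1-3^{-2}\).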
Next, we make a reasonable assumption:
The probability that any given integer is divisible by \(p\) has nothing to do with whether it is divisible by \(q\) if \(p\) and \(q\) are distinct primes.
This is not in general true for such properties; if \(n\) is divisible by 4 it has a 100% likelihood of being divisible by 2, while if \(n\) is prime, it has almost no chance of being even.
In this case it is legitimate to multiply the probabilities (even infinitely many, another assumption we will sweep under the rug) so that \[\prod_p (1-p^{-2})=1/\prod_p \frac{1}{1-p^{-2}}=1/\zeta(2)=\frac{6}{\pi^2}\, .\]
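As a quick numerical sanity check (a minimal Python sketch, not part of the text), we can multiply the factors \(1-p^{-2}\) over all primes up to some bound and watch the partial product approach \(6/\pi^2\approx 0.608\):

```python
from math import pi

def primes_up_to(bound):
    """Simple sieve of Eratosthenes returning all primes <= bound."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

product = 1.0
for p in primes_up_to(1000):
    product *= 1 - p ** -2          # one Euler factor per prime

print(product)        # partial product over primes up to 1000
print(6 / pi ** 2)    # the limiting value 6/pi^2 ~ 0.6079
```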
This implies that a random pair of integers is coprime about 61% of the time.
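We can also test the proposition itself by brute force, counting coprime pairs in a large square (a rough sketch, not from the text; the square is only a stand-in for “random integers”):

```python
from math import gcd, pi

N = 500  # consider all pairs (x, y) with 1 <= x, y <= N
coprime = sum(1 for x in range(1, N + 1)
                for y in range(1, N + 1) if gcd(x, y) == 1)

print(coprime / N ** 2)   # observed proportion of coprime (visible) pairs
print(6 / pi ** 2)        # theoretical probability, about 0.6079
```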
Subsection 24.6.2 Dirichlet for the absolute Moebius
Proposition 24.6.2
The Dirichlet series for \(|\mu(n)|\) is \(\zeta(s)/\zeta(2s)\).
Proof
With all the tools we've gained, the proof is nearly completely symbolic at this point!
- First, since \(|\mu|\) is multiplicative with \(|\mu(p)|=1\) and \(|\mu(p^e)|=0\) for \(e\ge 2\), we have the following from the definition of Moebius:
\[\sum_{n=1}^{\infty}\frac{|\mu(n)|}{n^s}=\prod_p\left(1+\frac{1}{p^s}\right)\, .\]
- Since \((1+x)=\frac{1-x^2}{1-x}\), we can rewrite the right-hand side as \[\prod_p\left(1+\frac{1}{p^s}\right)=\frac{\prod_p\left(1-\frac{1}{p^{2s}}\right)}{\prod_p\left(1-\frac{1}{p^s}\right)}\; .\]
- Now just make both parts reciprocals to get familiar friends: \[\frac{\prod_p\left(1-\frac{1}{p^{2s}}\right)}{\prod_p\left(1-\frac{1}{p^s}\right)}=\frac{\prod_p 1/(1-\frac{1}{p^s})}{\prod_p 1/(1-\frac{1}{p^{2s}})}\; ,\] which means the sum is \(\zeta(s)/\zeta(2s)\).
Let's try this out.
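Here is one way, a small Python sketch (an illustration with \(s=2\), not the text's original interactive cell): summing \(1/n^s\) over squarefree \(n\) should approach \(\zeta(2)/\zeta(4)=\frac{\pi^2/6}{\pi^4/90}=\frac{15}{\pi^2}\approx 1.520\).

```python
from math import pi

def is_squarefree(n):
    """True exactly when |mu(n)| = 1, i.e. no prime divides n twice."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        if n % d == 0:
            n //= d
        d += 1
    return True

s, N = 2, 20000
# Partial sum of the Dirichlet series for |mu(n)|: only squarefree n contribute.
partial = sum(1 / n ** s for n in range(1, N + 1) if is_squarefree(n))

print(partial)         # about 1.5198
print(15 / pi ** 2)    # zeta(2)/zeta(4) for s = 2
```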
Subsection 24.6.3 The average value of Euler phi
This is a really nice result to have close to the end. Thinking about the average value of \(\phi\) puts together so many themes from this text.
It's useful to try to graph it first.
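In place of a graph, here is a small Python sketch (not from the text) that tabulates the running average \(\frac{1}{n}\sum_{k=1}^n\phi(k)\) against \(\frac{3}{\pi^2}n\); the final column should creep toward \(1\).

```python
from math import gcd, pi

def phi(n):
    """Euler phi by brute force: count 1 <= k <= n with gcd(k, n) = 1."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

running_total = 0
for n in range(1, 2001):
    running_total += phi(n)
    if n % 500 == 0:
        average = running_total / n          # (1/n) * sum of phi(k) for k <= n
        target = 3 * n / pi ** 2             # the claimed average value
        print(n, average, target, average / target)
```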
Before formally proving this, let's look at a significant picture for the proof. This is similar to what we used for the average of \(\tau\) and \(\sigma\).
Notice that the text at each lattice point is the value of the horizontal coordinate, multiplied by the Moebius function of the vertical coordinate.
Proposition 24.6.3
The long-term average value of \(\phi\) is given by \[\lim_{n\to\infty}\frac{\frac{1}{n}\sum_{k=1}^n \phi(k)}{\frac{3}{\pi^2}n}=1\; .\]
Proof
We will crucially use these facts:
- \[\sum_{n=1}^\infty\frac{\mu(n)}{n^2}=\frac{1}{\zeta(2)}=\frac{6}{\pi^2}\]
- \[\phi=\mu\star N\Rightarrow \phi(n)=\sum_{d\mid n}\mu(d)\frac{n}{d}\]
Now we will look at the summation function for \(\phi\). We can once again think of it as summing things up in two different ways: summing along the hyperbolas \(xy=k\) (the divisor sum for each \(k\)), or summing for each divisor \(d\) along its column.
\[\sum_{k=1}^n\phi(k)=\sum_{k=1}^n\sum_{d\mid k}\mu(d)\frac{k}{d}\]
So \[\sum_{k=1}^n\sum_{d\mid k}\mu(d)\frac{k}{d}=\sum_{d=1}^n\mu(d)\left(\sum_{k=1}^{\lfloor \frac{n}{d}\rfloor}k\right)\] Now let's examine the terms of this.
- The inner sum has previously been seen to be \[\frac{1}{2}\left(\frac{n}{d}\right)^2+O\left(\frac{n}{d}\right)\; .\]
- If we plug that in, we get \[\sum_{k=1}^n\phi(k)=\frac{1}{2}n^2\sum_{d=1}^n\frac{\mu(d)}{d^2}+O\left(n\sum_{d=1}^n\frac{1}{d}\right)\; .\]
- But we just saw that the first sum converges to \(\frac{6}{\pi^2}\) in the long run; the error from stopping at \(d=n\) is \(O(1/n)\), because \(\left|\frac{\mu(d)}{d^2}\right|\le\frac{1}{d^2}\) and \(\int x^{-2}\,dx=-x^{-1}\). The second piece is certainly \(O(n\ln(n))\).
- Plugging that in, we get that \[\sum_{k=1}^n\phi(k) = \frac{1}{2}n^2\frac{6}{\pi^2}+O(\text{ stuff less than }n^2)\]
So dividing by \(n\) and taking the limit, we get the asymptotic \[\lim_{n\to\infty}\frac{\frac{1}{n}\sum_{k=1}^n\phi(k)}{\frac{3}{\pi^2}n}=\lim_{n\to\infty}\frac{\frac{1}{2}\frac{n^2}{n}\frac{6}{\pi^2}+\frac{1}{n}O(\text{stuff less than }n^2)}{\frac{3}{\pi^2}n}=\lim_{n\to\infty}\frac{\frac{3}{\pi^2}n+O(\text{ stuff less than }n)}{\frac{3}{\pi^2}n}=1\; .\]
Subsection 24.6.4 The prime harmonic series
The divergence of the series created from the reciprocals of prime numbers is not necessarily a particularly obvious fact. Certainly it diverges much, much slower than the harmonic series, which already diverges very slowly.
This proof doesn't actually use Dirichlet series, but has in common with them themes of convergence and estimation, so it is appropriate to end with it.
Proposition 24.6.4
The so-called “prime harmonic series,” where \(p_n\) is the \(n\)th prime, \[\qquad \sum_{n=1}^{\infty}\frac{1}{p_n}\] diverges.
Proof
As with many other series, we will prove it by looking at the “tails” beyond a point that keeps getting further out. In this case, we'll choose the “tail” beyond the first \(k\) primes.
- So look at \[\sum_{n>k}\frac{1}{p_n}\, .\]
- Any number of the form \[p_1 p_2 p_3\cdots p_k r+1=p_k\#\cdot r+1\] is not divisible by any of those first \(k\) primes.
- So \(p_k\#\cdot r+1\) may be factored as \[p_{n_1}p_{n_2}\cdots p_{n_{\ell}}\; ,\] where all \(n_i>k\).
- That means that somewhere in the \(\ell\)th power of the tail, \[\left(\sum_{n>k}\frac{1}{p_n}\right)^{\ell}\; ,\] we have the term \[\frac{1}{p_1 p_2 p_3\cdots p_k r+1}\; .\] (A quick numerical check of this factorization fact appears just after this list.)
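Here is the promised check, a minimal Python sketch (an illustration only; the helper names are ours): for a small \(k\) and several values of \(r\), factor \(p_k\#\cdot r+1\) and confirm that every prime factor exceeds \(p_k\).

```python
def first_primes(k):
    """Return the first k primes by trial division (fine for small k)."""
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def prime_factors(n):
    """Prime factorization by trial division, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

k = 3
primes = first_primes(k)              # [2, 3, 5]
primorial = 1
for p in primes:
    primorial *= p                    # p_k# = 30

for r in range(1, 6):
    m = primorial * r + 1             # 31, 61, 91, 121, 151
    fs = prime_factors(m)
    # every prime factor should exceed p_k = 5
    print(m, fs, all(f > primes[-1] for f in fs))
```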
Now assume that in fact the series converges; we will derive a contradiction.
First, for some \(k\), the “tail” above is less than \(\frac{1}{2}\), i.e. \(\sum_{n>k}\frac{1}{p_n}<\frac{1}{2}\).
Then let's look at the sum of the \(\ell\)th powers of the tail (this makes sense, since the series converges) \[\sum_{\ell=1}^{\infty}\left(\sum_{n>k}\frac{1}{p_n}\right)^{\ell}\leq \sum_{\ell=1}^{\infty}\frac{1}{2^{\ell}}=1\, .\] Somewhere within this sum of the \(\ell\)th powers, every single term of the form \(\frac{1}{p_1 p_2 p_3\cdots p_k r+1}\) will appear.
(This is since all of these terms are products of things like \(\frac{1}{p_{n_i}}\) with \(n_i>k\), though how many such primes (\(\ell\)) are involved is heavily dependent upon \(r\).)
So the series formed from these terms satisfies \[0<\sum_{r=1}^{\infty}\frac{1}{p_1 p_2 p_3\cdots p_k r+1}\leq \sum_{\ell=1}^{\infty}\left(\sum_{n>k}\frac{1}{p_n}\right)^{\ell}\, .\] This makes sense since we have added up all of them, no matter how many primes are involved – from the case where \(p_1 p_2 p_3\cdots p_k r+1\) is itself prime to the case where it is a big product \(p_{n_1}p_{n_2}\cdots p_{n_{\ell}}\).
This series should still converge (by comparison, for instance), but instead \[\sum_{r=1}^{\infty}\frac{1}{p_1 p_2 p_3\cdots p_k r+1}>\sum_{r=1}^{\infty}\frac{1}{p_1 p_2 p_3\cdots p_k r+p_1 p_2 p_3\cdots p_k}=\frac{1}{p_1 p_2 p_3\cdots p_k}\sum_{r=1}^{\infty}\frac{1}{r+1}\] and this final series certainly diverges, as a multiple of the tail of the harmonic series!
But under our assumption this series is bounded above by the finite sum of the powers of the tail, so it cannot diverge, yet it is bigger than a series which does diverge. That is a contradiction, so the prime harmonic series in fact diverges!
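As a final illustration (a rough Python sketch, not part of the proof), here is how slowly the partial sums actually grow; by Mertens' theorem the sum over primes up to \(x\) behaves like \(\ln\ln x\) plus a constant (about \(0.26\)).

```python
from math import log

def primes_up_to(bound):
    """Sieve of Eratosthenes."""
    sieve = [True] * (bound + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

for bound in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    partial = sum(1 / p for p in primes_up_to(bound))
    # Mertens: the partial sum is roughly log(log(bound)) plus a constant
    print(bound, partial, log(log(bound)))
```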