Engineers’ induction and the Borwein integral

You’re probably already familiar with the story about the mathematician who tries to explain proof by induction to two friends: a physicist and an engineer. After some deep thought, the engineer announces that he’s found a proof by induction that all odd numbers are prime: “Three is prime; five is prime; seven is prime; so all odd numbers are prime.” Trying to be tactful, the mathematician asks gently what happens if he tries to extend the sequence a bit further. The physicist chips in: “That’s easy. Three is prime; five is prime; seven is prime; nine isn’t prime; eleven is prime; thirteen is prime; so all odd numbers are prime to within experimental error…”

Although we laugh at “engineers’ induction” — extrapolating a general statement from the pattern you see in a finite number of cases — most of us have been guilty of it at some point. It can, of course, be a good way to generate conjectures which can then be proved or disproved properly, but it’s always dangerous to put too much faith in patterns.

A classic and well-known example is Moser’s “pizza-slicing” problem, which generates the sequence 1, 2, 4, 8, 16 — and then, unexpectedly, 31. Here, courtesy of a passing comment on Spiked Math, is a less well-known example.
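
(If you haven't met it before: joining n points in general position on a circle with all possible chords cuts the disc into at most

R(n) = \displaystyle\binom{n}{4} + \displaystyle\binom{n}{2} + 1

regions, which agrees with 2^{n-1} for n = 1, \dots, 5 and then gives R(6) = 15 + 15 + 1 = 31.)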

Define the function \mathrm{sinc}(x) = \dfrac{\sin(x)}{x} \ \forall x \neq 0, with \mathrm{sinc}(0)=1 to patch up the removable singularity. Now define the sequence of functions

f_N(x) = \displaystyle\prod_{k=0}^N \mathrm{sinc}\left(\dfrac{x}{2k+1}\right) \qquad \mbox{for} \qquad N \in \mathbb{N}.

For each value of N, we can evaluate the integral

I_N = \displaystyle\int_0^{\infty} f_N(x)\mathrm{d}x,

sometimes called the Borwein integral. If you want to do the evaluation by hand, you can convert the product of sine terms to a sum using standard trig formulae, then integrate by parts until you’re left with something involving the sine integral \mathrm{Si}(x) = \displaystyle\int_0^{x} \dfrac{\sin(t)}{t}\mathrm{d}t. This isn’t an elementary function, but it’s well tabulated and understood: in particular, \mathrm{Si}(x) \to \dfrac{\pi}{2} as x \to \infty. Alternatively, use a few lines of Maple code (the example below calculates I_5):

# Build the integrand f_N(x) = sinc(x)*sinc(x/3)*...*sinc(x/(2N+1)) as an expression in x
makeintegrand := proc(N) local k;
  mul(sin(x/(2*k+1))/(x/(2*k+1)), k = 0 .. N)
end proc:
int(makeintegrand(5), x = 0 .. infinity);  # exact value of I_5
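
If you’d like a quick floating-point sanity check alongside the exact answer, the same integrand can be handed to Maple’s numerical integrator (just a sketch; the default precision is fine for small N):

evalf(Int(makeintegrand(5), x = 0 .. infinity));  # numerical estimate of I_5
evalf(Pi/2);                                      # 1.570796327 at the default precision: the two should match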

You’ll rapidly discover the following pattern.

I_0 = \displaystyle\int_0^{\infty} \dfrac{\sin(x)}{x}\mathrm{d}x = \dfrac{\pi}{2}.

I_1 = \displaystyle\int_0^{\infty} \dfrac{\sin(x)}{x}\dfrac{\sin(x/3)}{x/3}\mathrm{d}x = \dfrac{\pi}{2}.

and so on up to

I_6 = \displaystyle\int_0^{\infty} \dfrac{\sin(x)}{x}\dfrac{\sin(x/3)}{x/3}\cdots\dfrac{\sin(x/13)}{x/13}\mathrm{d}x = \dfrac{\pi}{2}.

You can guess what’s coming now: call the Maple routine above with makeintegrand(7) and you’ll find that

I_7 = \displaystyle\int_0^{\infty} \dfrac{\sin(x)}{x}\dfrac{\sin(x/3)}{x/3}\cdots\dfrac{\sin(x/13)}{x/13}\dfrac{\sin(x/15)}{x/15}\mathrm{d}x = \dfrac{467807924713440738696537864469\pi}{935615849440640907310521750000}.

Admittedly this isn’t a lot different from \pi/2… but we’re looking at exact results, not approximations, so any deviation is rather unexpected! Worse is to come: a little bit of experimentation with series will show you that

I_N = \dfrac{1}{2} + \displaystyle\sum_{n=1}^{\infty} \displaystyle\prod_{k=0}^N\mathrm{sinc}\left(\dfrac{n}{2k+1}\right)

for every value of N up to, er, N = 40248. (OK: I lied about the “little” bit.) Engineers’ induction would lead us pretty thoroughly up the garden path with this one…
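
If you’d like to play with the sum identity yourself without grinding through the series work, here is a rough Maple sketch for N = 4 (the cut-off of 2000 terms is arbitrary; the summands decay like 1/n^5, so the discarded tail is tiny):

fN := makeintegrand(4):
0.5 + add(evalf(eval(fN, x = n)), n = 1 .. 2000);  # truncated right-hand side of the identity
evalf(Int(fN, x = 0 .. infinity));                 # the integral I_4: both should come out at Pi/2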

Hopefully by this point you want to know what “goes wrong” with the nice identities that we see for small N. Well, I’m not going to tell you, because there’s a very nice paper by Baillie, Borwein & Borwein, called Surprising Sinc Sums and Integrals, which is available online and which is really quite accessible if you’ve done a bit of Fourier analysis and are prepared to swallow the occasional technicality. The key fact about the sinc function is that its Fourier transform is (up to a normalising constant) the “boxcar” function G(u) that equals 1 for -1 < u < 1 and zero otherwise. I’ll leave you to follow how this feeds through to the startling results above!
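
For reference, here is that transform fact written out in one common (unnormalised) convention; other conventions shuffle the factor of \pi around, but the boxcar shape is what matters:

\displaystyle\int_{-\infty}^{\infty} \mathrm{sinc}(x)\, e^{-iux}\, \mathrm{d}x = \pi G(u), \qquad \mbox{that is,} \quad \pi \ \mbox{for} \ |u| < 1 \ \mbox{and} \ 0 \ \mbox{for} \ |u| > 1.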

(DP)
