
May 2004 Archives

May 1, 2004

The Riemann Hypothesis

Prime Obsession
John Derbyshire
Joseph Henry Press, $28 (cloth)

The Riemann Hypothesis
Karl Sabbagh
Farrar, Straus & Giroux, $25 (cloth)

The Music of the Primes
Marcus du Sautoy
HarperCollins, $25 (cloth)

It is remarkable that three publishers have decided to take a chance on the Riemann Hypothesis, the most famous problem in number theory. Some of the credit belongs to the Clay Mathematics Institute in Cambridge, Massachusetts, which in May 2000 announced a million-dollar prize for the solution of any of seven enduring problems. One of them, the Poincaré Conjecture, has apparently already been resolved by a Russian geometer. The oldest problem on the list, which is still unresolved, is the Riemann Hypothesis.

The Riemann Hypothesis concerns the prime numbers, which have been recognized as the “atoms of arithmetic” since ancient times but have remained much more elusive than that metaphor would imply. Unlike the chemical elements, which when ordered by atomic number exhibit a regular pattern of properties that Mendeleev revealed in his Periodic Table, the primes are scattered with no apparent pattern among the whole numbers. The Riemann Hypothesis is, roughly speaking, a 150-year-old guess about how the primes are spaced along the number line. In the past 30 years computers have been able to give very strong evidence for this guess, and hundreds of papers have been written assuming its validity. Over the decades, the problem has been linked to cryptography and quantum physics and itself represents a deep connection between number theory and other areas of mathematics. It is one of those rare problems that is both intelligible to the uninitiated and of deep mathematical interest. But despite the efforts of generations of the world’s best mathematicians, it has yet to be proved or disproved. A solution may be discovered next week, may be a hundred years away, or may not exist at all.

* * *

A prime number is a whole number that cannot be written as the product of two smaller whole numbers; it can only be divided evenly by itself and 1. In other words, the primes—2, 3, 5, 7, 11, 13, 17, and so on—are all those numbers that can’t be broken down into smaller factors. The earliest evidence of human knowledge of prime numbers may be an 8,500-year-old African bone with a column containing groups of 11, 13, 17, and 19 notches in a row. The ancient Chinese knew about them. The Greeks understood enough about the primes to consider them the building blocks of all numbers, but they were unable to devise a formula to predict the next prime.

And there is always a next one. One of the first great proofs, recorded in Euclid’s Elements, is a simple argument showing that the primes go on forever. Take any finite set of primes: say, 2, 3, 5, and 7. Multiply them together and add 1—in this case you get 211. You have just produced a number that can’t be evenly divided by 2, 3, 5, or 7, because the remainder will always be 1. If this number itself is prime—as 211 is—you have produced a prime that is not in your set. If the number is not prime, then it must be expressible as a product of smaller whole numbers, and we may continue factoring until it is written as a product of primes. None of these factors can be 2, 3, 5, or 7, so you have still produced primes not in your original set. Either way, your list must be incomplete. Since this argument works for any finite list of primes, the number of primes cannot be finite. This is a fantastic argument: without having any idea how to come up with the next prime, Euclid was able to prove that the supply is inexhaustible.
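
For readers who like to tinker, Euclid's construction fits in a few lines of Python; the factoring helper below is my own illustration, not anything from the books under review.

    # Euclid's construction: multiply a finite list of primes, add 1, and
    # factor the result; every prime factor that turns up is a new prime.
    def prime_factors(n):
        """Return the prime factors of n by simple trial division."""
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    known = [2, 3, 5, 7]
    candidate = 2 * 3 * 5 * 7 + 1            # 211
    print(prime_factors(candidate))          # [211], a prime not in the list
    assert not set(prime_factors(candidate)) & set(known)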

For centuries little more than this was known. Eratosthenes of Cyrene is credited with discovering an algorithm for listing all prime numbers, but no one was able to find a formula for the nth prime. This puzzled mathematicians, who expected such fundamental objects to have some structure. Oddly, many simple questions remain unanswered to this day. For instance, can every even number be expressed as the sum of two primes? Are there infinitely many pairs of “twin primes,” such as 17 and 19, or 41 and 43? As the great Cambridge mathematician G. H. Hardy complained, when it comes to primes, every fool can ask questions that even the wisest man cannot answer.
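
Eratosthenes's procedure, now known as the Sieve of Eratosthenes, is simple enough to sketch in a few lines of Python (a modern convenience, of course, not anything the Greeks had).

    # The sieve: cross out the multiples of each prime in turn;
    # whatever survives the crossing-out is prime.
    def sieve(limit):
        """Return every prime up to and including limit."""
        is_prime = [True] * (limit + 1)
        is_prime[0] = is_prime[1] = False
        for n in range(2, int(limit ** 0.5) + 1):
            if is_prime[n]:
                for multiple in range(n * n, limit + 1, n):
                    is_prime[multiple] = False
        return [n for n, flag in enumerate(is_prime) if flag]

    print(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]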

Johann Carl Friedrich Gauss, whom many consider the greatest mathematician of all time, was the one who “uncovered the coin that Nature had tossed to choose the primes,” as Marcus du Sautoy puts it in The Music of the Primes. Struck by their unpredictability as a teenager in 1792, Gauss decided to count up the primes in the first 10, 100, 1,000, and 10,000 numbers to see if a pattern emerged. He noticed that the primes got thinner and thinner the further you counted: a full 40 percent of the first ten were prime, but only 25 percent of the first 100 and 16.8 percent of the first thousand were. He chanced on a surprisingly effective formula for estimating these proportions: the number of primes smaller than some number n was approximately n/log_e(n). (The logarithm is the inverse of the exponential function: if x = b^y, then log_b(x) = y. Logarithms with base e—around 2.718281—have special properties, and are called natural logarithms.)
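
Gauss did his tallying by hand; the same counts take a few lines of Python today (an illustration of the arithmetic, not of his method).

    # For each n: the prime count, the percentage that are prime,
    # and Gauss's estimate n / log_e(n).
    from math import log

    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    for n in (10, 100, 1_000, 10_000):
        count = sum(1 for k in range(2, n + 1) if is_prime(k))
        print(n, count, round(100 * count / n, 1), round(n / log(n), 1))
    # 10 4 40.0 4.3
    # 100 25 25.0 21.7
    # 1000 168 16.8 144.8
    # 10000 1229 12.3 1085.7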

Why the natural logarithm should be implicated in the distribution of the primes remained a mystery to him, and without a proof he considered the discovery worthless and kept it hidden in his private notebooks. Nevertheless, Gauss kept fiddling with the formula in hopes of stumbling onto something. His formula becomes less and less accurate as the numbers get bigger, but spurred on by the parallel work of Legendre, Gauss refined his estimate to a function called the logarithmic integral. (The logarithmic integral, or li(x), is defined as the integral from 0 to x of dt/log_e(t).) The statement that this function gets asymptotically closer to the actual number of primes as the numbers get bigger is known as the Prime Number Theorem. When it was finally proved independently by Hadamard and de la Vallée Poussin in 1896, it carried a certain air of inevitability. While this was a major advance in the understanding of prime numbers, the formula could not be converted into an accurate measure of how the primes were spaced out.
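
The improvement is easy to see numerically. The sketch below assumes the third-party mpmath library is installed for li(x), since the integral has to be taken as a principal value through the singularity at t = 1; the prime counts are the well-known values of the counting function.

    # Compare the true prime counts with Gauss's two estimates:
    # n / log_e(n) and the logarithmic integral li(n).
    from math import log
    from mpmath import li

    true_counts = {100: 25, 10_000: 1_229, 1_000_000: 78_498}
    for n, count in true_counts.items():
        print(n, count, round(n / log(n)), round(float(li(n))))
    # 100 25 22 30
    # 10000 1229 1086 1246
    # 1000000 78498 72382 78628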

The Riemann Hypothesis provides the error term for this estimate. To appreciate this, a digression on music is necessary, courtesy of du Sautoy. The Pythagoreans observed that hitting a full urn with a hammer produced one note, a half-full urn sounded a note an octave up, a one-third-full urn sounded a note a fifth above that, and so on up the harmonic spectrum. This observation led them to the harmonic series, defined later as the sum of 1 + 1/2 + 1/3 + 1/4 + 1/5 . . . , which increases without limit.

By the time of Gauss, this series had been generalized into a function, called the zeta function, by raising each denominator to the power of a variable (traditionally written s):
(A) ζ(s) = (1/1^s) + (1/2^s) + (1/3^s) + (1/4^s) + (1/5^s) + . . .

If s=0, you have 1 + 1 + 1 + 1 + . . ., which shoots up to infinity if you try to tally it. If s=1, you have the plain old harmonic series, which also increases to infinity but does so more slowly. But if s=2 you have the infinite sum of 1 + 1/4 + 1/9 + 1/16 + . . . , which was known by Euler’s time to approach a finite quantity, somewhere around 8/5. Through what du Sautoy calls “some pretty reckless analysis,” Euler managed to figure out that this number was exactly π^2/6. This was an astounding result: What was the geometric ratio π doing in an innocent arithmetic pattern?
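
The convergence is slow but easy to watch; a few lines of Python (my own illustration) show the partial sums creeping toward Euler's value.

    # Partial sums of 1 + 1/4 + 1/9 + ... approach pi^2/6 = 1.64493...
    for terms in (10, 1_000, 100_000):
        partial = sum(1 / k ** 2 for k in range(1, terms + 1))
        print(terms, round(partial, 5))
    # 10 1.54977
    # 1000 1.64393
    # 100000 1.64492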

Astonishingly, he also showed that ζ(s) is equal to an infinite product taken over the primes:
(B) ζ(s) = 1/(1-2^-s) × 1/(1-3^-s) × 1/(1-5^-s) × 1/(1-7^-s) × 1/(1-11^-s) × . . .
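
Both sides of (B) can be checked numerically. Here is a rough Python check of my own, using only the first fifteen primes, against the exact value π^2/6 from the case s = 2.

    # Truncate the sum (A) and the product (B) at s = 2 and compare
    # them with Euler's exact value pi^2/6 = 1.644934...
    from math import pi

    s = 2
    partial_sum = sum(1 / n ** s for n in range(1, 100_001))
    partial_product = 1.0
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47):
        partial_product *= 1 / (1 - p ** -s)

    print(round(pi ** 2 / 6, 4), round(partial_sum, 4), round(partial_product, 4))
    # 1.6449 1.6449 1.6386 -- the product lags only because it stops at the prime 47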

In these two expressions for the zeta function, we have uncovered an unlikely link between an infinite sum over the counting numbers (A) and an infinite product over the primes (B). This shocking correspondence, which John Derbyshire calls the “Golden Key,” turns out to be essential in approximating the error in Gauss’s estimate for the distribution of the primes. It is no wonder, then, that a high premium has been placed on taming the zeta function. The man who held out a tantalizing hope of describing this function completely was Bernhard Riemann, who came to the University of Göttingen, where Gauss was a well-established professor of astronomy, to study theology at the age of 20. He soon switched to mathematics.

In November 1859, Riemann published a paper in the monthly notices of the Berlin Academy, his only contribution to the study of prime numbers. It was a series of remarks with many logical gaps and unproved speculations—it was his style to rely as much on intuitive leaps as on incremental reasoning. In an offhand remark several pages in, he takes the innovative step of widening the domain of the zeta function to include the complex numbers, which have both a real and an imaginary part. (An imaginary number is the square root of a negative number: for instance, i is the square root of –1.) He then hazards a legendary guess that has come to be known as the Riemann Hypothesis: “All non-trivial zeros of the zeta function have real part one-half.” This is Derbyshire’s wording; for a satisfying account of what it means, consult his book. For now, simply understand that, if you know all the places where the value of the zeta function equals zero, then you can describe it completely. Setting aside the so-called trivial zeros—it’s easy to show that the zeta function equals zero at all the negative even integers—the values of s for which ζ(s) = 0 must all have both a real part and an imaginary part: the one with the smallest positive imaginary part is roughly s = 1/2 + 14.134725i. If, as Riemann proposes, the real part is always 1/2, then the non-trivial zeros all lie on a straight line. And if they all lie on a straight line—often called the critical line—we can predict the function’s behavior everywhere. Because the zeros of the zeta function control the error term for Gauss’s estimate of prime distribution, such a complete description would give us the most complete possible account of how the primes behave on the number line. Riemann probably recognized that the stakes were this high. And yet before moving on, he calmly noted, “One would of course like to have rigorous proof of this, but I have put aside the search for such proof after some fleeting vain attempts because it is not necessary for the immediate objective of my investigation.”
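
Those zeros can be inspected directly with modern software. The sketch below assumes the third-party mpmath library, whose zetazero routine returns the non-trivial zeros in order up the critical line.

    # Evaluate zeta at and near its first non-trivial zero,
    # roughly s = 1/2 + 14.134725i.
    from mpmath import mp, mpc, zeta, zetazero

    mp.dps = 15                        # work with 15 significant digits

    first = zetazero(1)                # about 0.5 + 14.1347251417j
    print(first)
    print(zeta(first))                 # essentially zero, up to rounding error
    print(zeta(mpc(0.5, 14.0)))        # just below the zero: clearly nonzero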

Riemann’s unproven insight helped link the burgeoning field of number theory to other branches of mathematics, especially geometry and analysis. But the interest in it is not entirely abstract. Today the primes are implicated in the messy world of electronic security. In the 1970s, a technique called public-key encryption was developed to allow people to exchange data securely without agreeing on a code in advance. This revolutionary advance in cryptography, now commonly applied to secure data passing through the Internet, relies on the fact that factoring very large numbers would take millennia by current methods. While a solution to the Riemann Hypothesis would not mean the end of public-key encryption, there is a real threat that related advances in our understanding of the primes could spell catastrophe for electronic commerce and national security.
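
The idea behind public-key encryption can be seen in miniature. The following toy sketch (not any production system, and far too small to be secure) builds an RSA-style key from two primes; anyone who could factor the public modulus could read the traffic.

    # Toy RSA: easy to encrypt with the public key (e, n); easy to decrypt
    # only if you know the primes p and q behind n.
    # Requires Python 3.8+ for the three-argument modular inverse.
    p, q = 61, 53                      # tiny primes; real keys use primes hundreds of digits long
    n = p * q                          # public modulus: 3233
    phi = (p - 1) * (q - 1)            # 3120, computable only by factoring n
    e = 17                             # public exponent, chosen coprime to phi
    d = pow(e, -1, phi)                # private exponent: 2753

    message = 42
    ciphertext = pow(message, e, n)    # 2557
    recovered = pow(ciphertext, d, n)  # 42
    print(ciphertext, recovered)
    assert recovered == message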

Setting aside its applications, the study of primes is among the oldest and most active fields of mathematical inquiry. The Music of the Primes, written by the young Oxford professor Marcus du Sautoy, makes that history accessible to everyone. John Derbyshire, a conservative columnist and systems analyst, gives a more focused view of the Riemann Hypothesis in Prime Obsession, which, if patiently followed, will give the uninitiated a serious glimpse of the problem. Karl Sabbagh’s The Riemann Hypothesis seems rushed, relying chiefly on interviews with number theorists, and is not in the same league as the other two books.

While most researchers are convinced that Riemann’s intuition will eventually pan out, many doubt it on principle, and some even suspect that he was wrong. The only sure conclusion is that expressed by the veteran number theorist Andrew Odlyzko—who has calculated some 30 billion zeta zeros over the last 25 years—when Derbyshire prods him to give some odds on the Riemann Hypothesis: “It’s either true or it isn’t.”

Jascha Hoffman lives in Brooklyn and writes regularly for the Ideas section of The Boston Globe.

Originally published in the April/May 2004 issue of Boston Review.

May 18, 2004

Bacteria-Eating Viruses

As bacteria have grown increasingly resistant to standard antibiotics, scientists have begun a desperate search for alternatives to the drugs. In one promising approach, they are trying to harness viruses that naturally evolved to prey on harmful bacteria and to use them as weapons for staving off intruders.

That may sound like a new idea, but it is a revival of an ancient remedy. Therapies that use bacteria-eating viruses -- bacteriophages, or just phage for short -- have been part of traditional cures for centuries in India, and they have been practiced in the former Soviet Union for decades.

They have yet to be properly tested on people, and some researchers are skeptical about unleashing a self-replicating virus into a patient's bloodstream.

As drug companies abandon antibiotics for more reliable products, a handful of start-up companies is developing precision-engineered viruses for medical use. One of the youngest, GangaGen, founded in 2000 and based in San Francisco and Bangalore, uses genetic engineering to control the healing viruses.

GangaGen's name recalls the moment when Western science discovered phage therapy: in 1896, a British chemist noticed that water from the Ganges River in India, traditionally known for its curative properties, was lethal to the cholera bacterium. After phage therapy gained attention in Sinclair Lewis's “Arrowsmith” in 1925, phage cocktails were marketed by American drug companies for a brief period before being overshadowed by powerful antibiotics developed in World War II.

Phage treatments in the form of oral or topical medications are now poised for a comeback. But they are riskier than drugs. For good results, many strains of virus are often needed; the viruses can multiply quickly in a host; and they may in some cases produce toxic chemicals.

But GangaGen, which will conduct human trials in hospitals in India this year for a virus that attacks staph bacteria, is using genetic engineering to reduce the risks.

Unlike broad-spectrum antibiotics, phages evolved to attack a narrow range of bacteria, so they have traditionally been applied in combinations of three or more. It would be safer to find a single virus that did the job. To that end, GangaGen is pursuing ways to increase the number of a virus's potential hosts by inserting new genes into its tail.

There is, however, a far more serious problem. The viruses, although they may save lives, can also kill people, “not just because of the fact that they might be contaminated with something, but also because the viruses themselves can carry toxin genes,” said Dr. Carl R. Merril, a senior investigator at the National Institutes of Health.

A phage uses its host to make hundreds of copies of itself that then burst out to infect other cells, growing exponentially in dosage. If a phage is allowed to replicate unimpeded, it may acquire a toxin-coding gene from its host, a potentially lethal turn of events.

To eliminate that problem, Dr. Ryland F. Young, a professor of biochemistry at Texas A&M who is also on the GangaGen advisory board, has engineered the first virus that does not burst open its host. [Correction: see below.] This phage kills bacteria like any other, the founder of GangaGen, Dr. Janakiraman Ramachandran, said, but it does not risk unchecked replication or the buildup of toxins.

Some experts said research to modify the viruses was not necessary for effective phage treatments.

“Why would you want to spend a lot of time and money fine-tuning phage that already may be out there?” said Dr. Alexander Sulakvelidze, head of research at Intralytix in Baltimore. The company was an early leader in phage therapy and is developing phage-based food additives to kill salmonella and listeria. “The beauty is that they're so ubiquitous and natural. Mother Nature has the best genetic engineering lab.”

Although phage therapy has long-term potential as a treatment for illnesses like diarrhea and tuberculosis, Dr. Ramachandran said, GangaGen will focus on medical applications that do not require inserting viruses into the bloodstream.

The companies developing phage products seek money for research and trials. If officials decide that the shortage of effective antibiotics has reached a crisis level or that phage therapy should be integral to biodefense, Dr. Merril said, anything is possible.

—Jascha Hoffman

CORRECTION: An article in Science Times on May 18 about bacteriophage therapy, which harnesses viruses to fight bacteria, misstated the role of Dr. Ryland F. Young in researching the treatment. Dr. Young studied the way the viruses replicate and burst out of the host cell to invade other cells. His research group was also the first to delete genes in a bacteriophage, preventing the virus from bursting out of the cell after replication. But Dr. Young was not the first researcher to manipulate a phage genetically so that it killed bacteria without bursting out and infecting other cells. Scientists isolated mutant viruses with that property more than four decades ago.
