I’m familiar with the taxicab number story about Hardy and Ramanujan, but I’m not confident that I would have recalled the number.

In the event, I didn’t recognize it in the multiplication algorithm story, so thanks for pointing that out!

Skimming the paper, I saw that the authors did a lower-bound calculation for the required FFT dimension, and determined that any dimension greater than 1728 would suffice, so they chose 1729.

The paper ends with a brief discussion of how a number of optimizations could adapt their multiplication to 9-dimensional FFTs, though at the cost of complicating the algorithm rather considerably.

I don’t see in their paper a calculation of the minimum integer size at which their technique runs faster than the Schönhage–Strassen algorithm with its 1D FFT, so I don’t know what impact (if any) the proposed optimizations would have on that bound.

Fascinating occurrence of the Ramanujan number there!

At a high level, the security of the RSA algorithm depends on the idea that factoring large numbers is a hard problem but computing those numbers from their factors is easy.

To use RSA, you select a pair of large prime numbers and compute a public key from them which anyone can use to encrypt a message to you. You also compute a second key from those primes which you can use to decrypt those messages.

As long as it’s very hard for someone else to reverse engineer your secret primes from your public key, then RSA is secure. However if that can be done efficiently, then any attacker is able to use your public key to work out your private key and read messages sent to you.
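As a toy illustration of the scheme described above (with absurdly small primes for readability; real RSA uses primes of a thousand or more bits, plus padding and other safeguards this sketch omits):

```python
# Toy RSA key setup and round trip. The specific numbers are
# illustrative only; real keys are vastly larger.
p, q = 61, 53              # the secret primes
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # Euler's totient of n (3120)
e = 17                     # public exponent, chosen coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e

msg = 123
cipher = pow(msg, e, n)    # anyone can encrypt using the public (n, e)
plain = pow(cipher, d, n)  # only the holder of d can decrypt
assert plain == msg
```

An attacker who could factor n back into p and q could recompute phi and hence d, which is exactly the reversal the paragraph above warns about.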

I finally got around to looking at the GitHub page by Léo Ducas for his “SchnorrGate” project:

https://github.com/lducas/SchnorrGate

Ducas is an academic mathematician working in the Netherlands. Lattices are his specialty, and he happens to be currently giving a course on lattices and cryptography.

What I learned:

• Not only has Schnorr been making polynomial-time claims for his factoring strategy since 2009 … he described this strategy in 1991, building on a 1975 (!) factoring paper by Brillhart and Morrison.

In other words, this idea ain’t new, and mathematicians have looked at it quite seriously.

• None of Schnorr’s papers claiming polynomial-time factoring has yet passed peer review.

• The essence of Schnorr’s idea (if I understand it) is that solving a specific lattice problem (SVP) provides an *efficient* way to search for the arithmetic relations which must be accumulated in order to factor the given integer.

Ducas was able (using a specialized software package) to quickly code Schnorr’s relation search algorithm, and test it experimentally. His experiments showed that the search for relations is much less fruitful than Schnorr calculates.
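For readers unfamiliar with the relation-combining framework that Brillhart and Morrison introduced (and that Schnorr’s strategy feeds into), here is a toy Python illustration using a classic textbook example. The lattice-based relation *search* Schnorr proposes is not shown; this only sketches what the accumulated relations are *for*:

```python
from math import gcd, isqrt

N = 1649
# Two "relations": squares mod N whose residues factor over small primes.
#   41^2 = 1681 ≡ 32  = 2^5       (mod N)
#   43^2 = 1849 ≡ 200 = 2^3 * 5^2 (mod N)
# Multiplying them makes the right-hand side a perfect square,
# giving a congruence of squares x^2 ≡ y^2 (mod N).
x = (41 * 43) % N          # left side:  (41*43)^2 ≡ 32*200 (mod N)
y = isqrt(32 * 200)        # right side: 6400 = 80^2
assert (x * x - y * y) % N == 0
f = gcd(x - y, N)          # with luck, a nontrivial factor of N
print(f, N // f)           # here: 17 and 97
```

The hard part in practice is finding enough such relations; Schnorr’s claim is that an SVP solver makes that search efficient, and Ducas’ experiments (next) suggest it doesn’t.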

• On the basis of those experiments, Ducas concludes that in order to successfully factor, the size of the lattice problem would have to be scaled up by a really large factor to yield a sufficient quantity of relations.

With sufficiently big lattices for this to succeed, the strategy would almost certainly be slower than GNFS.

================

As an algorithmic aside not relevant to factoring — or even cryptography! — I just learned about work from 2 years ago by Harvey and van der Hoeven proposing a novel multiplication algorithm.

Math-geek readers will be aware that the fastest practical method for multiplying Really Huge Numbers uses the Fast Fourier Transform (FFT). As far as I’m aware, numbers that big are never used in crypto, but this FFT-based multiplication is at the heart of (for example) the Great Internet Mersenne Prime Search (GIMPS).

Engineers are used to thinking of Fourier Transforms in one dimension, though 2D or even 3D FFT is sometimes useful. The multiplication technique used in number theory projects like GIMPS uses one-dimensional FFT.
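As a minimal sketch of how 1D FFT multiplication works in principle: treat the digits as polynomial coefficients, multiply pointwise in the frequency domain, then propagate carries. (This floating-point toy is for illustration only; production code like GIMPS uses far more careful arithmetic.)

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; invert=True computes the unnormalized inverse."""
    n = len(a)
    if n == 1:
        return a
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    ang = (2 if invert else -2) * cmath.pi / n
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(1j * ang * k)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(x, y):
    """Multiply nonnegative integers via FFT convolution of their digits."""
    a = [int(d) for d in str(x)][::-1]   # least-significant digit first
    b = [int(d) for d in str(y)][::-1]
    n = 1
    while n < len(a) + len(b):           # pad to a power of two
        n <<= 1
    fa = fft([complex(v) for v in a] + [0] * (n - len(a)))
    fb = fft([complex(v) for v in b] + [0] * (n - len(b)))
    conv = fft([fa[i] * fb[i] for i in range(n)], invert=True)
    coeffs = [round(v.real / n) for v in conv]   # normalize and de-noise
    carry, digits = 0, []
    for c in coeffs:                     # propagate carries back to base 10
        carry += c
        digits.append(carry % 10)
        carry //= 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    while len(digits) > 1 and digits[-1] == 0:
        digits.pop()
    return int(''.join(map(str, digits[::-1])))

print(multiply(12345, 6789))  # same as 12345 * 6789
```

The point of the Harvey–van der Hoeven result is entirely asymptotic; a sketch like this (or Schönhage–Strassen) is what real software actually resembles.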

A conjectured lower bound for the cost of multiplication is O(N log(N)), which the Harvey and van der Hoeven algorithm attains … but their multiplication requires computing FFTs in 1,729 dimensions!

This algorithm is only fastest for numbers too large to be even *represented* in the physical universe … so it’s probably not showing up in a software library any time soon.

https://www.schneier.com/blog/archives/2021/02/chinese-supply-chain-attack-on-computer-systems.html

Seems like the notion, at least, would be interesting to explore. Not being anything like an expert on cryptography or security, I wonder whether there was some easily overlooked detail in some implementation that might cause a so-called catastrophic failure in the security of a cryptosystem?

If anything, what I’ve learned after all these years reading this blog is that the correct implementation of a security system is important both to understand and to maintain, so that no flaws creep into its past, present, or future implementations.

https://www.reddit.com/r/crypto/comments/lwengh

re: * The value of trying *

Thank you MarkH for the wonderful synopsis.

I remember documentaries about FLT and how it was “solved”; the ultimate analysis was that it was indeed solved, but not in the way Fermat would have solved it.

There can be many roads to discovery and not all of them successful. Even a failed attempt has value. And some failed attempts turn out not to be failures after all.

The Battle of Austerlitz

Napoleon addressed a wounded Russian soldier: “For having been defeated, one does not cease to be among the brave.”

I have a different perspective on the attitudes of academic mathematicians toward (a) a mathematician’s choice of problem to work on, and (b) publication of unexpected results.

My understanding is that prior to the publication of “Primes is in P,” many workers in the specialized field of complexity theory had formed the opinion that no deterministic polynomial-time algorithm could be found for primality testing.

That being so, colleagues might have looked skeptically at Agrawal for working on the problem of finding such an algorithm (he was already in his 30s). His two young collaborators, however, were on the point of getting their first university degrees, so they weren’t taking any real risk to their reputations.

The situation with the Fermat conjecture (Fermat’s “Last Theorem” or FLT) was quite different. It suffered from a two-part stigma: (a) it was fiendishly hard, having resisted centuries of attempts to crack it; and (b) it was trivial, in the sense that no important mathematical question depended on whether FLT was valid.

For a mathematician to beat his head against this wall, when success could bring newspaper coverage but not mathematical fruits, was considered Bad Form. Wiles kept his desire to prove FLT secret, and didn’t do any serious work on it until …

Work by academic mathematicians Gerhard Frey and Ken Ribet showed that if a certain Very Important Conjecture could be proved, then FLT must be true: proof of Fermat would be a corollary.

That Very Important Conjecture was then known as the Taniyama-Shimura (or sometimes Taniyama-Weil) conjecture; now that it’s proven, it’s called the Modularity Theorem.

When Wiles was working on the problem, dozens (if not hundreds) of papers had been published in journals with statements similar in form to “if Taniyama-Shimura is true, then we can prove the following result …”

In other words, proof of that conjecture was going to be extremely consequential.

For Wiles, the Frey/Ribet breakthrough made all the difference: it transformed the difficulty of FLT from “fiendish” to “nobody has an idea of how to do that” [1] and its significance from “trivial” to “one of the most important unsolved problems in mathematics.”

Nobody professional in math or computer science has been ridiculed — as far as I’m aware — for serious work on factoring algorithms. The problem is both famous, and of obvious practical importance. Making progress on factoring might not raise your standing as a mathematician, but it gives you star status in the field of cryptography.

Nobody has proved any meaningful lower bounds on the cost of factoring. GNFS is the best yet discovered for RSA moduli — and no better alternative has been found for about 25 years.

However, there is simply no theoretical basis for supposing that GNFS is the best possible. Complexity theorists would be excited to see a big improvement in factoring, but such a breakthrough wouldn’t damage anybody’s theory or reputation.

There’s just no *a priori* reason for mathematicians to suppose that quicker factoring can’t be done. If you can do it … just show them the receipts! I predict that if your work is serious, they will take it seriously.

================

So I shift from choice of problem, to reception of the work.

A. In the case of “PRIMES is in P,” many mathematicians were able to follow the arguments and proofs in the authors’ paper. It had the added advantage that it wasn’t difficult to code a program running their algorithm.

Although their result was a big surprise, it was (if I recall those days correctly) accepted as valid within a matter of weeks.

B. Wiles worked on Taniyama-Shimura in total secrecy (until the very end, when he felt the need for some checking by a very select few). It was a very respectable problem, because it was so important; his obsessive secrecy was a function of his personal psychology, rather than reputational risk.

His first paper (containing a defect that would take him months to solve) was on the order of 100 pages in length, and its mathematics so specialized that if everyone on Earth who could understand it without lengthy study had gathered together, they could have ridden together in an elevator car.

Although Wiles’ announcement of a proof was a shock to his profession, his work was taken very seriously, and the necessary experts promptly went to work analyzing his paper statement by statement. I’m not aware of any mathematicians rejecting his claims out-of-hand.

C. A rather sad example is the claimed proof by Louis de Branges of Riemann’s zeta function conjecture (known far and wide as the Riemann Hypothesis or RH).

A poll of mathematicians would probably rank RH as the single most important unsolved problem in their discipline, so working on it is a respectable choice; but it has resisted proof (or disproof) for more than 150 years, so unless one is manifestly working on an approach that is both plausible and not tried before, it’s very likely a waste of time.

de Branges’ paper is similar in length to Wiles’, and perhaps even more difficult to analyze. The people most qualified to evaluate it might need to devote many months of their working life to its examination.

Before making such an investment, they would probably need to see the “germ of an idea” — what new concept or approach the author has brought to bear, which can plausibly succeed where all others before have failed.

de Branges has not convinced his colleagues that this “germ of an idea” is present, so his paper remains in a sort of limbo, neither accepted nor shown to be false. I can assure you that many mathematicians despair that they will live to see a proof of RH, and would be delighted to learn of one.

================

By comparison, Schnorr’s paper is neither very long, nor (as far as I understand) does it contain math known only to a handful. Lattice algorithms seem to be fairly well-studied, so there might be dozens of professional mathematicians (and perhaps a similar number of grad students) able to follow the arguments in detail.

As of today, they don’t seem to be buying what Schnorr is selling. The window is still open for him to make his case, but he’s got to show his profession more than he has so far.

[1] The distinction is that although nobody expected a proof of Taniyama-Shimura, it had not resisted so many long and determined attacks as FLT had.
