This afternoon I had the pleasure of listening to Daniel Bernstein give a lecture on point counting methods at the CRM in Montreal. His talk title was “*Counting points as a video game*”, and going in I imagined him announcing a new video game that would use latent CPU cycles to help compute points on curves. It turns out this was not at all what Bernstein had in mind; rather, after a preamble on point counting, he discussed using GPUs (graphics processing units) to count points. His point was that the way computers work in practice does not always align with the way we measure the complexity of an algorithm. That is, certain algorithms that we think are faster than others (say, multiplying numbers versus multiplying large matrices) might not be that fast on a conventional CPU. When performing extremely intensive computations, you are bound to run up against the physical limitations of the computer's architecture.

A lot of work has gone into optimizing GPUs to perform the computations needed for displaying graphics as quickly as possible. As a result, certain mathematical computations, namely those that resemble graphics workloads, are particularly well-suited to GPU architecture. This isn’t a new observation, and you can read a lot more about it here. I’d never heard of it prior to this talk, though, and it was interesting. Bernstein of course gave a better explanation of this phenomenon than I can, along with a nice example relating to point counting.
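To make the idea of GPU-friendly arithmetic a little more concrete, here is a toy sketch of my own (not something Bernstein presented): brute-force point counting on an elliptic curve $y^2 = x^3 + ax + b$ over $\mathbb{F}_p$, written so that every $x$-value is processed at once as one big array. This data-parallel style, simulated here with NumPy on a CPU, is exactly the kind of computation that maps well onto GPU hardware; it is only practical for small $p$, of course.

```python
import numpy as np

def count_points(p, a, b):
    """Brute-force count of points on y^2 = x^3 + a*x + b over F_p,
    including the point at infinity. All x-values are handled as one
    array operation -- the data-parallel style GPUs are built for."""
    ys = np.arange(p)
    # num_sqrt[r] = number of y in F_p with y^2 = r (mod p)
    num_sqrt = np.bincount((ys * ys) % p, minlength=p)
    xs = np.arange(p)
    rhs = (xs**3 + a * xs + b) % p  # right-hand side for every x at once
    return int(num_sqrt[rhs].sum()) + 1  # +1 for the point at infinity

# Example: y^2 = x^3 + x + 1 over F_5 has 9 points.
print(count_points(5, 1, 1))  # -> 9
```

The per-`x` work is independent, so on a real GPU each thread could handle one `x` (or a chunk of them); serious point-counting algorithms are far cleverer than this, but the parallelism pattern is the same.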

During the question session after the talk, someone asked about other physical limitations of CPUs, in particular their dimension. Bernstein mentioned that people have thought about higher-dimensional CPUs before; the appeal is that information can travel around more easily in higher dimensions. Interestingly, another physical barrier has prevented us from leaping from 2D to 3D: heat dissipation becomes a real problem in three-dimensional circuits. Researchers are working to overcome these obstacles, though, as a quick Google search reveals.

This discussion of moving from two- to three-dimensional processors prompted Kiran Kedlaya to wonder aloud, tongue-in-cheek, about processors of fractal dimension; as he put it, “there’s a big gap between two and three”. Ideas like this are one of the reasons why I love mathematics. They could be genius, or perhaps completely daft, but in either case they’re very intriguing! (Also, in the interest of not misrepresenting what was said, I believe Kedlaya had in mind viewing the global computer network as some hulking beast of fractal dimension, rather than a single CPU of fractal dimension.)