5 Brilliant Mathematicians – 4 Crappy Commentaries

I still tend to call myself a historian of mathematics, although my historical interests have long since expanded to include a much wider field of science and technology; in fact I have recently been considering just calling myself a historian, to avoid being pushed into a ghetto by those who don’t take the history of science seriously. Whatever, I have never lost my initial love for the history of mathematics and will automatically follow any link offering some of the same. So it was that I arrived on the Mother Nature Network and a blog post titled 5 brilliant mathematicians and their impact on the modern world. The author, Shea Gunther, had indeed chosen 5 brilliant mathematicians: Isaac Newton, Carl Gauss, John von Neumann, Alan Turing and Benoit Mandelbrot. He had even managed to avoid the temptation of calling them ‘the greatest’ or something similar. However, a closer examination of his commentaries on his chosen subjects reveals some pretty dodgy, not to say downright crappy, claims, which I shall now correct in my usual restrained style.

He starts off fairly well on Newton with the following:

There aren’t many subjects that Newton didn’t have a huge impact in — he was one of the inventors of calculus, built the first reflecting telescope and helped establish the field of classical mechanics with his seminal work, “Philosophiæ Naturalis Principia Mathematica.” He was the first to decompose white light into its constituent colors and gave us, the three laws of motion, now known as Newton’s laws.

But then blows it completely with his closing paragraph:

We would live in a very different world had Sir Isaac Newton not been born. Other scientists would probably have worked out most of his ideas eventually, but there is no telling how long it would have taken and how far behind we might have fallen from our current technological trajectory.

This is the type of hagiographical claim made by fans of great scientists who have no real idea of the context in which their hero worked. Let’s examine, step by step, each of the achievements of Newton listed here and see if the claim made in this final paragraph actually holds up.

Ignoring the problems inherent in the claim that Newton invented calculus, which I’ve discussed here, the author acknowledges that Newton was only co-inventor together with Leibniz. Although Newton almost certainly developed his system first, it was Leibniz who published first, and it was his system that spread throughout Europe and eventually the world, so no change here if Isaac had not been born.

Newton did indeed construct the first functioning reflecting telescope but, as I explained here, the design was by no means the first. It would also be fifty years before John Hadley succeeded in repeating Newton’s feat and finally making the commercial production of reflecting telescopes viable. However, Hadley also succeeded in making working models of James Gregory’s reflecting telescope, which actually predated Newton’s, and it was the Gregorian that, principally in the hands of James Short, became the dominant model in the eighteenth century. Although, to be fair, one should mention that William Herschel made his discoveries with Newtonians. Once again our author’s claim fails to hold water.

Sticking with optics for the moment, it is a little known and even less acknowledged fact that the Bohemian physicus and mathematician Jan Marek Marci (1595–1667) actually decomposed white light into its constituent colours before Newton. Remaining for a time with optics, James Gregory, Francesco Maria Grimaldi, Christiaan Huygens and Robert Hooke were all on a level with Newton, although none of them wrote such an influential book on the subject as Newton’s Opticks. Nor was this influence all positive. Owing to the reputation Newton had won through the Principia, the Opticks became all dominant, blocking the adoption of the wave theory of light developed by Huygens and Hooke and even slowing down its acceptance in the nineteenth century when it was proposed by Fresnel and Young. If Newton hadn’t been born, optics might even have developed and advanced more quickly than it did.

This just leaves the field of classical mechanics, Newton’s real scientific monument. Now, as I’ve pointed out several times before, the three laws of motion were all borrowed by Newton from others, and the inverse square law of gravity was general public property in the second half of the seventeenth century. Newton’s true genius lay in his mathematical combination of the various elements to create a whole. The question is how quickly this synthesis might have come about had Newton never lived. Both Huygens and Leibniz had made substantial contributions to mechanics contemporaneously with Newton, and the succeeding generation of French and Swiss-German mathematicians created a synthesis of Newton’s, Leibniz’s and Huygens’ work; it is this synthesis that we know as the field of classical mechanics. Without Newton’s undoubtedly massive contribution this synthesis might have taken a little longer to come into being, but I don’t think the delay would have radically changed the world in which we live.

Like those of almost all great scientists, Newton’s discoveries were of their time, and he was only a fraction ahead of, and sometimes even behind, his rivals. His non-existence would probably not have had that much impact on the development of history.

Moving on to Gauss we have other problems. Our author again makes a good start:

Isaac Newton is a hard act to follow, but if anyone can pull it off, it’s Carl Gauss. If Newton is considered the greatest scientist of all time, Gauss could easily be called the greatest mathematician ever.

Very hyperbolic and hagiographic, but if anybody could be called the greatest mathematician ever then Gauss would be a serious candidate. However, in the next paragraph we go off the rails. The paragraph starts OK:

Carl Friedrich Gauss was born to a poor family in Germany in 1777 and quickly showed himself to be a brilliant mathematician. He published “Arithmetical Investigations,” a foundational textbook that laid out the tenets of number theory (the study of whole numbers).

So far so good but then our author demonstrates his lack of knowledge of the subject on a grand scale:

Without number theory, you could kiss computers goodbye. Computers operate, on the most basic level, using just two digits — 1 and 0

Here we have gone over to the binary number system, with which Gauss’ book on number theory has nothing to do whatsoever. In modern European mathematics the binary number system was first investigated in depth by Gottfried Leibniz in 1679, more than one hundred years before Gauss wrote his Disquisitiones Arithmeticae, which, as already stated, has nothing on the subject. The use of the binary number system in computing is an application of the two-valued symbolic logic of George Boole, the 1 and 0 standing for true and false in programming and for on and off in circuit design. All of which has nothing to do with Gauss. Gauss made so many epochal contributions to mathematics, physics, cartography, surveying and God knows what else, so why credit him with something he didn’t do?
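For anybody who wants to see what that Boolean connection actually looks like in practice, here is a toy sketch of my own (nothing to do with Gauss, or with Gunther’s article): Boole’s two-valued logic, with 1 and 0 standing for true and false, is enough to build binary arithmetic, as in this half adder assembled purely from AND and XOR.

```python
def half_adder(a, b):
    """Add two single bits; 1 and 0 double as True and False."""
    carry = a & b   # Boolean AND gives the carry bit
    total = a ^ b   # Boolean XOR gives the sum bit
    return carry, total

print(half_adder(1, 1))  # -> (1, 0), i.e. 1 + 1 = 10 in binary
print(half_adder(1, 0))  # -> (0, 1)
```

Chain a few of these together and you have the adders sitting in every processor: all of it Boole, none of it Gauss.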

Moving on to John von Neumann, we again have a case of credit being given where credit is not due, but to be fair to our author, this time he is probably not to blame for the misattribution. He ends his von Neumann description as follows:

Before his death in 1957, von Neumann made important discoveries in set theory, geometry, quantum mechanics, game theory, statistics, computer science and was a vital member of the Manhattan Project.

This paragraph is fine, and if Shea Gunther had chosen to feature von Neumann’s invention of game theory or his quantum logic I would have said fine, praised the writer for his knowledge and moved on without comment. However, instead our author dishes up one of the biggest myths in the history of the computer.

he went on to design the architecture underlying nearly every single computer built on the planet today. Right now, whatever device or computer that you are reading this on, be it phone or computer, is cycling through a series of basic steps billions of times over each second; steps that allow it to do things like render Internet articles and play videos and music, steps that were first thought up by John von Neumann.

Now any standard computer is called a von Neumann machine, in terms of its architecture, because of a paper that von Neumann published in 1945, First Draft of a Report on the EDVAC. This paper described the architecture of the EDVAC, one of the earliest stored-program computers, but von Neumann was not responsible for the design; the team led by Eckert and Mauchly was. Von Neumann had merely described and analysed their architecture. His publication caused massive problems for the design team because, the information now being in the public realm, they were no longer able to patent their innovations. Also, von Neumann’s name as author on the report meant that people, including our author, falsely believed that he had designed the EDVAC. Of historical interest is the fact that Charles Babbage’s Analytical Engine in the nineteenth century already possessed von Neumann architecture!
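For readers who have never met the term, here is a deliberately toy sketch of my own (an illustration of the general idea only, not anything taken from von Neumann’s report or the EDVAC design): in a von Neumann machine, instructions and data sit together in one read/write memory, and a single fetch-decode-execute loop works through them.

```python
def run(memory):
    """A toy fetch-decode-execute cycle over a single shared store."""
    acc, pc = 0, 0                 # accumulator and program counter
    while True:
        op, arg = memory[pc]       # fetch the instruction the counter points at
        pc += 1
        if op == "LOAD":           # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Cells 0-3 hold the program, cells 4-6 hold the data; both live in the same
# read/write store, which is the whole point of the architecture.
memory = {
    0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
    4: 2, 5: 3, 6: 0,
}
print(run(memory)[6])  # -> 5
```

Whether Babbage’s Analytical Engine really fits this description is exactly what the comments below argue about.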

Unsurprisingly, we walk straight into another couple of history-of-the-computer myths when we turn to Alan Turing. We start with the Enigma story:

During World War II, Turing bent his brain to the problem of breaking Nazi crypto-code and was the one to finally unravel messages protected by the infamous Enigma machine.

There were various versions of the Enigma machine and various codes used by different branches of the German armed forces. The Polish Cipher Bureau were the first to break an Enigma code, in 1932. Various other forms of the Enigma codes were broken by various teams at Bletchley Park without Turing. Turing was responsible for cracking the German Naval Enigma. The statement above denies credit to the Polish Cipher Bureau and to the roughly 9,000 other workers at Bletchley Park for their contributions to breaking Enigma.

Besides helping to stop Nazi Germany from achieving world domination, Alan Turing was instrumental in the development of the modern day computer. His design for a so-called “Turing machine” remains central to how computers operate today.

I’ve lost count of how many times I’ve seen variations on the claim in the above paragraph over the last eighteen months or so, all equally incorrect. What such comments demonstrate is that their authors actually have no idea what a Turing machine is or how it relates to computer design.

In 1936 Alan Turing, a mathematician, published a paper entitled On Computable Numbers, with an Application to the Entscheidungsproblem. This was in fact one of four contemporaneous solutions offered to a problem in meta-mathematics first broached by David Hilbert, the Entscheidungsproblem. The other solutions, which needn’t concern us here, apart from the fact that Post’s solution is strongly similar to Turing’s, were from Kurt Gödel, Alonzo Church and Emil Post. Entscheidung is German for decision, and the Entscheidungsproblem asks whether, for a given axiomatic system, it is possible with the help of an algorithm to decide if a given statement in that axiom system is true or false. The straightforward answer that all four men arrived at, by different strategies, is that it isn’t: there will always be undecidable statements within any sufficiently complex axiomatic system.

Turing’s solution to the Entscheidungsproblem is simple, elegant and ingenious. He hypothesised a very simple machine that reads and writes a potentially infinite tape, following a finite table of instructions; instructions that move the tape either right or left or simply stop the whole process. Through this analogy Turing was able to show that within an axiomatic system some problems would never be entscheidbar, or in English decidable. What Turing’s work does, on a very abstract level, is to delineate the maximum computability of any automated calculating system. Only much later, in the 1950s, after the invention of electronic computers, a process in which Turing also played a role, did it occur to people to describe the computational abilities of real computers with the expression ‘Turing machine’. A Turing machine is not a design for a computer; it is a term used to describe the capabilities of a computer.
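To make the abstraction a little more tangible, here is a toy illustration of my own (in modern textbook form, not Turing’s original notation): a finite table of rules, a read/write head and an unbounded tape, set up here to add one to a binary number. The point to notice is that this is a mathematical model of computation, a yardstick for what any automated calculating system can in principle do, not a blueprint for building a computer.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (symbol to write, head move, next state)."""
    tape = dict(enumerate(tape))              # sparse tape; unvisited cells are blank '_'
    for _ in range(max_steps):
        symbol = tape.get(head, "_")
        if (state, symbol) not in rules:      # no applicable rule: the machine halts
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move                          # -1 left, +1 right, 0 stay put
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Rules for binary increment: run right to the end of the number, then add 1 with carry.
rules = {
    ("start", "0"): ("0", +1, "start"),
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),
    ("carry", "0"): ("1", 0, "done"),
    ("carry", "_"): ("1", 0, "done"),
}

print(run_turing_machine(rules, "1011"))  # -> 1100 (11 + 1 = 12)
```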

To be quite open and honest, I don’t know enough about Benoit Mandelbrot and fractals to be able to say whether our author at least got that one right, so I’m going to cut him some slack and assume that he did. If he didn’t, I hope somebody who knows more about the subject than I do will provide the necessary corrections in the comments.

All of the errors listed above could have been easily avoided if the author of the article had cared in any way about historical accuracy and truth. However, as is all too often the case in the history of science, or in this case mathematics, people are prepared to dish up a collection of half-baked myths, misconceptions and, not to put too fine a point on it, crap, and think they are performing some sort of public service in doing so. Sometimes I despair.

 


Filed under History of Computing, History of Logic, History of Mathematics, History of Optics, History of Physics, History of science, Myths of Science, Newton

21 responses to “5 Brilliant Mathematicians – 4 Crappy Commentaries”

  1. The biggest puzzle for me is why Mandelbrot is on that list at all. Maybe his work got some good press, but it hardly seems of any importance in mathematics.

    • Bob O'H

      That was my thought too. The choice does seem to be geared towards the computer sciences. And Mandelbrot helped them draw pretty pictures.

  2. Michael Weiss

    I really enjoyed this post. I love this sort of alternative history exercise; I remember whiling away many happy lunchtime hours in high school debating the question, “If Lenin had slipped on a banana peel and broken his neck, would Russia have gone Communist?”

    Then of course there’s always the lagniappe of new (for me) gems, this time the details about Jan Marek Marci. Which brings me to a question about the history of optics.

    Newton’s famous letter to the Royal Society starts off, after one throat-clearing sentence: I procured me a Triangular glass-Prisme, to try therewith the celebrated Phænomena of Colours. Obviously producing a spectrum held no great novelty by 1666. Newton’s contribution, as I understood it, lay in the idea that the colors were pure and white light a mixture, as opposed to the idea that the prism somehow adulterated pure white light to produce colors. The idea, that is, and also the numerous supporting experiments.

    Did Marci or some other predecessor have this same theory, or was his innovation the first use of the prism?

    On the subject of the wave theory, I’ve run across the claim occasionally that Newton’s corpuscular theory wasn’t so wrong after all, since he did incorporate some wavish ideas (“fits of easy reflexion and easy refraction”), and anyway doesn’t the quantum theory of light say that it’s both a wave and a particle? E.T. Whittaker, for example, makes this argument in an introduction to my copy of the Opticks. Personally I think this comparison is nuts, err, wrong-headed, but I’d be interested in your take on it.

    A nit to pick about the Entscheidungsproblem. Gödel did not give a solution to this. He did give an independent definition of recursive function, as did Turing, Post, and Church+Kleene, and also Herbrand; such a formalization of the notion of computability is a prerequisite for any solution to the Entscheidungsproblem. Also of course Gödel’s proof of his 1931 result, the Incompleteness Theorem, contains the essential technique that Turing used in his solution. Finally, it was Turing’s paper that convinced Gödel that all the various equivalent definitions of “recursive” did in fact capture the intuitive concept of computability.

    people are prepared to dish up … crap and think they are performing some sort of public service in doing so. Sometimes I despair.

    Don’t despair! View their public service as providing fodder for your marvelous essays.

    • Marci, like Newton, actually came to the conclusion that white light is composed of coloured light. He also demonstrated, like Newton, that if light is split into its constituent colours, then each of the coloured rays, if refracted further, doesn’t change, thereby refuting the theory that colour is a darkening of white light through refraction. His theories of optics are by no means as deep or as extensive as Newton’s, and he got much wrong, but on this one central point he anticipated Newton.

      Newton, during his dispute over the nature of light with Hooke and Huygens in the 1670s, actually showed that all of his experimental results are consistent with a wave theory of light, producing a wave theory that was in fact superior to that of his critics. However, in his Opticks he stuck to a particle theory, his excursion into wave theory being forgotten. You are of course perfectly right in thinking that Newton was at least partially rehabilitated with the introduction of quantum theory, which does indeed propound a particle theory of light, the photon. This is something that historians of optics tend to ignore when they chastise Newton for hindering the acceptance of a wave theory.

      If I were being finicky I would say you are at least partially right on Gödel and the Entscheidungsproblem, but I didn’t want to go into too much detail. However, I will quote Martin Davis from the bible on the subject, his The Undecidable: “In the anthology, there are basic papers of Gödel, Church, Turing, and Post in which the class of recursive functions was singled out and seen to be just the class of functions that can be computed by finite algorithms”.

      • Michael Weiss

        Fascinating. I had no idea Newton went that far with wave theory (other than discovering Newton’s rings).

        Anent the Entscheidungsproblem: we have to distinguish three things: (a) the existence of undecidable propositions in formal systems such as Peano arithmetic (PA) — here Gödel bears the laurels unshared; (b) the lack of an algorithm to determine which statements of PA are provable, i.e., the Entscheidungsproblem for PA — here the wreaths go to Turing and Church, but not to Gödel, although the work borrows techniques from Gödel’s 1931 paper that settled (a); (c) the formal definition of a recursive function, and the Church-Turing thesis that this captures the intuitive notion of computability.

        As you note, many minds took part in (c). In his 1934 lectures, Gödel gave the modern definition of a general recursive function (giving credit to Herbrand for the key idea), but in a letter to Martin Davis, Gödel explicitly disavowed anticipating the Church-Turing thesis. In a postscript he later added to the 1934 lectures, Gödel stated that it was Turing’s paper (and not Church’s) that convinced him of the Church-Turing thesis.

        As an amusing sidelight, Kleene gave an account of his Ph.D. work under Church, on Church’s lambda-calculus, which led to the C-T thesis. Initially they doubted that it would be possible even to compute subtraction in the lambda-calculus! Kleene figured out how to do that, and successively overcame a whole series of computational challenges that his advisor set him. Eventually Church became convinced that all computable functions could be computed with the lambda-calculus.

        Post was an amazing fellow. As quoted in the Davis book you mention: “As for any claims I might make, perhaps the best I can say is that I would have proved Gödel’s Theorem in 1921 — had I been Gödel.”

  3. Michael Weiss

    I decided to have another look at Dawson’s Gödel bio, and ran across this endnote [470], vis-a-vis the history of the computer:

    Herman H. Goldstine has given a detailed account of the IAS computer project in his book [The Computer from Pascal to von Neumann]. In assessing the significance of von Neumann’s contributions to computer science, Goldstine states (p.191–192) “Von Neumann was the first person, as far as I am concerned, who understood explicitly that a computer essentially performed logical functions… Today this sounds so trite as to be almost unworthy of mention. Yet in 1944 it was a major advance in thinking.” It is now clear that Turing had come to the same realization independently and at about the same time. Just how revolutionary an insight it was may be judged from a statement made by another computer pioneer, Howard Aiken, in 1956: “If it should turn out that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence I have ever encountered” (Ceruzzi 1983, p.43; quoted in Davis 1987, p.140).

    • I fail to see the relevance of this quote to my post. However, I think Goldstine is probably wrong. I suspect that in this case von Neumann was anticipated by Turing, Claude Shannon, Konrad Zuse and Norbert Wiener, if not by others.

      • Michael Weiss

        I don’t know that much about the history of computers (as opposed to math logic and recursion theory); I posted the comment as an interesting sidelight, especially the Aiken quote, not as taking issue with anything you wrote. I haven’t read Goldstine’s book, which is a bit old anyway (1972).

    • The quote from Aiken is actually taken out of context. There is a short paper by Jack Copeland, “Unfair to Aiken”, where the true context is explained. Aiken was not talking about the theoretical limitations of formal systems, but about the practical limitations of the scientific computers of his time when used for business accounting. His talk actually made a lot of engineering sense.

      To this I may add that even Turing equivalence is often taken out of context, when applied to computer architectures as opposed to computer languages. Turing equivalence needs a potentially infinite store for intermediate results, so no machine ever built is actually Turing equivalent. On the other hand, if you modify the definition of Turing equivalence to take finite memory into account, then it easily turns out that nearly all machines become Turing equivalent according to the new definition. The fact is, equivalence can be obtained in ridiculously convoluted ways starting from very simple mechanisms. I think Turing equivalence is used as a placeholder for what one really wants to say: that a machine can implement any real-world algorithm in a reasonable way. This is a fuzzy, engineering statement, not a mathematical one.

  4. The portrayal of Mandelbrot’s discovery of fractal geometry in the bio is similar to the case of Newton discovering calculus, especially given the sparse bio the Mother Nature News article gives him. Fractal curves had been studied at least since the early 1900s, since that is when Koch developed his snowflake fractal, though not with full awareness of what they were. Hausdorff’s generalization of dimension allowed for fractional dimensions, which is the defining feature of a fractal. I’m not sure if Hausdorff himself studied fractals. Mandelbrot was one of the first to be able to use a computer to study fractals in depth and to bring together different ideas into the field of fractal geometry. But I think other people were getting there. The brilliant part that seems ironic for a nature publication to leave out is Mandelbrot’s realization of the breadth of the field by discovering the prevalence of fractals in nature.

    • Michael Weiss

      The brilliant part that seems ironic for a nature publication to leave out is Mandelbrot’s realization of the breadth of the field by discovering the prevalence of fractals in nature.

      Even there, Mandelbrot was inspired by earlier results of Lewis Fry Richardson, as he notes in his celebrated paper “How Long Is the Coast of Britain?”

      The Mandelbrot set is probably his most famous discovery. There he was following in the footsteps of Gaston Julia and Pierre Fatou, who studied Julia sets and Fatou sets in 1917-1918. I’m not sure if the debt was explicit in that case.

      Shoulders of giants.

  5. Thank you for this wonderful and informative blog. I have discovered it today, but I already know that I am going to spend much time reading what you are writing here 🙂

    I am a bit confused by the use of the “von Neumann architecture” locution, both by you and the blog author you are quoting.
    I am not questioning the fact that von Neumann very likely had little to do with its invention. My problem, instead, is that I used to think that “von Neumann architecture” explicitly referred to the storage of both programs and data in the same read/write memory — a synonym for “stored program architecture”. But both of you, instead, seem to use the term to describe an architecture with one sequential instruction interpreter. If this is what you mean, then you are right that this should more properly be called a “Babbage architecture”. Even if we don’t want to go that far, this architecture was already in use in the Harvard Mark I, designed and built by Howard Aiken and IBM, and in the Zuse machines. All of these machines (including Babbage’s) could interpret a sequential program built from a fixed set of instructions coded by holes in some kind of tape. None of them, however, is a stored program architecture, since the program is read from a read-only memory (the tape) distinct from the data. Therefore, I would never have called Babbage’s Analytical Engine a von Neumann machine, as you do above. Am I missing something?

    • Your understanding of von Neumann architecture is perfectly correct. Babbage’s Analytical Engine does have a stored program architecture.

      • Thanks for the quick reply.

        “Babbage’s Analytical Engine does have a stored program architecture.”

        This is completely (and indeed exciting) news to me. I had never heard of this before, and I have read several papers on Babbage from the IEEE Annals of the History of Computing, including those of the late Alan Bromley. Can you point me to some document where I can read more about it? I would very much appreciate that.

      • I am sorry, but I have to insist on this. Either you or I must have misunderstood the definition of von Neumann architecture, or the architecture of the Analytical Engine. Or there must be some other plan by Babbage completely different from the one studied by Bromley and the one that Swade and Graham-Cumming are going to build.

        As far as I know, the Analytical Engine does not fetch instructions from the same read/write store where the data lives. You have the store, and you have the units that read the punched cards. The machine has no direct means to interpret data as instructions, nor can the punched program modify itself. This is a level of sophistication that it is historically inappropriate to attribute to Babbage.

  6. The machine has no direct means to interpret data as instructions, nor can the punched program modify itself.

    I know von Neumann made a big deal of this. It is crucial for compilation, but other than that, how significant is this feature for modern computing?

    • Not much, really. Self modification is also usually prevented for normal (userspace) programs, and discouraged in general.

      I don’t think it is even that crucial for compilation, since there you have a program (the compiler) that writes some data that only later is loaded and run as a program. You can easily provide for this in a non-von Neumann machine: an Analytical Engine “compiler” program could have output the object code on punched cards, and the operator could then have fed those cards back into the program readers.

      Virtual memory, perhaps, is something that really needs runtime interchangeability of data and code, since there you need to swap pages in memory, and it would be very limiting if you could only swap code pages with other code pages and data pages with other data pages.

      At any rate, having both code and data in the same read/write memory is what characterizes a von Neumann architecture (as I understand it), and this is the interface that most modern machines offer, to the system programmers at least. It does involve a further step of mental sophistication w.r.t. Babbage’s architecture (as I understand it), and it has played a role in the development of computer engineering and science.

      • ateixeira

        Hi Thony!

        Sorry to bother you with an off-topic question, but I’d like to know if you know of any good online resources on the history of quantum mechanics, be it a list of blogs or a list of posts.

        Thanks in advance.

  7. @ateixeira:
    This isn’t a direct answer to your question, but are you familiar with the stackexchange family of websites? They are forums for asking questions just like yours. The ideal place is the History of Science and Math stackexchange — or would be, if it existed! It’s been proposed, and is in the “commit” phase right now, which means that more people have to promise to participate on the site before The Powers That Be will create it. I urge you (and other readers of this blog) to do so at area51. (Area51 is the subsite devoted to making proposals for new stackexchange sites. You do have to register in order to commit, but that step is trivial.)

    In the meantime, I’d suggest posting your question to the physics stackexchange site, with the tag “history”. Or just browse that tag in physics, you’ll find lots of interesting stuff.

  8. Daniel N.

    So, Mandelbrot is a greater mathematician than Euler. Good to know :/

    • Well actually, the subhead of the Mother Nature article says just, “We owe a great debt to scores of mathematicians who helped lay the foundation for our modern society with their discoveries. Here are some of the most important.” It doesn’t say that these five are more important than any of those not mentioned.

      That said, bracketing Mandelbrot with the other four does seem rather odd, but I think the “pretty pictures” aspect (mentioned by another commenter) explains that well enough.
