Category Archives: History of Logic

Bertrand Russell did not write Principia Mathematica

Yesterday would have been Bertrand Russell’s 144th birthday and numerous people on the Internet took notice of the occasion. Unfortunately several of them, including some who should know better, included in their brief descriptions of his life and work the claim that he was the author of Principia Mathematica. He wasn’t. At this point some readers will probably be thinking that I have gone mad. Anybody who has an interest in the history of modern mathematics and logic knows that Bertrand Russell wrote Principia Mathematica. Sorry, he didn’t! The three volumes of Principia Mathematica were co-authored by Alfred North Whitehead and Bertrand Russell.


Now you might think that I’m just splitting hairs but I’m not. If you note the order in which the authors are named you will observe that they are not listed alphabetically but that Whitehead is listed first, ahead of Russell. This is because Whitehead, being senior to Russell in both years and status within the Cambridge academic hierarchy, was considered to be the lead author. In fact Whitehead had been both Russell’s teacher, as an undergraduate, and his examiner in his viva voce, where, by his own account, he gave Russell a hard time because he knew that it was the last time that he would be Russell’s mathematical superior.

Alfred North Whitehead

Both of them were interested in metamathematics and had published books on the subject: Whitehead’s A Treatise on Universal Algebra (1898) and Russell’s The Principles of Mathematics (1903). Both of them were working on second volumes of their respective works when they decided to combine forces on a joint work, the result of that decision being the monumental three volumes of Principia Mathematica (Vol. I, 1910, Vol. II, 1912, Vol. III, 1913). According to Russell’s own account the first two volumes were a true collaborative effort, whilst volume three was almost entirely written by Whitehead.

Bertrand Russell 1907
Source: Wikimedia Commons

People referring to Russell’s Principia Mathematica instead of Whitehead and Russell’s Principia Mathematica is nothing new, but I have the feeling that it is becoming more common as the years progress. This is not a good thing, because it amounts to a gradual erasure, at least on a semi-popular level, of Alfred Whitehead’s important contributions to the history of logic and metamathematics. I think this is partially due to the paths that their lives took after the publication of Principia Mathematica.

The title page of the shortened version of the Principia Mathematica to *56
Source: Wikimedia Commons

Whilst Russell, amongst his many other activities, remained very active at the centre of the European logic and metamathematics community, Whitehead turned, after the First World War and comparatively late in life, to philosophy and in particular metaphysics, going on to found what has become known as process philosophy, which became particularly influential in the USA.

In history, as in academia in general, getting your facts right is one of the basics. So if you have occasion to refer to Principia Mathematica then please remember that it was written by Whitehead and Russell, not just by Russell, and if you are talking about Bertrand Russell then he was the co-author of Principia Mathematica, not its author.


Filed under History of Logic, History of Mathematics

Boole, Shannon and the Electronic Computer

Photo of George Boole by Samuel Prout Newcombe
Source: Wikimedia Commons

In 1847, the self-taught English mathematician George Boole (1815–1864), whose two hundredth birthday we celebrated last year, published a very small book, little more than a pamphlet, entitled The Mathematical Analysis of Logic. This was the first modern book on symbolic or mathematical logic and contained Boole’s first efforts towards an algebraic logic of classes.


Although very ingenious, and only the second published non-standard algebra (Hamilton’s quaternions was the first), Boole’s work attracted very little attention outside of his close circle of friends. His friend, Augustus De Morgan, would falsely claim that his own Formal Logic and Boole’s work were published on the same day; they were actually published several days apart, but their almost simultaneous appearance does signal a growing interest in formal logic in the early nineteenth century. Boole went on to publish a much improved and expanded version of his algebraic logic in his An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities in 1854.


The title points to an interesting aspect of Boole’s work, in that it is an early example of structural mathematics. In structural mathematics, mathematicians set up formal axiomatic systems, which are capable of various interpretations, and investigate the properties of the structure rather than any one specific interpretation, anything proved of the structure being valid for all interpretations. Structural mathematics lies at the heart of modern mathematics and its introduction is usually attributed to David Hilbert, but in his Laws of Thought Boole anticipated Hilbert by half a century. The title of the book already mentions two interpretations of the axiomatic system contained within, logic and probability, and the book actually contains more; in the first instance Boole’s system is a two-valued logic of classes or, as we would probably now call it, a naïve set theory. Again, despite its ingenuity, the work was initially largely ignored until after Boole’s death ten years later.
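To make the structural point concrete, here is a minimal sketch (my own illustration, not Boole’s notation): the same algebraic laws can be checked under two different interpretations, one reading the operations as “and”/“or” on truth values, the other as intersection/union on classes. All the function names here are hypothetical.

```python
from itertools import combinations

# Interpretation 1: two truth values, with "and" and "or" as the operations.
def laws_hold_for_truth_values():
    vals = [False, True]
    return all(
        (x and x) == x              # Boole's idempotence law, x·x = x
        and (x and (x or y)) == x   # absorption
        and (x or y) == (y or x)    # commutativity
        for x in vals for y in vals
    )

# Interpretation 2: classes (subsets of a universe), intersection and union.
def laws_hold_for_classes():
    universe = {1, 2, 3}
    subsets = [set(c) for r in range(len(universe) + 1)
               for c in combinations(universe, r)]
    return all(
        (X & X) == X                # the same three laws, reinterpreted
        and (X & (X | Y)) == X
        and (X | Y) == (Y | X)
        for X in subsets for Y in subsets
    )

# The same laws, two readings: the "structural" idea in miniature.
print(laws_hold_for_truth_values(), laws_hold_for_classes())  # True True
```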

As the nineteenth century progressed the interest in Boole’s algebraic logic grew and his system was modified and improved. Most importantly, Boole’s original logic contained no method of quantification, i.e. there was no simple way of expressing in symbols the statements, “there exists an X” or “for all X”, statements fundamental to mathematical proofs; a modern rendering of what was missing is sketched below. The first symbolic logic with quantification was Gottlob Frege’s, which first appeared in 1879. In the following years both Charles Sanders Peirce in America and Ernst Schröder in Germany introduced quantification into Boole’s algebraic logic. Both Peirce’s group at Johns Hopkins, which included Christine Ladd-Franklin, or rather simply Christine Ladd as she was then, and Schröder produced substantial works of formal logic using Boole’s system. There is a popular misconception that Boole’s logic disappeared without major impact, to be replaced by the supposedly superior mathematical logic of Whitehead and Russell’s Principia Mathematica. This is not true. In fact Whitehead’s earlier pre-Principia work was carried out in Boolean algebra, as were the very important meta-logical works of both Löwenheim and Skolem. Alfred Tarski’s early work was also done in Boole’s algebra and not the logic of PM. PM only supplanted Boole with the publication of Hilbert and Ackermann’s Grundzüge der theoretischen Logik in 1928.
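For readers who want to see what that missing machinery looks like, here is the modern notation (post-Frege, and of course nothing Boole ever wrote) for the two quantifiers, applied to the statement “every number has a successor”:

```latex
% Existential and universal quantification, the machinery Boole's 1847
% system lacked: "for all x there exists a y such that y = x + 1".
\forall x \, \exists y \; (y = x + 1)
```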

It now seemed that Boole’s logic was destined for the rubbish bin of history, a short-lived curiosity that was no longer relevant, but that was to change radically in the next decade in the hands of an American mathematical prodigy, Claude Shannon, who was born on 30 April 1916.

Claude Shannon
Photo by Konrad Jacobs
Source: Wikimedia Commons
(Konrad Jacobs was one of my maths teachers and a personal friend)

Shannon entered the University of Michigan in 1932 and graduated with a double bachelor’s degree in engineering and mathematics in 1936. Whilst at Michigan he took a course in Boolean logic. He went on to MIT where, under the supervision of Vannevar Bush, he worked on Bush’s differential analyser, a mechanical analogue computer designed to solve differential equations. It was whilst he was working on the electrical circuitry for the differential analyser that Shannon realised that he could apply Boole’s algebraic logic to electrical circuit design, using the simple two-valued logical functions as switching gates in the circuitry. This simple but brilliant insight became Shannon’s master’s thesis in 1937, when Shannon was just twenty-one years old. It was published as a paper, A Symbolic Analysis of Relay and Switching Circuits, in the Transactions of the American Institute of Electrical Engineers in 1938. Described by psychologist Howard Gardner as “possibly the most important, and also most famous, master’s thesis of the century”, this paper formed the basis of all future computer hardware design. Shannon had delivered the blueprint for what are now known as logic circuits and provided a new lease of life for Boole’s logical algebra.
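The core of the insight can be sketched in a few lines (a toy illustration of my own, not Shannon’s notation): switches wired in series behave like Boolean AND, switches wired in parallel like Boolean OR, and from those parts any truth function can be built.

```python
def series(a: bool, b: bool) -> bool:
    """Two switches in series: current flows only if both are closed (AND)."""
    return a and b

def parallel(a: bool, b: bool) -> bool:
    """Two switches in parallel: current flows if either is closed (OR)."""
    return a or b

def inverter(a: bool) -> bool:
    """A normally-closed relay contact: open when energised (NOT)."""
    return not a

# Any truth function can now be "wired up", e.g. exclusive-or:
def xor(a: bool, b: bool) -> bool:
    return parallel(series(a, inverter(b)), series(inverter(a), b))

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(xor(a, b)))   # the 0/1 truth table
```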


Later Shannon would go on to become one of the founders of information theory, which lies at the heart of the computer age and the Internet, but it was that first insight, combining Boolean logic with electrical circuit design, that first made the computer age a viable prospect. Shannon would later play down the brilliance of his insight, claiming that it was merely the product of his having access to both areas of knowledge, Boolean algebra and electrical engineering, and thus nothing special; but it was seeing that the one could be interpreted as the other, anything but an obvious step, that makes the young Shannon’s insight one of the greatest intellectual breakthroughs of the twentieth century.


Filed under History of Computing, History of Logic

Mega inanity

Since the lead up to the Turing centennial in 2012 celebrating the birth of one of the great meta-mathematicians of the twentieth century, Alan Mathison Turing, I have observed with increasing horror the escalating hagiographic accounts of Turing’s undoubted historical achievements and the resulting perversion of the histories of twentieth-century science, mathematics and technology and in particular the history of computing.

This abhorrence on my part is not based on a mere nodding acquaintance with Turing’s name but on a deep and long-time engagement with the man and his work. I served my apprenticeship as a historian of science over many years in a research project on the history of formal or mathematical logic. Formal logic is one of the so-called formal sciences, the others being mathematics and informatics (or computer science). I have spent my whole life studying the history of mathematics with a special interest in the history of computing, both in its abstract form and in its technological realisation in all sorts of calculating aids and machines. I also devoted a substantial part of my formal study of philosophy to the study of the philosophy of mathematics and the logical, meta-logical and meta-mathematical problems that this discipline, some would say unfortunately, generates. The histories of all of these intellectual streams flow together in the first half of the twentieth century in the work of such people as Leopold Löwenheim, Thoralf Skolem, Emil Post, Alfred Tarski, Kurt Gödel, Alonzo Church and Alan Turing amongst others. These people created a new discipline known as meta-mathematics whilst carrying out a programme delineated by David Hilbert.

Attempts to provide a solid foundation for mathematics using set theory and logic had run into serious problems with paradoxes. Hilbert thought the solution lay in developing each mathematical discipline as a strict axiomatic system and then proving that each axiomatic system possessed a set of required characteristics, thus ensuring the solidity and reliability of a given system. This concept of proving theorems about whole axiomatic systems is the meta- of meta-mathematics. The properties that Hilbert required for his axiomatic systems were consistency, which means the system should be shown to be free of contradictions; completeness, meaning that all of the theorems that belong to a particular discipline are deducible from its axiom system; and finally decidability, meaning that for any well-formed statement within the system it should be possible to produce an algorithmic process to decide if the statement is true within the axiomatic system or not. An algorithm is like a cookery recipe: if you follow the steps correctly you will produce the right result.
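To make “decidability” less abstract, here is a hedged sketch (my own example, nothing Hilbert wrote) of one system where the demand can actually be met: propositional logic, where a brute-force truth-table check is an algorithm that decides whether any given formula is a tautology.

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide whether `formula` (a function from a truth assignment to bool)
    is true under every possible assignment to `variables`."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if not formula(assignment):
            return False      # a counterexample: not a tautology
    return True               # true under every assignment: decidably valid

# Example: Peirce's law ((p -> q) -> p) -> p, with "a -> b" as (not a) or b.
implies = lambda a, b: (not a) or b
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
print(is_tautology(peirce, ["p", "q"]))   # True
```

Hilbert’s hope was that every mathematical discipline would admit such a decision procedure; as the next paragraph notes, it was shown that sufficiently rich systems do not.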

The meta-mathematicians listed above showed by very ingenious methods that none of Hilbert’s aims could be fulfilled, bringing the dream of a secure foundation for mathematics crashing to the ground. Turing’s solution to the problem of decidability is an ingenious thought experiment, for which he is justifiably regarded as one of the meta-mathematical gods of the twentieth century. It was this work that led to him being employed as a code breaker at Bletchley Park during WW II and eventually to the fame and disaster of the rest of his too short life.

Unfortunately the attempts to restore Turing’s reputation since the centenary of his birth in 2012 have led to some terrible misrepresentations of his work and its consequences. I thought we had reached a low point in the ebb and flow of the centenary celebrations, but the release of “The Imitation Game”, the Alan Turing biopic, has produced a new series of false and inaccurate statements in the reviews. I was pleasantly surprised to see several reviews which attempt to correct some of the worst historical errors in the film. You can read a collection of reviews of the film in the most recent edition of the weekly histories of science, technology and medicine links list Whewell’s Gazette. Not having seen the film yet I can’t comment, but I was stunned when I read the following paragraph from the ABC News review of the film written by Alyssa Newcomb. It’s so bad you can only file it under: you can’t make this shit up.

The “Turing Machine” was the first modern computer to logically process information, running on interchangeable software and essentially laying the groundwork for every computing device we have today — from laptops to smartphones.

Before I analyse this train wreck of a historical statement I would just like to emphasise that this is not the Little Piddlington School Gazette, whose enthusiastic but slightly slapdash twelve-year-old film critic got his facts a little mixed up, but a review that appeared on the website of a major American media company, and as such it is totally unacceptable however you view it.

The first compound statement contains a double whammy of mega-inane falsehood and I had real problems deciding where to begin, finally plumping for the “first modern computer to logically process information, running on interchangeable software”. Alan Turing had nothing to do with the first such machine, the honour going to Konrad Zuse’s Z3, which Zuse completed in 1941. The first such machine in whose design and construction Alan Turing was involved was the ACE produced at the National Physical Laboratory, in London, in 1949. In the intervening years Atanasoff and Berry, Tommy Flowers, Howard Aiken, as well as Eckert and Mauchly, had all designed and constructed computers of various types and abilities. To credit Turing with the sole responsibility for our digital computer age is not only historically inaccurate but also highly insulting to all the others who made substantial and important contributions to the evolution of the computer. Many, many more than I’ve named here.

We now turn to the second error contained in this wonderfully inane opening statement and return to the subject of meta-mathematics. The “Turing Machine” is not a computer at all; it is Alan Turing’s truly ingenious thought-experiment solution to Hilbert’s decidability problem. Turing imagined a very simple machine that consists of a scanning-reading head and an infinite tape that runs under the scanning head. The head can read instructions on the tape and execute them, moving the tape right or left or doing nothing. The problem then reduces to the question of which sets of instructions on the tape eventually come to a stop (decidable) and which lead to an infinite loop (undecidable). Turing developed this idea into a machine capable of computing any computable function (a universal Turing Machine) and thus created a theoretical model for all computers. This is of course a long way from a practical, real mechanical realisation, i.e. a computer, but it does provide a theoretical measure with which to describe the capabilities of a mechanical computing device. A computer that is the equivalent of a universal Turing Machine is called Turing complete. For example, Zuse’s Z3 was Turing complete whereas Colossus, the computer designed and constructed by Tommy Flowers for decoding work at Bletchley Park, was not.
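For the curious, the machine just described fits in a dozen lines of code. This is a minimal sketch matching the description above; the rule set is my own toy example, not one of Turing’s.

```python
def run(rules, tape, state="start", steps=1000):
    """Simulate a one-tape Turing machine; `rules` maps (state, symbol) to
    (symbol to write, head move R/L/N, next state). '_' is the blank."""
    cells = dict(enumerate(tape))          # a sparse, effectively infinite tape
    head = 0
    for _ in range(steps):
        if state == "halt":                # this instruction set comes to a stop
            return "".join(cells.get(i, "_")
                           for i in range(min(cells), max(cells) + 1))
        write, move, state = rules[(state, cells.get(head, "_"))]
        cells[head] = write
        head += {"R": 1, "L": -1, "N": 0}[move]
    return None                            # step budget exhausted: maybe looping

# Toy rules: flip every bit until the first blank, then halt.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "N", "halt"),
}
print(run(rules, "1011"))   # -> "0100_"
```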

Turing’s work played and continues to play an important role in the theory of computation but historically had very little effect on the development of real computers. Attributing the digital computer age to Turing and his work is not just historically wrong but, as I already stated above, highly insulting to all of those who really did bring about that age. Turing is a fascinating, brilliant and, because of what happened to him as a result of the persecution of homosexuals, tragic figure in the histories of mathematics, logic and computing in the twentieth century, but attributing achievements to him that he didn’t make does not honour his memory, which certainly should be honoured, but ridicules it.

I should, in fairness to the author of the film review that I took as the motivation for this post, say that she seems to be channelling misinformation from the film’s distributors, as I’ve read very similar stupid claims in other previews and reviews of the film.


Filed under History of Computing, History of Logic, History of Mathematics, Myths of Science

5 Brilliant Mathematicians – 4 Crappy Commentaries

I still tend to call myself a historian of mathematics, although my historical interests have long since expanded to include a much wider field of science and technology; in fact I have recently been considering just calling myself a historian, to avoid being pushed into a ghetto by those who don’t take the history of science seriously. Whatever, I have never lost my initial love for the history of mathematics and will automatically follow any link offering some of the same. So it was that I arrived on the Mother Nature Network and a blog post titled 5 brilliant mathematicians and their impact on the modern world. The author, Shea Gunther, had actually chosen five brilliant mathematicians, namely Isaac Newton, Carl Gauss, John von Neumann, Alan Turing and Benoit Mandelbrot, and had even managed to avoid the temptation of calling them ‘the greatest’ or something similar. However a closer examination of his commentaries on his chosen subjects reveals some pretty dodgy, not to say downright crappy, claims, which I shall now correct in my usual restrained style.

He starts off fairly well on Newton with the following:

There aren’t many subjects that Newton didn’t have a huge impact in — he was one of the inventors of calculus, built the first reflecting telescope and helped establish the field of classical mechanics with his seminal work, “Philosophiæ Naturalis Principia Mathematica.” He was the first to decompose white light into its constituent colors and gave us, the three laws of motion, now known as Newton’s laws.

But then blows it completely with his closing paragraph:

We would live in a very different world had Sir Isaac Newton not been born. Other scientists would probably have worked out most of his ideas eventually, but there is no telling how long it would have taken and how far behind we might have fallen from our current technological trajectory.

This is the type of hagiographical claim made by fans of great scientists who have no real idea of the context in which their hero worked. Let’s examine step by step each of the achievements of Newton listed here and see if the claim made in this final paragraph actually holds up.

Ignoring the problems inherent in the claim that Newton invented calculus, which I’ve discussed here, the author acknowledges that Newton was only co-inventor together with Leibniz, and although Newton almost certainly developed his system first, it was Leibniz who published first and it was his system that spread throughout Europe and eventually the world; so no changes here if Isaac had not been born.

Newton did indeed construct the first functioning reflecting telescope but, as I explained here, it was by no means the first reflecting telescope. It would also be fifty years before John Hadley succeeded in repeating Newton’s feat and finally making the commercial production of reflecting telescopes viable. However Hadley also succeeded in making working models of James Gregory’s reflecting telescope, which actually predated Newton’s, and it was the Gregorian that, principally in the hands of James Short, became the dominant model in the eighteenth century. Although to be fair one should mention that William Herschel made his discoveries with Newtonians. Once again our author’s claim fails to hold water.

Sticking with optics for the moment, it is a little known and even less acknowledged fact that the Bohemian physicus and mathematician Jan Marek Marci (1595–1667) actually decomposed white light into its constituent colours before Newton. Remaining for a time with optics, James Gregory, Francesco Maria Grimaldi, Christiaan Huygens and Robert Hooke were all on a level with Newton, although none of them wrote such an influential book on the subject as Newton’s Opticks. Now this was not all positive. Due to the influence won through the Principia, the Opticks became all-dominant, preventing the introduction of the wave theory of light developed by Huygens and Hooke and even slowing down its acceptance in the nineteenth century when proposed by Fresnel and Young. If Newton hadn’t been born optics might even have developed and advanced more quickly than it did.

This just leaves the field of classical mechanics, Newton’s real scientific monument. Now, as I’ve pointed out several times before, the three laws of motion were all borrowed by Newton from others and the inverse square law of gravity was general public property in the second half of the seventeenth century. Newton’s true genius lay in his mathematical combination of the various elements to create a whole. Now the question is how quickly this synthesis might have come about had Newton never lived. Both Huygens and Leibniz had made substantial contributions to mechanics contemporaneously with Newton, and the succeeding generation of French and Swiss-German mathematicians created a synthesis of Newton’s, Leibniz’s and Huygens’ work; it is this synthesis that we know as the field of classical mechanics. Without Newton’s undoubtedly massive contribution this synthesis might have taken a little longer to come into being, but I don’t think the delay would have radically changed the world in which we live.

Like those of almost all great scientists, Newton’s discoveries were of their time and he was only a fraction ahead of, and sometimes even behind, his rivals. His non-existence would probably not have had that much impact on the development of history.

Moving on to Gauss, we find other problems. Our author again makes a good start:

Isaac Newton is a hard act to follow, but if anyone can pull it off, it’s Carl Gauss. If Newton is considered the greatest scientist of all time, Gauss could easily be called the greatest mathematician ever.

Very hyperbolic and hagiographic, but if anybody could be called the greatest mathematician ever then Gauss would be a serious candidate. However in the next paragraph we go off the rails. The paragraph starts OK:

Carl Friedrich Gauss was born to a poor family in Germany in 1777 and quickly showed himself to be a brilliant mathematician. He published “Arithmetical Investigations,” a foundational textbook that laid out the tenets of number theory (the study of whole numbers).

So far so good but then our author demonstrates his lack of knowledge of the subject on a grand scale:

Without number theory, you could kiss computers goodbye. Computers operate, on a the most basic level, using just two digits — 1 and 0

Here we have gone over to the binary number system, with which Gauss’s book on number theory has nothing to do whatsoever. In modern European mathematics the binary number system was first investigated in depth by Gottfried Leibniz in 1679, more than one hundred years before Gauss wrote his Disquisitiones Arithmeticae, which, as already stated, contains nothing on the subject. The use of the binary number system in computing is an application of the two-valued symbolic logic of George Boole, the 1 and 0 standing for true and false in programming and on and off in circuit design. All of which has nothing to do with Gauss. Gauss made so many epochal contributions to mathematics, physics, cartography, surveying and god knows what else, so why credit him with something he didn’t do?

Moving on to John von Neumann we again have a case of credit being given where credit is not due but to be fair to our author, this time he is probably not to blame for this misattribution.  Our author ends his von Neumann description as follows:

Before his death in 1957, von Neumann made important discoveries in set theory, geometry, quantum mechanics, game theory, statistics, computer science and was a vital member of the Manhattan Project.

This paragraph is fine and if Shea Gunther had chosen to feature von Neumann’s invention of game theory or three-valued quantum logic I would have said fine, praised the writer for his knowledge and moved on without comment. Instead, however, our author dishes up one of the biggest myths in the history of the computer.

he went on to design the architecture underlying nearly every single computer built on the planet today. Right now, whatever device or computer that you are reading this on, be it phone or computer, is cycling through a series of basic steps billions of times over each second; steps that allow it to do things like render Internet articles and play videos and music, steps that were first thought up by John von Neumann.

Now any standard computer is said to have a von Neumann architecture because of a paper that von Neumann published in 1945, First Draft of a Report on the EDVAC. This paper described the architecture of the EDVAC, one of the earliest stored-program computers, but von Neumann was not responsible for the design; the team led by Eckert and Mauchly was. Von Neumann had merely described and analysed the architecture. His publication caused massive problems for the design team because, the information now being in the public realm, they were no longer able to patent their innovations. Also, von Neumann’s name as author on the report meant that people, including our author, falsely believed that he had designed the EDVAC. Of historical interest is the fact that Charles Babbage’s Analytical Engine in the nineteenth century already possessed a von Neumann architecture!
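What the report actually describes is the now-familiar stored-program scheme: instructions and data live in one shared memory, and a processor cycles through fetch, decode and execute. Here is a minimal sketch of my own (the instruction names are invented for illustration, not from the EDVAC report):

```python
def run(memory):
    """A toy stored-program machine: one accumulator, one program counter,
    instructions and data sharing a single memory."""
    acc, pc = 0, 0
    while True:
        op, arg = memory[pc]          # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":              # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# Program in cells 0-3, data in cells 4-6: add memory[4] to memory[5].
memory = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", 0),
          4: 2, 5: 3, 6: 0}
print(run(memory)[6])   # -> 5
```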

Unsurprisingly we walk straight into another couple of history of the computer myths when we turn to Alan Turing.  We start with the Enigma story:

During World War II, Turing bent his brain to the problem of breaking Nazi crypto-code and was the one to finally unravel messages protected by the infamous Enigma machine.

There were various versions of the Enigma machine and various codes used by the different branches of the German armed forces. The Polish Cipher Bureau were the first to break an Enigma code, in 1932. Various other forms of the Enigma codes were broken by various teams at Bletchley Park without Turing. Turing was responsible for cracking the German naval Enigma. The statement above denies credit to the Polish Cipher Bureau and to the other 9000 workers at Bletchley Park for their contributions to breaking Enigma.

Besides helping to stop Nazi Germany from achieving world domination, Alan Turing was instrumental in the development of the modern day computer. His design for a so-called “Turing machine” remains central to how computers operate today.

I’ve lost count of how many times that I’ve seen variations on the claim in the above paragraph in the last eighteen months or so, all equally incorrect. What such comments demonstrate is that their authors actually have no idea what a Turing machine is or how it relates to computer design.

In 1936 Alan Turing, a mathematician, published a paper entitled On Computable Numbers, with an Application to the Entscheidungsproblem. This was in fact one of four contemporaneous solutions offered to a problem in meta-mathematics first broached by David Hilbert, the Entscheidungsproblem. The other solutions, which needn’t concern us here, apart from the fact that Post’s solution is strongly similar to Turing’s, were from Kurt Gödel, Alonzo Church and Emil Post. Entscheidung is the German for decision, and the Entscheidungsproblem asks whether, for a given axiomatic system, it is possible with the help of an algorithm to decide if a given statement in that axiom system is true or false. The straightforward answer that all four men arrived at by different strategies is that it isn’t. There will always be undecidable statements within any sufficiently complex axiomatic system.

Turing’s solution to the Entscheidungsproblem is simple, elegant and ingenious. He hypothesised a very simple machine that was capable of reading a potentially infinite tape and following instructions encoded on that tape, instructions that moved the tape either right or left or simply stopped the whole process. Through this analogy Turing was able to show that within an axiomatic system some problems would never be entscheidbar, or in English decidable. What Turing’s work does is, on a very abstract level, to delineate the maximum computability of any automated calculating system. Only much later, in the 1950s, after the invention of electronic computers (a process in which Turing also played a role) did it occur to people to describe the computational abilities of real computers with the expression ‘Turing machine’. A Turing machine is not a design for a computer; it is a term used to describe the capabilities of a computer.
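The undecidability result itself can be gestured at in a few lines. What follows is a paraphrase of the standard diagonal argument, not Turing’s original construction, and all the names are mine: if a program `halts` could correctly decide halting for every program, the following adversary would defeat it.

```python
# Suppose, for contradiction, that halts(program, input) always returns
# True or False correctly. Then build the following adversary:
def make_adversary(halts):
    def adversary(program):
        if halts(program, program):   # if the decider says "it stops"...
            while True:               # ...loop forever,
                pass
        return "done"                 # ...otherwise stop at once.
    return adversary

# Feeding the adversary to itself forces `halts` to be wrong either way:
# if halts(adversary, adversary) returns True, adversary loops forever;
# if it returns False, adversary halts at once. So no such `halts` exists.
```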

To be quite open and honest, I don’t know enough about Benoit Mandelbrot and fractals to be able to say whether our author at least got that one right, so I’m going to cut him some slack and assume that he did. If he didn’t, I hope somebody who knows more about the subject than I do will provide the necessary corrections in the comments.

All of the errors listed above are errors that could have been easily avoided if the author of the article had cared in any way about historical accuracy and truth. However, as is all too often the case in the history of science, or in this case mathematics, people are prepared to dish up a collection of half-baked myths, misconceptions and, not to put too fine a point on it, crap, and think they are performing some sort of public service in doing so. Sometimes I despair.

 


Filed under History of Computing, History of Logic, History of Mathematics, History of Optics, History of Physics, History of science, Myths of Science, Newton

Killed by Homeopathy

The mathematician, philosopher and logician George Boole died on the 8th December 1864. What most people don’t realise is that he was in all probability killed by homeopathy.

In 1849 Boole, a self-taught mathematician and school master, was appointed Professor of Mathematics at the newly founded Queen’s College Cork, and it was here in 1850 that he first met Mary Everest, niece of the military surveyor Colonel George Everest, after whom the mountain is named, who was visiting another of her uncles, John Ryall, who was Professor of Greek at Cork. The family name, by the way, is pronounced Eve-rest and not Ever-rest. From 1852 on George became Mary’s maths tutor, and when her father died in 1855 the two of them married. Despite a fairly large difference in age it was a happy marriage that produced five rather special daughters, whom I might blog about another time.

Mary Everest Boole was a highly intelligent woman who after the death of her husband, she lived for another 52 years, would go on to become a noted educationalist who today is something of a feminist icon. She had, however, at least one fatal flaw. Mary’s father had been a devoted disciple of Samuel Hahnemann and she spent a large part of her childhood living in Hahnemann’s house in France where she too became an adherent of his medical philosophy.

The Booles lived outside Cork, and one day when walking home from work George got drenched in a downpour and developed a chill. Mary, following Hahnemann’s guiding principle that “like cures like”, wrapped her ailing husband in wet bed sheets. George developed pneumonia and died. This story is not based on hearsay or a popular myth but on the written testimony of one of their daughters, who never forgave her mother for having, in her opinion, killed her father.

The next time somebody tells you that homeopathy is harmless you can tell them that it killed one of the greatest mathematical minds of the nineteenth century on whose algebraic logic both the soft- and the hardware of your computer function.


Filed under History of Logic, History of Mathematics, History of science

Cantor Redux

I got criticised on my twitter stream for the Cantor article I posted yesterday. I was not called to order for being too harsh; @ianppreston criticised me, quite correctly, for not being harsh enough! As I don’t wish to create the impression that I’m becoming a wimp in my old age I thought I would give Ms Inglis-Arkell another brief kicking.

Strangely my major objection from yesterday has mysteriously disappeared from her post (did somebody tip her off that she was making a fool of herself?) but there remains enough ignorance and stupidity to amuse those with some knowledge of Cantorian set theory and transfinite arithmetic, knowledge, which Ms Inglis-Arkell apparently totally lacks.

Ms Inglis-Arkell’s dive into the depths of advanced mathematics starts so:

Imagine a thin line, almost a thread, stretching to infinity in both directions. It runs to the end of the universe. It is, in essence, infinite. Now look at the space all around it. That also runs to the end of the universe. It’s also infinite. Both are infinite, yes, but are they the same? Isn’t one infinity bigger than the other?

The answer to the second question is actually no! Cantor demonstrated, counter-intuitively, that the number of points on a straight line, the number of points in a square and the number of points in a cube are all infinite, all equal and all equal to ‘c’, the cardinality of the real numbers, which is also that of the power set of aleph-nought.
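The line-versus-square case can be sketched in a line or two (my compression of the standard argument, with the usual caveat about decimal expansions ending in repeated 9s): interleaving digits pairs each point of the unit square with a point of the unit interval.

```latex
% Pair (x, y) in the unit square with a single real z in the unit interval
% by interleaving decimal digits; modulo care with dual expansions, this
% yields the one-to-one correspondence Cantor needed.
(x, y) = \bigl(0.x_1 x_2 x_3 \ldots,\; 0.y_1 y_2 y_3 \ldots\bigr)
\;\longmapsto\;
z = 0.x_1 y_1 x_2 y_2 x_3 y_3 \ldots
```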

After defining the cardinality of the natural numbers as aleph-nought, Ms Inglis-Arkell then writes the following:

But then what about real numbers? Real numbers include rational numbers, and irrational numbers (like the square root of five), and integers. This has to be a greater infinite number than all the other infinite numbers.

These three sentences contain three serious errors, one implied and two explicit. The rational numbers include the integers, so to state them separately when describing the real numbers is either wrong or at best tautologous. Secondly, and this is the implicit mistake, a set consisting of the rational numbers and those irrational numbers that are also algebraic numbers, i.e. describable with an algebraic equation, for example x² = 2, is also a countably infinite set with cardinality aleph-nought; a sketch of the counting trick follows below. Only when one includes the so-called transcendental irrational numbers, those that cannot be described with an algebraic equation, for example the circle constant π, does the infinite set become larger than aleph-nought. This result is again extremely counter-intuitive, as very few transcendental numbers have ever been identified. The final error is very serious because the cardinal number of the real numbers, ‘c’ (for continuum), is by no means “a greater infinite number than all the other infinite numbers”.
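To see why such sets stay at aleph-nought, here is a sketch of the classic counting argument for the positive rationals (my own code, one of several standard ways to do it): walk the grid of numerator/denominator pairs diagonal by diagonal, so every fraction is reached after finitely many steps and can be numbered off against the natural numbers.

```python
from fractions import Fraction
from itertools import islice

def positive_rationals():
    """Enumerate every positive rational exactly once, diagonal by diagonal,
    which is precisely what it means for the set to be countable."""
    seen = set()
    n = 1
    while True:
        for p in range(1, n + 1):          # diagonal n: pairs with p + q == n + 1
            f = Fraction(p, n + 1 - p)
            if f not in seen:
                seen.add(f)
                yield f
        n += 1

print(list(islice(positive_rationals(), 8)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), ...]
```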

Cantor could demonstrate that the so-called power set of an infinite set, i.e. the set of all the subsets of the set, has a larger cardinality than the set itself. This newer set also has a larger power set, and so on ad infinitum. As stated above, c is equal to the cardinality of the power set of aleph-nought. There is in fact an infinite hierarchy of infinite sets, each one larger than its predecessor. One of the great mysteries of Cantorian set theory is where exactly c fits into this hierarchy. Cantor asked whether c is equal to aleph-one, where aleph-one is defined as the cardinality of the set of all countable ordinal numbers (1). He himself was not able to answer this question. It later turned out that it is in fact an undecidable question. In the axiomatic version of Cantorian set theory, the theory is consistent, i.e. free of contradictions, both when c is assumed to be equal to aleph-one and when they are assumed to be not equal. This produces two distinct set theories; the first, with c equal to aleph-one, is called Cantorian, the other non-Cantorian.
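The engine behind that infinite hierarchy is Cantor’s theorem, whose diagonal proof fits in two lines (a compressed sketch of the standard argument, not Cantor’s original wording):

```latex
% For any set S and any map f from S to its power set, the "diagonal" set
% D is missed by every f(s), so no such f can be onto: |P(S)| > |S|.
D = \{\, s \in S : s \notin f(s) \,\}
\quad\Longrightarrow\quad
D \neq f(s) \ \text{for every } s \in S,
\quad\text{hence}\quad
|\mathcal{P}(S)| > |S|.
```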

Although my sketch of Cantorian set theory and transfinite arithmetic is only very basic I hope I have said enough to show that it is really not a subject about which one should write if, as appears to be the case with Ms Inglis-Arkell, one doesn’t have the necessary knowledge.

(1) Going beyond this and explaining exactly what this means goes further than is healthy in normal life. For those who are curious I recommend Rudy Rucker’s Infinity and the Mind, Birkhäuser, 1982.

Later additions: I have corrected the mistakes kindly pointed out by Sniffnoy in the comments. Note to self: Turn brain on before skating on the thin ice of transfinite arithmetic.


Filed under History of Logic, History of Mathematics

The Cult of St Alan of Bletchley Park

I realise that to rail against anything published in the Daily Fail is about as effective as pissing against the wind in a force 8 gale, but this article on Alan Turing got so up my nose that I have decided to strap on my bother-boots of historical criticism and give the author a good kicking, if only to assuage my own frustration. It won’t do any good but it might make me feel better.

Before I start in on a not so subtle demolition job, I should point out that I’m actually a Turing fan who has read and absorbed Andrew Hodges’ excellent Turing biography[1] as well as many books and articles on and by Turing. I have seriously studied his legendary paper on the Entscheidungsproblem[2], a copy of which sits on my bookshelf and which I understand thoroughly, including its significance for Hilbert’s Programme, which the Mail’s journalist almost certainly does not. If I here seem to be seriously challenging Turing’s claims to scientific sainthood it is only in the interests of historical accuracy and not out of any sense of antipathy to the man himself, who would definitely be one of my heroes if I went in for them.

The Mail article opens with a real humdinger of a claim that is so wrong it’s laughable:

Own a laptop, a smartphone or an iPad? If so, you owe it to a man many of us have never heard of – a genius called Alan Turing. ‘He invented the digital world we live in today,’ says Turing’s biographer David Leavitt in a new Channel 4 drama-documentary about the brilliant mathematician.

Sorry folks, Alan Turing did not invent the digital world we live in today. In the 1930s Turing was one of several meta-mathematicians who laid the theoretical foundations for computability, and although his contributions were, viewed from a technical standpoint, brilliant, we would still have had the computer revolution if Alan Turing as an undergraduate had turned his undoubted talents to deciphering ancient Sumerian clay tablets instead of to solving meta-logical problems. The German computer pioneer Konrad Zuse designed and built functioning digital computers in the 1930s and 40s without, as far as I know, ever having heard of Turing. Zuse was an engineer and not a mathematician and approached the problem from a purely practical point of view. The American engineer Vannevar Bush built a highly advanced analogue computer, his Differential Analyser, to solve differential equations in 1927, when Turing was still at school. Claude Shannon, who laid the foundations of digital circuit design, was one of Bush’s graduate students. The three American groups that developed digital computers in the late 1930s and early 1940s, Atanasoff and Berry in Iowa, Aiken at Harvard, and Eckert and Mauchly in Pennsylvania, all referenced Bush when describing their motivations, saying they wished to construct an improved version of his Differential Analyser. As far as I know none of them had read Turing’s paper, which is not surprising as Turing himself claimed that in the 1930s only two people had responded to his paper. The modern computer industry mainly developed out of the work of these three American groups and not from anything produced by Turing.

Turing did do work on real digital computers at Bletchley Park in the 1940s, but this work was kept secret by the British government after the war and so had no influence on the civil development of computing in the 1950s and 60s. Turing, like many other computer pioneers from Bletchley, started again from scratch after the war, but due to their delayed start and underfunding they never really successfully competed with the Americans. We now turn to Bletchley Park and Turing’s contribution to the Allied war effort. The Mail writes:

Ironically, the same society that hounded him to his death owed its survival to him. For during the Second World War it was Turing who pioneered the cracking of Nazi military codes at Bletchley Park, allowing the Allies to anticipate every move the Germans made.

The first sentence is a reference to Turing’s suicide caused by his mistreatment as a homosexual, which I’m not going to discuss here other than to say that it’s a very black mark against my country and my countrymen. We now come to a piece of pure hagiography. The cracking of the German military codes was actually pioneered by the Poles before Bletchley Park even got in on the act. It should also be pointed out that Turing was one of nine thousand people working at Bletchley by the end of the war. Also he was only in charge of one team working on one of the codes in use, the naval Enigma; there were several other teams working on the other German codes. Turing was one cog in a vast machine, an important cog, but a long way from being the whole show. The Mail next addresses Turing’s famous paper:

‘While still a student at Cambridge he wrote a paper called Computing Machinery, in which all the developments of modern computer science are foretold. If you take an iPhone to pieces, all the parts in there were anticipated by Turing in the 1930s.’

He wasn’t a student but a postgraduate fellow of his college. The title of the paper is On Computable Numbers, with an Application to the Entscheidungsproblem. It outlines some of the developments of modern computing but not all and no he didn’t anticipate all of the parts of an iPhone. Apart from that the paragraph is correct.

Turing’s outstanding talents were recognised at the outbreak of war, when he was plucked from academic life at Cambridge to head the team at Bletchley Park, codenamed Station X. They were tasked with breaking the German codes, transmitted on complex devices called Enigma machines, which encrypted words into as many as 15 million million possible combinations.

‘Turing took one look at Enigma and said, “I can crack that,”’ says Sen. ‘And he did.’ Part of Turing’s method was to develop prototype computers to decipher the Enigma codes, enabling him to do in minutes what would take a team of scientists months to unravel. It was thanks to him that the movements of German U-boats could be tracked and the battle for control of the Atlantic was won, allowing supplies to reach Britain and saving us from starvation.

Turing was not “plucked from academic life”; interested in the mathematics of cryptology, Turing had started working for the Government Code and Cypher School as early as 1938 and joined the staff of Bletchley Park at the outbreak of war. The department he headed was called Hut 8. With reference to the computers developed at Bletchley, as I have already said in an earlier post, Turing was responsible for the design of a special single-purpose machine, the Bombe, which was actually a development of the earlier Polish machine, the Bomba, and had nothing to do with the much more advanced and better known Bletchley invention, the Colossus. The second paragraph is largely correct.

The life and work of Alan Turing and the role of Bletchley Park in the war effort are both important themes in the histories of science, mathematics and technology and can certainly be used as good examples on which to base popular history, but nobody is served by the type of ignorant, ill-informed rubbish propagated by the Daily Fail and obviously by the Channel 4 documentary that they are reporting on, which is being screened tonight. My recommendation: don’t watch it!

 


[1] Andrew Hodges, Alan Turing: The Enigma, Simon & Schuster, 1983.

[2] Alan M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 2nd ser., vol. 42 (1936–37), pp. 230–265.


Filed under History of Computing, History of Logic, History of Mathematics