# Category Archives: History of Logic

## Cantor Redux

I got criticised on my twitter stream for the Cantor article I posted yesterday. I was not called to order for being too harsh; @ianppreston criticised me, quite correctly, for not being harsh enough! As I don’t wish to create the impression that I’m becoming a wimp in my old age I thought I would give Ms Inglis-Arkell another brief kicking.

Strangely my major objection from yesterday has mysteriously disappeared from her post (did somebody tip her off that she was making a fool of herself?) but there remains enough ignorance and stupidity to amuse those with some knowledge of Cantorian set theory and transfinite arithmetic, knowledge that Ms Inglis-Arkell apparently totally lacks.

Ms Inglis-Arkell’s dive into the depths of advanced mathematics starts thus:

Imagine a thin line, almost a thread, stretching to infinity in both directions. It runs to the end of the universe. It is, in essence, infinite. Now look at the space all around it. That also runs to the end of the universe. It’s also infinite. Both are infinite, yes, but are they the same? Isn’t one infinity bigger than the other?

The answer to the second question is actually no! Cantor demonstrated, counter-intuitively, that the number of points on a straight line, the number of points in a square and the number of points in a cube are all infinite, all equal and all equal to ‘c’, the cardinality of the real numbers, which is also the cardinality of the power set of the natural numbers.
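The counter-intuitive pairing of the points of the square with the points of the line can be sketched by interleaving decimal digits. This is only a toy illustration of the idea, not the careful proof, which has to deal with expansions ending in repeating 9s; the function name is mine:

```python
def interleave(x: str, y: str) -> str:
    """Merge two decimal expansions digit by digit, pairing a point (x, y)
    of the unit square with a single point of the unit interval."""
    # x[2:] and y[2:] drop the leading "0." of each expansion
    return "0." + "".join(a + b for a, b in zip(x[2:], y[2:]))

print(interleave("0.123", "0.456"))  # 0.142536
```

Reading the even and odd digits of the result recovers the original pair, which is why the square contains “no more” points than the line.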

After defining the infinite number of natural numbers as aleph-nought Ms Inglis-Arkell then writes the following:

But then what about real numbers? Real numbers include rational numbers, and irrational numbers (like the square root of five), and integers. This has to be a greater infinite number than all the other infinite numbers.

These three sentences contain three serious errors, one implied and two explicit. The rational numbers include the integers, so to state them separately when describing the real numbers is either wrong or at best tautologous. Secondly, and this is the implicit mistake, the set consisting of the rational numbers together with those irrational numbers that are also algebraic, i.e. describable by an algebraic equation, for example x² = 2, is also a countably infinite set with cardinality aleph-nought. Only when one includes the so-called transcendental irrational numbers, those that cannot be described by an algebraic equation, for example the circle constant π, does the infinite set become larger than aleph-nought. This result is again extremely counter-intuitive, as very few transcendental numbers have ever been identified. The final error is very serious because the cardinal number of the real numbers ‘c’ (for continuum) is by no means “a greater infinite number than all the other infinite numbers”.
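That the rationals really are countable can be made concrete with Cantor’s zig-zag enumeration, which walks the grid of fractions diagonal by diagonal. A minimal Python sketch (the function name is mine):

```python
from fractions import Fraction
from itertools import islice

def rationals():
    """Yield every positive rational exactly once, diagonal by diagonal,
    showing they can be arranged in a single list, i.e. are countable."""
    seen = set()
    s = 2  # s = numerator + denominator; each diagonal is finite
    while True:
        for p in range(1, s):
            r = Fraction(p, s - p)
            if r not in seen:  # skip duplicates such as 2/4 == 1/2
                seen.add(r)
                yield r
        s += 1

print(list(islice(rationals(), 6)))
# [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3), Fraction(3, 1), Fraction(1, 4)]
```

Every fraction p/q sits on diagonal p + q, so each one is reached after finitely many steps: exactly what it means for the set to have cardinality aleph-nought.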

Cantor could demonstrate that the so-called power set of an infinite set, i.e. the set of all the subsets of the set, has a larger cardinality than the set itself. This new set also has a larger power set and so on ad infinitum. As stated above, c is equal to the cardinality of the power set of aleph-nought. There is in fact an infinite hierarchy of infinite sets, each one larger than its predecessor. One of the great mysteries of Cantorian set theory is where exactly c fits into this hierarchy. Cantor asked whether c is equal to aleph-one, where aleph-one is defined as the cardinality of the set of all countable ordinal numbers (1). He himself was not able to answer this question. It later turned out to be an undecidable question: in the axiomatic version of Cantorian set theory, the theory is consistent, i.e. free of contradictions, both when c is assumed to be equal to aleph-one and when they are assumed to be unequal. This produces two distinct set theories; the first, with c equal to aleph-one, is called Cantorian, the other non-Cantorian.
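For finite sets the strict growth of the power set is easy to check by brute force: a set of n elements has 2ⁿ subsets, always strictly more than n. A small sketch:

```python
from itertools import chain, combinations

def powerset(s):
    """Return all subsets of s; a set of n elements has 2**n of them."""
    s = list(s)
    return list(chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

for n in range(5):
    print(n, len(powerset(range(n))))  # prints 0 1, 1 2, 2 4, 3 8, 4 16
```

Cantor’s theorem is the statement that this strict inequality survives the jump to infinite sets, which is what generates the endless hierarchy of ever larger infinities.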

Although my sketch of Cantorian set theory and transfinite arithmetic is only very basic I hope I have said enough to show that it is really not a subject about which one should write if, as appears to be the case with Ms Inglis-Arkell, one doesn’t have the necessary knowledge.

(1) Going beyond this and explaining exactly what this means goes further than is healthy in normal life. For those who are curious I recommend Rudy Rucker’s Infinity and the Mind, Birkhäuser, 1982.

Later additions: I have corrected the mistakes kindly pointed out by Sniffnoy in the comments. Note to self: Turn brain on before skating on the thin ice of transfinite arithmetic.

Filed under History of Logic, History of Mathematics

## The Cult of St Alan of Bletchley Park

I realise that to rail against anything published in the Daily Fail is about as effective as pissing against the wind in a force 8 gale, but this article on Alan Turing got so up my nose that I have decided to strap on my bovver-boots of historical criticism and give the author a good kicking, if only to assuage my own frustration. It won’t do any good but it might make me feel better.

Before I start in on a not so subtle demolition job, I should point out that I’m actually a Turing fan who has read and absorbed Andrew Hodges’ excellent Turing biography[1] as well as many books and articles on and by Turing. I have seriously studied his legendary paper on the Entscheidungsproblem[2], a copy of which sits on my bookshelf, and I understand it thoroughly, including its significance for Hilbert’s Programme, which the Mail’s journalist almost certainly does not. If I here seem to be seriously challenging Turing’s claims to scientific sainthood it is only in the interests of historical accuracy and not out of any sense of antipathy to the man himself, who would definitely be one of my heroes if I went in for them.

The Mail article opens with a real humdinger of a claim that is so wrong it’s laughable:

Own a laptop, a smartphone or an iPad? If so, you owe it to a man many of us have never heard of – a genius called Alan Turing. ‘He invented the digital world we live in today,’ says Turing’s biographer David Leavitt in a new Channel 4 drama-documentary about the brilliant mathematician.

Sorry folks, Alan Turing did not invent the digital world we live in today. In the 1930s Turing was one of several meta-mathematicians who laid the theoretical foundations for computability, and although his contributions were, viewed from a technical standpoint, brilliant, we would still have had the computer revolution if Alan Turing as an undergraduate had turned his undoubted talents to deciphering ancient Sumerian clay tablets instead of to solving meta-logical problems. The German computer pioneer Konrad Zuse designed and built functioning digital computers in the 1930s and 40s without, as far as I know, ever having heard of Turing. Zuse was an engineer and not a mathematician and approached the problem from a purely practical point of view. The American engineer Vannevar Bush built a highly advanced analogue computer, his Differential Analyser, to solve differential equations in 1927, when Turing was still at school. Claude Shannon, who laid the foundations of digital circuit design, was one of Bush’s graduate students. All three American groups that developed digital computers in the late 1930s and early 1940s (Atanasoff and Berry in Iowa, Aiken at Harvard, and Eckert and Mauchly in Pennsylvania) referenced Bush when describing their motivations, saying they wished to construct an improved version of his Differential Analyser. As far as I know none of them had read Turing’s paper, which is not surprising as Turing himself claimed that in the 1930s only two people had responded to it. The modern computer industry mainly developed out of the work of these three American groups and not from anything produced by Turing.

Turing did do work on real digital computers at Bletchley Park in the 1940s but this work was kept secret by the British government after the war and so had no influence on the civil development of computing in the 1950s and 60s. Turing, like many of the other computer pioneers from Bletchley, started again from scratch after the war but, due to their delayed start and underfunding, they never really successfully competed with the Americans. We now turn to Bletchley Park and Turing’s contribution to the Allied war effort. The Mail writes:

Ironically, the same society that hounded him to his death owed its survival to him. For during the Second World War it was Turing who pioneered the cracking of Nazi military codes at Bletchley Park, allowing the Allies to anticipate every move the Germans made.

The first sentence is a reference to Turing’s suicide caused by his mistreatment as a homosexual, which I’m not going to discuss here other than to say that it’s a very black mark against my country and my countrymen. We now come to a piece of pure hagiography. The cracking of the German military codes was actually pioneered by the Poles before Bletchley Park even got in on the act. It should also be pointed out that Turing was one of nine thousand people working at Bletchley by the end of the war. Also he was only in charge of one team working on one of the codes in use, the naval Enigma; there were several other teams working on the other German codes. Turing was one cog in a vast machine, an important cog but a long way from being the whole show. The Mail next addresses Turing’s famous paper:

‘While still a student at Cambridge he wrote a paper called Computing Machinery, in which all the developments of modern computer science are foretold. If you take an iPhone to pieces, all the parts in there were anticipated by Turing in the 1930s.’

He wasn’t a student but a postgraduate fellow of his college. The title of the paper is On Computable Numbers, with an Application to the Entscheidungsproblem. It outlines some of the developments of modern computing but not all and no he didn’t anticipate all of the parts of an iPhone. Apart from that the paragraph is correct.

Turing’s outstanding talents were recognised at the outbreak of war, when he was plucked from academic life at Cambridge to head the team at Bletchley Park, codenamed Station X. They were tasked with breaking the German codes, transmitted on complex devices called Enigma machines, which encrypted words into as many as 15 million million possible combinations.

‘Turing took one look at Enigma and said, “I can crack that,”’ says Sen. ‘And he did.’ Part of Turing’s method was to develop prototype computers to decipher the Enigma codes, enabling him to do in minutes what would take a team of scientists months to unravel. It was thanks to him that the movements of German U-boats could be tracked and the battle for control of the Atlantic was won, allowing supplies to reach Britain and saving us from starvation.

Turing was not “plucked from academic life”; interested in the mathematics of cryptology, Turing had already started working for the Government Code and Cypher School in 1938 and joined the staff of Bletchley Park at the outbreak of war. The department he headed was called Hut 8. With reference to the computers developed at Bletchley, as I have already said in an earlier post, Turing was responsible for the design of a special single-purpose computer, the Bombe, which was actually a development of the earlier Polish machine, the Bomba, and had nothing to do with the much more advanced and better known Bletchley invention, the Colossus. The second paragraph is largely correct.

The life and work of Alan Turing and the role of Bletchley Park in the war effort are both important themes in the histories of science, mathematics and technology and can certainly be used as good examples on which to base popular history, but nobody is served by the type of ignorant, ill-informed rubbish propagated by the Daily Fail and obviously by the Channel 4 documentary that they are reporting on, which is being screened tonight. My recommendation: don’t watch it!

[1] Andrew Hodges, Alan Turing: The Enigma, Simon & Schuster, 1983.

[2] Alan M. Turing, On Computable Numbers, with an Application to the Entscheidungsproblem, Proceedings of the London Mathematical Society, 2nd series, vol. 42 (1936–37), pp. 230–265.

## Augustus De Morgan

As I have probably mentioned more than once, I served my apprenticeship as a historian of science working in a research project on the history of formal or symbolic logic. My special area within the project was British logical algebra in the 19th century and it was here that I took a long, deep look at Augustus De Morgan, who was born in Madurai in the Madras Presidency, an administrative sub-division of British India, on 27th June 1806. De Morgan was a brilliantly eclectic polymath with a Pythonesque sense of humour who, both in his personality and in his appearance, seemed to have sprung out of Charles Dickens’ Pickwick Papers, a mathematical second cousin to Sam Weller. De Morgan is my favourite Victorian.

Augustus De Morgan Source: Wikimedia Commons

The son of an army officer in the service of the East India Company, he moved to England when only seven months old. At the age of sixteen he went up to Trinity College Cambridge, where he quickly became part of the circle around George Peacock and William Whewell, who would stimulate his lifelong interest in mathematics and logic. In 1826 he graduated 4th Wrangler in the mathematical tripos but, already a convinced Unitarian, he refused to sign the religious declaration required at Oxbridge in those days to graduate MA and so was not eligible for the fellowship for which he would normally have been destined. He went instead to London to study for the bar. However, he found law boring and at the age of 21, with no publication to his name, he applied for the chair of mathematics at the newly founded University College London. This new university had been founded by a group of social reformers who felt that a university education should be open to all, whatever their religious beliefs might be, Oxbridge being open only to confirmed Anglicans. Despite his youth and lack of experience De Morgan was appointed University College’s first professor of mathematics in 1828. He resigned the post only three years later on a matter of principle but was reappointed in 1836 and remained professor until 1866, when he again resigned on another matter of principle.

That De Morgan should be identified with an institution of social reform was not a matter of chance, and social reform defined much of his life. He became professor of mathematics at the newly founded Queen’s College, an institution of higher education for women founded by Frederick Denison Maurice. Most notably he was a highly active member of the Society for the Diffusion of Useful Knowledge, an organisation dedicated to making scientific and other knowledge available in cheap, clear and concise printed versions written by the best authors. De Morgan was the most prolific of all the SDUK authors and wrote and published books and articles on a bewildering range of topics. Another of his social reformer contacts was the Unitarian William Frend, whose daughter Sophia would become De Morgan’s wife.

De Morgan devoted part of his academic efforts to the reform and modernisation of formal logic, a subject that had been in a sort of coma in England for about three hundred years before being awakened from its slumbers by Richard Whately at the beginning of the 19th century. De Morgan, who worked in the traditional syllogistic Aristotelian logic, introduced the concept of quantification of the predicate, enabling logical conclusions not possible in the traditional logic. This invention led to a bitter dispute with the Scottish philosopher Sir William Hamilton (not to be confused with the Irish mathematician Sir William Hamilton, a good friend of De Morgan’s), who claimed priority for this logical discovery. This dispute attracted the attention of another mathematician, George Boole, who, stimulated by the discussion, developed his algebraic logic. Boole and De Morgan were not only both disciples of the algebraic innovations of George Peacock and logical pioneers but shared a Unitarian religious outlook and became lifelong friends. De Morgan was especially proud of the fact that his Formal Logic and Boole’s Mathematical Analysis of Logic were published on the same day in 1847, introducing, in his opinion, a new age in logic. In reality De Morgan was mistaken, as the two books were published about a week apart. Although De Morgan’s logical work was by no means as innovative as Boole’s, he was the first modern logician to work on the logic of relations, an area that was later developed by Charles Sanders Peirce in America and Ernst Schröder in Germany, both of whom were great admirers of De Morgan.

De Morgan made significant contributions to many areas of mathematics but his principal achievements were in trigonometry and in abstract algebra. His most lasting contribution was the formalisation of the principle of mathematical induction, an important tool in mathematical proof theory, to which he also contributed the name. Strangely he is best remembered today for De Morgan’s Laws. This is peculiar because the laws were not discovered by De Morgan but had been known both to Aristotle and to the mediaeval logicians; De Morgan merely made them better known. The laws are fairly trivial, “not (A or B) is equal to not A and not B” and “not (A and B) is equal to not A or not B”, but very useful in deductive logical proofs.
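Trivial or not, the laws can be verified mechanically by checking all four truth assignments; a minimal sketch:

```python
from itertools import product

# Check both De Morgan laws over every assignment of A and B
for a, b in product([False, True], repeat=2):
    # not (A or B)  ==  (not A) and (not B)
    assert (not (a or b)) == ((not a) and (not b))
    # not (A and B) ==  (not A) or (not B)
    assert (not (a and b)) == ((not a) or (not b))
print("De Morgan's laws hold for all truth assignments")
```

It is exactly this kind of exhaustive truth-table check that makes the laws so useful in deductive proofs: they let one push negations inward purely mechanically.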

De Morgan also made important contributions to the history of science. The Scottish physicist David Brewster wrote and published the first modern English biography of Isaac Newton, largely as a reaction to the English translation of the biography by the French physicist Jean-Baptiste Biot, which had been published by the SDUK. De Morgan didn’t like what he saw as Brewster’s Newton hagiography and wrote and published a series of biographical pamphlets on Newton, correcting what he saw as Brewster’s errors. This led to a literary dispute between the two men, with both of them digging deeper and deeper into the original sources, Newton’s letters, papers, notebooks etc., in order to prove the correctness of their Newton picture. This development led scientific biography away from literary hagiography towards modern historiography. For the full details of these developments I recommend the very readable account by Rebekah Higgitt in her excellent Recreating Newton.

De Morgan also wrote and published his Arithmetical Books in which he discussed the work of over 1500 authors on the subject. This book is still regarded as an important source in the history of mathematics.

I said that De Morgan had a Pythonesque sense of humour and his letters, papers and notebooks are full of wonderful whimsies. His most famous book is his Budget of Paradoxes.

De Morgan collected the written products of circle squarers and other mathematical fools, whom he then exposed to ridicule in a series of newspaper articles. These were collected in a book and published after his death. This gem is still in print and is a secret tip amongst philosophers, mathematicians and logicians.

Unlike many of his friends and contemporaries De Morgan was not very active in the numerous scientific societies that flourished in the 19th century. He refused membership of the Royal Society on grounds of principle because he saw it as an elitist organisation. The only society of which he was a member was the Astronomical Society. However, when his son George, like his father a gifted mathematician, founded the London Mathematical Society, De Morgan became its first President.

De Morgan was a fascinating and stimulating polymath who certainly deserves to be better known than he is. One way you can do that is by getting hold of a copy of the very readable Memoir of Augustus De Morgan by his wife Sophia Elizabeth De Morgan.

## Ich bin ein Gastbloggerin: A special post for International Women’s Day.

My fellow guest blogger Penny Richards wrote in her post on Joyce Kaufman:

Although Johns Hopkins didn’t welcome women students in those days

To celebrate International Women’s Day I thought I would draw the readers’ attention to another, earlier, woman scientist who suffered under the negative attitude to women of Johns Hopkins University. To find out who, go here.


Filed under History of Logic, History of Mathematics

## Bertie’s Dream

Today’s birthday boy is a mathematician but he is not from the Renaissance and anything but obscure; he is Bertrand Arthur William, 3rd Earl Russell, known to the world of academia as Mr Russell and to his friends as Bertie. An intellectual giant who straddled the 20th century, he was born on 18th May 1872 and died, aged 97, on 2nd February 1970. His Wikipedia article lists him as philosopher, logician, mathematician, historian, socialist, pacifist and social critic, missing out popular author and educationalist; Russell left an intellectual heritage that touched almost all areas of human existence. To produce a potted biography of Russell here would be literally impossible and I don’t intend to try; I will just make a few remarks about Russell the logician.

I paid my dues as a historian of science working in a research project on the history of formal logic, and Russell, of course, loomed large on the horizon no matter in which direction one looked as a logic historian. A mathematical graduate of Cambridge University, Russell was inspired by a meeting with the Italian mathematician Giuseppe Peano at the International Philosophy Conference in Paris in 1900 and set out to deduce the whole of mathematics from the axioms of formal logic in order to avoid the foundational crisis into which mathematics had been plunged by the set theoretical antinomies generated by Cantorian set theory. Unbeknown to Russell, at that point the German mathematician Gottlob Frege was already engaged in the same project. Unfortunately Russell torpedoed both his own and Frege’s efforts with the discovery of the so-called Russell’s paradox in 1901. Russell’s own presentation of the paradox in simple language is the question: in a village where the barber shaves all of those, and only those, who do not shave themselves, who shaves the barber? If the barber shaves himself then he doesn’t shave himself; however, if he doesn’t shave himself then he, as the barber, must shave himself. Applied to infinite set theory the paradox torpedoes all attempts to define numbers in terms of sets, the basis of both Russell’s and Frege’s work. Russell’s solution to the problem was type theory, just one of the monuments that he raised to himself in his long and fruitful life.
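The contradiction in the barber version can even be checked mechanically: the village rule, applied to the barber himself, demands that “shaves himself” be equivalent to “does not shave himself”, and no truth value satisfies that. A toy sketch:

```python
# s stands for "the barber shaves himself". The village rule, applied to
# the barber, requires s to be equivalent to (not s).
satisfying = [s for s in (False, True) if s == (not s)]
print(satisfying)  # [] : no assignment works, so no such barber can exist
```

The set-theoretic version, the set of all sets that are not members of themselves, has exactly the same shape, which is why it sank the naive comprehension principle underlying both Russell’s and Frege’s programmes.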

Russell presented his logically founded mathematics together with Alfred North Whitehead, who was actually the principal author, in the three-volume Principia Mathematica (1910, 1912, 1913). The Cambridge University Press was convinced that the book would be a financial flop, set the price accordingly and printed only 750 copies. However, the first edition sold out and they actually made a profit on what is probably the most unreadable ‘best seller’ of all time. A second edition was issued in 1925. Although probably inferior to Frege’s own Grundgesetze der Arithmetik, PM set the standard for symbolic mathematical logic because Frege’s two-dimensional Begriffsschrift was regarded as incomprehensible by most readers.

Somewhere in his autobiographical writings Russell tells the following story about PM. He had a recurring dream that takes place at some undefined point in the future. He is in a library and a librarian is walking along the stacks, selecting books from the shelves that are to be discarded and throwing them into a bucket. As Russell watches, the librarian reaches the last copy of PM in the world and stops; at this point Russell wakes from his dream…

All fame is transitory.

Filed under History of Logic, History of Mathematics, Uncategorized

## Riffing on an Algorithm

The estimable Tom Levenson, author of the excellent Newton and the Counterfeiter, has a piece at The Inverse Square Blog on what he sees as the decline of the Atlantic Monthly; not a subject that would normally cause me to comment here, but he includes a justified dismissal of the Atlantic’s abuse of the word algorithm. Now algorithm is a word that truly belongs here, so I have decided to expend some thoughts on its meaning, its origins and its historical development.

Tom offers up the following working definition for algorithm:

…algorithms involve at a minimum, explicit instructions that can be carried out by a person or a machine which specify operations iterated through a sequence of steps, and produce an unambiguous correct answer (for a certain value of “correct”) within a finite time.

Put simply, an algorithm is a recipe for solving problems in the formal sciences, i.e. mathematics, formal logic and computer science. The recipe consists of a finite set of instructions that, if carried out in the prescribed order, guarantee a solution for the given problem.
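A classic example of such a recipe, and one far older than the word itself, is Euclid’s algorithm for the greatest common divisor: finitely many fully prescribed steps with a guaranteed answer in finite time. A minimal sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, fully prescribed recipe."""
    while b != 0:
        a, b = b, a % b  # the remainder strictly shrinks, so the loop ends
    return a

print(gcd(1071, 462))  # 21
```

Every feature of Tom’s working definition is on display: explicit instructions, iteration through a sequence of steps, and a guaranteed, unambiguous answer.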

Where does the term algorithm come from? It is actually a linguistic corruption of the name of the Uzbek polymath Abū ʿAbdallāh Muḥammad ibn Mūsā al-Khwārizmī. Before I continue I should point out that almost all Islamic scholars of this period are polymaths and that al-Khwārizmī’s ethnicity is disputed, as is the ethnicity of almost all Islamic scholars of this period. Now al-Khwārizmī is most famous as the author of al-Kitāb al-mukhtaṣar fī ḥisāb al-ğabr wa’l-muqābala, the book that introduced the mathematical discipline of algebra into mediaeval Europe and also gave it its name. Algebra is a corruption of al-ğabr, which can be translated as ‘set together’, and thus leads to an ‘algebraist’ in Spanish being a medical bonesetter. al-Khwārizmī also wrote a book introducing the Indian decimal place-value number system of Brahmagupta into Islamic culture and then, in translation, into Europe. In the Latin translation al-Khwārizmī becomes Algoritmi, which in turn becomes algorism and algorithm.

The first of these terms, algorism, was the name given to learning how to calculate using the so-called Hindu-Arabic number system at the mediaeval universities and also to the textbooks used; famous and much used algorisms were written by Sacrobosco and Robert Grosseteste. Algorism was taught as a subsidiary to computus, the science of computing the date of Easter, a central part of the mathematics instruction at the mediaeval universities. Later, by transference, the word algorithm came to designate the methods of calculation used with the new number system, which are in fact algorithms in the modern sense of the word.

Although the term itself is a product of the thirteenth century, algorithmic mathematics is much older and in fact represents the origins of the subject. In all early mathematical cultures, Babylon, Egypt, India and China, mathematics was presented as a collection of recipes for the solution of specific types of problem. The concept of mathematical proof did not exist and neither did any concept of generalisation. It is these developments that make Greek mathematics in antiquity so important, as it was the Greek mathematicians who first developed the idea of proof and of a systematic presentation of an entire mathematical discipline.

In more recent times the term algorithm has become central to mathematical logic and computer science, as Post, Church, Turing and others all worked on Hilbert’s so-called Entscheidungsproblem. Entscheidung is the German for decision, and Hilbert posed the question whether every mathematical problem is decidable, i.e. whether there is a finite series of predetermined steps that will lead to a solution of a given problem; in other words, an algorithm. The answer produced by the three meta-logicians (meta-mathematicians) named above is no, pushing the humble algorithm into centre stage at the junction of mathematics, logic, computing and philosophy.

## One of my worst academic puns!

The Aussie Anthropoid has posted a lovely quote from William Stanley Jevons, who explains why the concept ‘essence’ should be banished from philosophical discourse. Jevons was a 19th century philosopher of science, economist and logician. Now I paid my dues as a historian of science working in a research project on the history of formal logic and my special area was the British algebra of logic, with Jevons as one of my subjects. Jevons’ logic was a modified form of the algebraic logic that George Boole had developed in his Mathematical Analysis of Logic (1847) and his Laws of Thought (1854). Nowadays everybody knows Boole’s name because of the pervasiveness of Boolean algebra in the world of computers. However there is a famous paper by Theodore Hailperin (famous amongst historians of logic, that is) with the title Boole’s algebra isn’t Boolean algebra, which points out that Boolean algebra is the modified algebra of Jevons and not the original from Boole. Essentially, Jevons replaced Boole’s exclusive ‘or’ (aut in Latin, for the logic experts), i.e. A or B but not ‘A and B’, with the inclusive ‘or’ (vel in Latin), i.e. A or B or ‘A and B’, thus making the De Morgan laws valid in the algebra.
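Hailperin’s point can be illustrated with a quick truth-table check: with the inclusive ‘or’ the De Morgan law “not (A or B) equals not A and not B” holds for every assignment, while with the exclusive ‘or’ it fails when A and B are both true. A sketch (the helper names are mine):

```python
from itertools import product

incl_or = lambda a, b: a or b   # Jevons' inclusive 'or' (vel)
excl_or = lambda a, b: a != b   # Boole's exclusive 'or' (aut)

def law_holds(or_op):
    """Does not(A or B) == (not A) and (not B) hold for every assignment?"""
    return all((not or_op(a, b)) == ((not a) and (not b))
               for a, b in product([False, True], repeat=2))

print(law_holds(incl_or))  # True
print(law_holds(excl_or))  # False: fails at A = B = True
```

With A and B both true, the exclusive ‘or’ makes “A or B” false, so its negation is true, while “not A and not B” is false: precisely the gap Jevons’ modification closes.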

Now I once gave a public lecture, in a series on the history of the computer, about Boolean algebra and the life and work of George Boole. In due course I explained Hailperin’s point and went on to say that “the two-valued algebra of classes that I have been discussing should not be called Boolean but, by rights, Jevonian, but that sounds more like a geological period than a mathematical discipline!”

I think my professor was the only person of the fifty people present who got the joke.