Category Archives: History of Logic

Martin Davis (1928–2023)

As I have mentioned more than once on this blog, I served my apprenticeship as a historian of science working for ten years in a major research project into the external history of formal or mathematical logic. During the semester, we held a weekly research seminar in which one or more of the members of the project would present a talk on the current state of their research. These seminars were held in the early evening and afterwards we would all go for a meal at a local Italian restaurant. I think I probably learnt more through the discussions during those meals than through any other part of my life as a student.

From time to time those research seminars would be graced by a guest lecture from a visiting historian. Over the years I got to hear lectures by many of the world’s leading historians of logic and mathematics. More important was being able to talk informally with these luminaries of the discipline during those post-seminar meals.

Martin Davis

One of those guest lecturers was Martin Davis, not only one of the best historians of twentieth century meta-logic but also a world class logician in his own right, who died on 1 January. He gave an excellent lecture on the American logician Emil Post (1897–1954), who published a paper in 1936 giving a solution to the Entscheidungsproblem almost identical to Turing’s.

It is the meal after the lecture that will forever remain in my memory. Martin was travelling through Europe with his wife and the two of them were incredibly friendly and delightful dinner companions.

Martin & Virginia Davis

Two things are particularly present in my mind. The first is being in full flow in my inimitable style answering a question that Martin’s wife Virginia had posed about something in logic, when I suddenly realised that I, a mere student, albeit a mature one, was sitting between two of the world’s leading historians of logic, who were listening intently to what I was saying. Feeling somewhat more than flustered, I somehow managed to finish what I was saying. Nobody said anything negative.

The other was an incredible display of generosity from Martin. Amongst his publications was a book that he edited called The Undecidable, a collection of the original papers on the topic from Post, Turing, Gödel et al. During the course of the meal, I asked Martin if there were plans to republish it, as it was out of print and I couldn’t get hold of a copy. He asked what I was doing after the meal and if I had time to accompany him back to his hotel. I said yes. When we got there, he gave me his personal copy of The Undecidable. My mind was blown.

The book has now been republished

Many years later I gave it to my professor, Christian Thiel, who has a very impressive collection of logic books. I was sad when I read of Martin’s death yesterday, remembering a very kind and friendly man, who once did a student a very generous favour.

5 Comments

Filed under Autobiographical, History of Logic

Charles not Ada, Charles not Charles and Ada, just Charles…

There is an old saying in English, “if you’ve got an itch, scratch it!” A medically more correct piece of advice is offered, usually by mothers in a loud stern voice, “Don’t scratch!” I have had an itch since the start of December and have been manfully trying to heed the wise words of mother but have finally cracked and am going to have a bloody good scratch.

I actually don’t wish to dump on Lady Science, which I regard as a usually excellent website promoting the role of women in science, particularly in the history of science, but the essay, Before Lovelace, that they posted on 3 December 2020 is so full of errors concerning Ada Lovelace and Charles Babbage that I simply cannot ignore it. In and of itself, the main point, that the concept of the algorithm exists in many fields and did so long before the invention of the computer, is interesting and of course correct. In fact, it is a trivial point, trivial in the sense of simple and obvious. An algorithm is just a finite, step-by-step procedure to complete a task or solve a problem, a recipe!
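To make the recipe metaphor concrete, here is a minimal sketch of my own, not taken from the essay, of one of the oldest algorithms on record, Euclid’s procedure for finding the greatest common divisor, a finite, step-by-step procedure in exactly the sense just described:

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite, step-by-step recipe.

    Repeatedly replace the pair (a, b) with (b, a mod b);
    when the remainder reaches zero, the value left in a
    is the greatest common divisor.
    """
    while b != 0:
        a, b = b, a % b
    return a

print(euclid_gcd(48, 36))  # → 12
```

Euclid wrote this recipe down well over two thousand years before anybody built a computer to run it on, which is rather the essay’s point.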

My objections concern the wildly inaccurate claims about the respective roles of Charles Babbage and Ada Lovelace in the story of the Analytical Engine. Let us examine those claims; the essay opens as follows:

Charles Babbage and Ada Lovelace loom large in the history of computing. These famous 19th-century figures are consistently cited as the origin points for the modern day computer: Babbage hailed as the “father of computing” and Lovelace as the “first computer programmer” Babbage was a mathematician, inventor, and engineer, famous for his lavish parties and his curmudgeonly attitude. Lady Augusta Ada King, Countess of Lovelace was a mathematician and scientist, introduced to Babbage when she was a teenager. The two developed a long professional relationship, which included their collaborative work on a machine called the Analytical Engine, a design for the first mechanical, programmable computer.

They might be cited as the origin points of the modern-day computer, but such claims are historically wrong. For all of Babbage’s ingenuity in the design and conception of his mechanical, programmable calculating machines they played absolutely no role in and had no influence on the later development of the computer in the twentieth century. They were and remain an interesting historical anomaly. Regular readers of this blog will know that I reject the use of the expression “the father of” for anything in #histSTM and that for very good reasons. They will also know that I reject Ada Lovelace being called the “first computer programmer” for the very simple reason that she wasn’t. (See addendum below) I am of the opinion that Ada Lovelace was not a mathematician in any meaningful sense of the word, and she was in absolutely no way a scientist. Ada Lovelace and Charles Babbage did not have a long professional relationship and did not collaborate on the design of the Analytical Engine, which was entirely the work of Charles Babbage alone, and in which Ada Lovelace played absolutely no part. Assigning co-authorship and co-development to Ada Lovelace for Babbage’s work is no different to saying that a journalist, who interviews a scientist about his research work and then writes a puff piece about it, is the scientist’s co-researcher! The train-wreck continues:

Much of what we know about the Analytical Engine comes from Lovelace’s paper on the machine. In 1842, she published “A Sketch of the Analytical Engine, with notes by the Translator,” a translation of an earlier article by mathematician Luigi Menabrea. Lovelace’s English translation of Menabrea’s article included her own extended appendix in which she elaborated on the machine’s design and proposed several early computer programs. Her notes were instrumental for Alan Turing’s work on the first modern computer in the 1930s. His work would later provide the basis for the Colossus computer, the world’s first large-scale programmable, electronic, digital computer, developed to assist with cryptography work during World War II. Machines like the Colossus were the precursors to the computers we carry around today in our pockets and our backpacks.

We actually know far more about the Analytical Engine from Babbage’s biography (see footnote 1) and his own extensive papers on it, which were collected and published by his son Henry, Babbage’s Calculating Engines: Being a Collection of Papers Relating to Them; Their History and Construction, Charles Babbage, Edited by Henry P. Babbage, CUP, 1889. The notes to the translation, which the author calls an appendix, we know to have been co-authored by Babbage and Lovelace and not, as here stated, written by Lovelace alone. There is only one computer program in the notes and that we know to have been written by Babbage and not Lovelace. (See addendum below) Her notes played absolutely no role whatsoever in Turing’s work in the 1930s, which was not on the first modern computer but on a problem in metamathematics, known as the Entscheidungsproblem (English: decision problem). Turing discussed one part of the notes in his paper on artificial intelligence, Computing Machinery and Intelligence, (Mind, October 1950). Turing’s 1930s work had nothing to do with the design of the Colossus, although his work on the use of probability in cryptanalysis did. Colossus was designed and built by Tommy Flowers, who generally gets far too little credit for his pioneering work in computers. The Colossus played no role in the future development of computers because the British government dismantled or hid all of the Colossus computers from Bletchley Park after the war and closed access to the information on the Colossus for thirty years under the Official Secrets Act. We are not done yet:

With Babbage and Lovelace’s work as the foundation and the Turing Machine as the next step toward what we now think of as computers…

Babbage’s work, not Babbage’s and Lovelace’s, was not, as already stated above, the foundation and the Turing Machine was very definitely not the next step towards what we now think of as the computer. I really do wish that people would take the trouble to find out what a Turing Machine really is. It’s an abstract metamathematical concept that is useful for describing, on an abstract level, how a computer works and for defining the computing power or capabilities of a given computer. It played no role in the development of real computers in the 1940s and wasn’t even referenced in the computer industry before the 1950s at the very earliest. Small tip for future authors, if you are going to write about the history of the computer, it pays to learn something about that history before you start. We are approaching the finish line:

One part of the history of computing that is much less familiar is the role the textile industry played in Babbage and Lovelace’s plans for the Analytical Engine. In a key line from Lovelace’s publication, she observes, “we may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves.” The Jacquard Loom was a mechanical weaving system controlled by a chain of punched cards. The punched cards were fed into the weaving loom and dictated which threads were activated as the machine wove each row. The result was an intricate textile pattern that had been “programmed” by the punch cards.

Impressed by the ingenuity of this automation system, Babbage and Lovelace used punched cards as the processing input for the Analytical Engine. The punched cards, Lovelace explains in her notes, contain “the impress of whatever special function we may desire to develop or to tabulate” using the machine.

Why is it that so many authors use ‘less familiar’ or ‘less well known’ about things that are very well known to those who take an interest in the given topic? For those who take an interest in Babbage and his computers, the fact that he borrowed the punch card concept from Jacquard’s mechanical, silk weaving loom is very well known. Once again, I must emphasise: Babbage, and not Babbage and Lovelace. He adopted the idea of using punch cards to program the Analytical Engine entirely alone; Ada Lovelace was not in any way involved in this decision.

Itch now successfully scratched! As I said at the beginning, the rest of the essay makes some interesting points and is well worth a read, but I really do wish she had done some real research before writing the totally crap introduction.
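A brief aside for readers who have never actually met a Turing Machine: the abstract concept I grumbled about above really can be captured in a few lines of code. What follows is purely my own illustration, nothing from the essay under discussion; the “machine” is just a table of rules acting on symbols on a tape, here a table that adds one to a binary number:

```python
def run_turing_machine(tape, rules, state="inc", blank="_"):
    """A bare-bones Turing machine: a rule table mapping
    (state, symbol) -> (symbol to write, head move, next state)."""
    cells = dict(enumerate(tape))
    head = len(cells) - 1          # start at the rightmost digit
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rule table to add 1 to a binary number, working right to left.
increment = {
    ("inc", "1"): ("0", "L", "inc"),   # 1 plus carry -> 0, carry on
    ("inc", "0"): ("1", "N", "halt"),  # absorb the carry and stop
    ("inc", "_"): ("1", "N", "halt"),  # ran off the left end
}

print(run_turing_machine("1011", increment))  # 1011 + 1 = 1100
```

The point is that the “machine” is nothing but the rule table: it is an abstract description of computation, useful for defining computing power, and not a blueprint from which anyone ever built a real computer.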

Addendum:

I have pointed out on numerous occasions that it was Babbage who wrote the program for the Analytical Engine to calculate the Bernoulli numbers, as presented in Note G of the Lovelace memoir. He tells us this himself in his autobiography[1]. I have been called a liar for stating this and also challenged to provide evidence by people too lazy to check for themselves, so here are his own words in black and white (16-bit grayscale actually):

Babbage 01

[1] Charles Babbage, Passages from the Life of a Philosopher, Longman, Green, Longman, Roberts, Green, London, 1864, p. 136

4 Comments

Filed under History of Computing, History of Logic

You shouldn’t believe everything you read

One of the things that I have been reading recently is a very interesting paper by John N. Crossley, the Anglo-Australian logician and historian of mathematics, about the reception and adoption of the Hindu-Arabic numbers in medieval Europe.[1] Here I came across this wonderful footnote:[2]

[…]

It is interesting to note that Richard Lemay in his entry “Arabic Numerals,” in Joseph Reese Strayer, ed., Dictionary of the Middle Ages (New York, 1982–89) 1:382–98, at 398 reports that in the University of Padua in the mid-fifteenth century, prices of books should be marked “non per cifras sed per literas claras.” He gives a reference to George Gibson Neill Wright, The Writing of Arabic Numerals (London, 1952), 126. Neill Wright in turn gives a reference to a footnote of Susan Cunnington, The Story of Arithmetic: A Short History of Its Origin and Development (London, 1904), 42, n. 2. She refers to Rouse Ball’s Short History of Mathematics; in fact this work is: Walter William Rouse Ball, A Short Account of the History of Mathematics, 3rd ed. (London, 1901), and there one finds on p. 192: “…in 1348 the authorities of the university of Padua directed that a list should be kept of books for sale with the prices marked ‘non per cifras sed per literas claras’ [not by cyphers but by clear letters].” I am yet to find an exact reference for this prohibition. (There is none in Rouse Ball.) Chrisomalis, Numerical Notations, p. 124, cites J. Lennart Berggren, “Medieval Arithmetic: Arabic Texts and European Motivations,” in Word, Image, Number: Communication in the Middle Ages, ed. John J. Contreni and Santa Casciani (Florence, 2002), 351–65, at 361, who does not give a reference.

Here we have Crossley the historian following a trail of quotes, references and footnotes; his hunt doesn’t so much terminate in a dead-end as fizzle out in the void, leaving the reader unsure whether the university of Padua really did insist on its book prices being written in Roman numerals rather than Hindu-Arabic ones or not. What we have here is a succession of authors writing up something from a secondary, tertiary or quaternary source without bothering to check if the claim it makes is actually true or correct by looking for and going back to the original source, which in this case would have been difficult, as the trail peters out at Rouse Ball, who doesn’t give a source at all.

This habit of writing up without checking original sources is unfortunately not confined to this wonderful example investigated by John Crossley but is seemingly a widespread bad habit among historians and others who write historical texts.

I have often commented that I served my apprenticeship as a historian of science in a DFG[3] financed research project on Case Studies into a Social History of Formal Logic under the direction of Professor Christian Thiel. Christian Thiel was inspired to launch this research project by a similar story to the one described by Crossley above.

Christian Thiel’s doctoral thesis was Sinn und Bedeutung in der Logik Gottlob Freges (Sense and Reference in Gottlob Frege’s Logic), a work that lifted him into the elite circle of Frege experts and led him to devote his academic life largely to the study of logic and its history. One of those who corresponded with Frege, and thus attracted Thiel’s interest, was the German meta-logician Leopold Löwenheim, known to students of logic and meta-logic through the Löwenheim-Skolem theorem or paradox. (Don’t ask!) Being a thorough German scholar, one might even say being pedantic, Thiel wished to know Löwenheim’s dates of birth and death. His date of birth was no problem, but his date of death turned out to be less simple. In an encyclopaedia article Thiel came across a reference to c.1940, the assumption being that Löwenheim, being a quarter Jewish and as a result having been dismissed from his position as a school teacher in 1933, had somehow perished during the holocaust. In another encyclopaedia article, obviously copied from the first, the ‘circa 1940’ had become a ‘died 1940’.

Thiel, being the man he is, was not satisfied with this uncertainty and invested a lot of effort in trying to get more precise details of the cause and date of Löwenheim’s death. The Red Cross information service set up after the Second World War in Germany to help trace people who had died or gone missing during the war proved to be a dead end with no information on Löwenheim. Thiel, however, kept on digging and was very surprised when he finally discovered that Löwenheim had not perished in the holocaust after all but had survived the war and had even gone back to teaching in Berlin in the 1950s, where he died on 5 May 1957, almost eighty years old. Thiel then did the same as Crossley, tracing back who had written up from whom, and was able to show that Löwenheim’s death during WWII was already being assumed in print while he was still alive and kicking in Berlin in the early 1950s!

This episode convinced Thiel to set up his research project Case Studies into a Social History of Formal Logic in order, in the first instance, to provide solid, verified biographical information on all of the logicians listed in Church’s bibliography of symbolic logic in the Journal of Symbolic Logic, which we then proceeded to do; a lot of very hard work in the pre-Internet age. Our project, however, was not confined to this biographical work; we also undertook other research into the history of formal logic.

As I said above, this habit of writing ‘facts’ up from non-primary sources is unfortunately very widespread in #histSTM, particularly in popular books, which of course sell much better and are much more widely read than academic volumes, although academics are themselves not immune to this bad habit. This is, of course, the primary reason for the continued propagation of the myths of science that notoriously bring out the HISTSCI_HULK in yours truly. For example, I’ve lost count of the number of times I’ve read that Galileo’s telescopic discoveries proved the truth of Copernicus’ heliocentric hypothesis. People are basically too lazy to do the legwork and check their claims and facts and are much too prepared to follow the maxim: if X said it and it’s in print, then it must be true!

[1] John N. Crossley, Old-fashioned versus newfangled: Reading and writing numbers, 1200–1500, Studies in Medieval and Renaissance History, Vol. 10, 2013, pp. 79–109

[2]Crossley p. 92 n. 42

[3]DFG = Deutsche Forschungsgemeinschaft = German Research Foundation

 

16 Comments

Filed under History of Logic, History of Mathematics, Myths of Science

Christmas Trilogy 2017 Part 2: Charles takes a trip to Turin

Charles Babbage wrote a sort of autobiography, Passages From The Life of a Philosopher.

One of its meandering chapters is devoted to his ideas about and work on his Analytical Engine. In one section he describes explaining to his friend the Irish physicist and mathematician James MacCullagh (1809–1847), who did important work in optics and was awarded the Royal Society’s Copley Medal in 1842,

James MacCullagh artist unknown
Source: Wikimedia Commons

how the Analytical Engine could be fed subroutines to evaluate trigonometrical or logarithmic functions, whilst working on algebraic operations. He goes on to explain that three or four days later Carl Gustav Jacob Jacobi (1804–1851) and Friedrich Wilhelm Bessel (1784–1846), two of Germany’s most important 19th century mathematicians, were visiting and discussing the Analytical Engine when MacCullagh returned and he completed his programming explanation. Which historian of 19th century mathematics wouldn’t give their eyeteeth to listen in on that conversation?

Having dealt with the problem of subroutines for the Analytical Engine Babbage moves on to another of his mathematical acquaintances, he tells us:

In 1840 I received from my friend M. Plana a letter pressing me strongly to visit Turin at the then approaching meeting of Italian Philosophers. In that letter M. Plana stated that he had inquired anxiously of many of my countrymen about the power and mechanism of the Analytical Engine.

Plana was Giovanni Antonio Amedeo Plana (1781–1864) mathematician and astronomer, a pupil of the great Joseph-Louis Lagrange (1736–1813), who was appointed to the chair of astronomy in Turin in 1811.

Giovanni Antonio Amedeo Plana
Source: Wikimedia Commons

Plana worked in many fields but was most famous for his work on the motions of the moon, for which he was awarded the Copley Medal in 1834 and the Gold Medal of the Royal Astronomical Society in 1840. The meeting to which he had invited Babbage took place in the Turin Accademia delle Scienze. This august society was founded in 1757 by Count Angelo Saluzzo di Monesiglio, the physician Gianfrancesco Cigna and Joseph-Louis Lagrange as a private society. In 1759 it founded its own journal, the Miscellanea philosophico mathematica Societatis privatae Taurinensis, still in print today as the Memorie della Accademia delle Scienze. In 1783, having acquired an excellent international reputation, it became the Reale Accademia delle Scienze, first as the Academy of Science of the Kingdom of Sardinia and later of the Kingdom of Italy. In 1874 it lost this status to the newly reconstituted Accademia dei Lincei in Rome. It still exists as a private academy today.

Rooms of the Turin Accademia delle Scienze

The meeting to which Babbage had been invited to explain his Analytical Engine was the second congress of Italian scientists. Babbage’s invitation in 1840 was thus recognition of his work at the highest international levels within the scientific community.

Babbage did not need to be asked twice; he packed up his plans, drawings and descriptions of the Analytical Engine and, accompanied by MacCullagh, set off for Turin.

This was not just your usual conference sixty-minute lecture with time for questions. Babbage spent several days ensconced in his apartments in Turin with the elite of the Turin scientific and engineering community. Babbage writes, “M. Plana had at first planned to make notes, in order to write an outline of the principles of the engine. But his own laborious pursuits induced him to give up this plan, and to transfer this task to a younger friend of his, M. Menabrea, who had already established his reputation as a profound analyst.”

Luigi Federico Menabrea (1809–1896) studied at the University of Turin and was an engineer and mathematician. A professional soldier, he was professor at both the military academy and the university in Turin. Later in life he entered politics, first as a diplomat and then as a politician, serving as a government minister. He served as prime minister of Italy from 1867 to 1869.

Luigi Federico Menabrea
Source: Wikimedia Commons

After another lengthy explanation of the programming of the Analytical Engine, Babbage writes:

It was during these meetings that my highly valued friend, M. Menabrea, [in reality Babbage had almost certainly never heard of Menabrea before he met him in Turin] collected the materials for that lucid and admirable description which he subsequently published in the Bibli. Uni. de Genève, t. xli. Oct. 1842.

 This is of course the famous document that Ada Lovelace would translate from the original French into English and annotate. Babbage writes of the two documents:

These two memoires taken together furnish, for those who are capable of understanding the reasoning, a complete demonstration—That the whole of the developments and operations of analysis are now capable of being executed by machinery. [emphasis in original]

That he was never able to realise his dreams of the Analytical Engine must have been very bitter for Babbage, and now that we can execute the whole of the developments and operations of analysis with machinery, in ways that even a Charles Babbage could not have envisaged in the 19th century, we should take a moment to consider just how extraordinary his vision of an Analytical Engine was.

4 Comments

Filed under History of Computing, History of Logic, History of Technology

Juggling information

One of the parlour games played by intellectuals and academics, as well as those who like to think of themselves as such, is which famous historical figures you would invite to a cocktail or dinner party and why. One premise of the game being: which historical figure or figures would you most like to meet and converse with? As a historian of mostly Early Modern science I am a bit wary of this question, as many of the people I study or have studied in depth have very unpleasant sides to their characters, as I have commented in the past in more than one blog post. However, in my other guise, as a historian of formal or mathematical logic and the history of the computer, there is actually one figure whom I would have been more than pleased to have met, and that is the mathematician and engineer Claude Shannon.

A young Claude Shannon
Source: Wikimedia Commons

For those who might not know who Claude Shannon was, he was a man who made two very major contributions to the development of the computers on which I am typing this post and on which you are reading it. The first came when, at the age of twenty-one, in what has been described as the most important master’s thesis written in the twentieth century, he combined Boolean algebra with electric circuit design, thus rationalising the whole process and simplifying the design of complex circuitry beyond measure. The second came sixteen years later when, in his A Mathematical Theory of Communication, building, it should be added, on the work of others, he basically laid the foundations of our so-called information age. His work laid out how to transmit digital signals through circuitry without loss of information. He is regarded as the über-guru of information theory, to quote Wikipedia:

 Information theory studies the quantification, storage, and communication of information. It was originally proposed by Claude E. Shannon in 1948 to find fundamental limits on signal processing and communication operations such as data compression, in a landmark paper entitled “A Mathematical Theory of Communication”.

Given that the period we live in is called both the computer age and the information age, it is somewhat surprising that the first full-length biography of Shannon, A Mind at Play,[1] only appeared this year. Having somewhat foolishly said that I would give a public lecture in November on Vannevar Bush, who was Shannon’s master’s thesis supervisor, and on Shannon, I have been reading Soni and Goodman’s Shannon biography, which I have to say I enjoyed immensely.

 

This is a full-length, full-width biography that covers both the life of the human being and the intellectual achievements of the engineer-mathematician. Shannon couldn’t decide which to study as an undergraduate, so he did a double BSc in both engineering and mathematics. This dual course of studies is what led to that extraordinary master’s thesis. Having studied Boolean algebra in his maths courses, Shannon realised that he could apply it to rationalise and simplify electrical switching when working, as a postgrad, on the switching circuits for Bush’s analogue computer, the differential analyser. It’s one of those things that seems obvious with hindsight but required the right ‘prepared mind’, Shannon’s, to realise it in the first place. It is a mark of his character that he shrugged off any genius on his part in conceiving the idea, claiming that he had just been lucky.
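The core of that insight can be shown in a few lines: a series connection of switches behaves like Boolean AND and a parallel connection like OR, so simplifying a circuit becomes an exercise in algebra. The toy example below is my own sketch, not Shannon’s notation; it checks one such simplification, via the distributive law, by brute force over every switch setting:

```python
from itertools import product

# Series connection of switches = AND, parallel connection = OR.
def series(a, b):   return a and b
def parallel(a, b): return a or b

# The distributive law x(y + z) = xy + xz, in circuit terms:
# a four-switch circuit...
def circuit_four_switches(x, y, z):
    return parallel(series(x, y), series(x, z))

# ...can be replaced by an equivalent three-switch circuit.
def circuit_three_switches(x, y, z):
    return series(x, parallel(y, z))

# Verify equivalence for all eight switch settings.
assert all(
    circuit_four_switches(x, y, z) == circuit_three_switches(x, y, z)
    for x, y, z in product([False, True], repeat=3)
)
print("equivalent on all 8 inputs")
```

One switch saved here; across the relay networks of a 1930s telephone exchange, the savings were anything but a toy.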

Shannon’s other great contribution, his treatise on communication and information transmission, came out of his work at Bell Labs as a cryptanalyst during World War II. The analysis of language that he developed in order to break down codes led him to a more general consideration of the transmission of information with languages out of which he then laid down the foundations of his theories on communication and information.

Soni and Goodman’s volume deals well with the algebraic calculus for circuit design, and I came away with a much clearer picture of a subject about which I already knew quite a lot. However, I found myself working really hard on their explanation of Shannon’s information theory, but this is largely because it is not the easiest subject in the world to understand.
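The central quantity of that theory is at least easy to state: Shannon’s entropy, the average information content of a source measured in bits per symbol. A small orientation sketch of my own:

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

# A fair coin carries one full bit of information per toss...
print(entropy([0.5, 0.5]))   # 1.0
# ...a heavily biased coin, whose outcome is largely predictable,
# carries much less.
print(entropy([0.9, 0.1]))   # roughly 0.47
```

It is from this measure that Shannon derived his limits on how much information a noisy channel can carry, the part of the book I had to work hardest at.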

The rest of the book contains much of interest about the man and his work and I came away with the impression of a fascinating, very deep thinking, modest man who also possessed a, for me, very personable sense of humour. One aspect that appealed to me was that Shannon was a unicyclist and a juggler, who also loved toys, hence the title of my review. As I said at the beginning Claude Shannon is a man I would have liked to have met for a long chat over a cup of tea.

An elder Claude Shannon
Source: Wikimedia Commons

On the whole I found the biography well written and light to read, except for the technical details of Shannon’s information theory, and it contains a fairly large collection of black and white photos detailing all of Shannon’s life. As far as the notes are concerned, we have the worst of all possible solutions, hanging endnotes; that is, endnotes, with page numbers, to which there is no link or reference in the text. There is an extensive and comprehensive bibliography as well as a good index. This is a biography that I would whole-heartedly recommend to anybody who might be interested in the man or his area of work or both.

 

 

[1] Jimmy Soni & Rob Goodman, A Mind at Play: How Claude Shannon Invented the Information Age, Simon & Schuster, New York etc., 2017

2 Comments

Filed under Book Reviews, History of Computing, History of Logic, History of Technology

Men of Mathematics

This is something that I wrote this morning as a response on the History of Astronomy mailing list; having written it I have decided to cross post it here.

John Briggs is the second person in two days, who has recommended Eric Temple Bell’s “Men of Mathematics”. I can’t remember who the first one was, as I only registered it in passing, and it might not even have been on this particular mailing list. Immediately after John Briggs recommended it Rudi Lindner endorsed that recommendation. This series of recommendations has led me to say something about the role that book played in my own life and my view of it now.

“Men of Mathematics” was the first book on the history of science and/or mathematics that I ever read. I was a deeply passionate fan of maths at school and my father gave me Bell’s book to read when I was sixteen years old. My other great passion was history and I had been reading history books since I taught myself to read at the age of three. Here was a book that magically combined my two great passions. I devoured it. Bell has a fluid narrative style and the book is easy to read and very stimulating.

Bell showed me that the calculus, which I had recently fallen in love with, had been invented/discovered (choose the verb that best fits your philosophy of maths), something I had never even considered before. Not only that, but it was done independently by two of the greatest names in the history of science, Newton and Leibniz, and this led to one of the most embittered priority and plagiarism disputes in intellectual history. He introduced me to George Boole, whom I had never heard of before and whose work and its reception in the 19th century I would seriously study many years later in a ten-year research project into the history of formal or mathematical logic, my apprenticeship as a historian of science.

Bell’s tome ignited a burning passion for the history of mathematics in my soul, which rapidly developed into a passion for the whole of the history of science; a passion that is still burning brightly fifty years later. So would I join the chorus of those warmly recommending “Men of Mathematics”? No, actually I wouldn’t.

Why, if as I say Bell’s book played such a decisive role in my own development as a historian of mathematics/science, do I reject it now? Bell’s florid narrative style is very seductive, but it is unfortunately also very misleading. Bell is always more than prepared to sacrifice truth and historical accuracy for a good story. The result is that his potted biographies are hagiographic, mythologizing and historically inaccurate, often to a painful degree. I spent a lot of time and effort unlearning much of what I had learnt from Bell. His is exactly the type of sloppy historiography against which I have taken up my crusade, on my blog and in my public lectures, in my later life. Sorry, but although it inspired me in my youth, I think Bell’s book should be laid to rest and not recommended to new generations.

 


Filed under Book Reviews, History of Logic, History of Mathematics, History of science, Myths of Science

A Lady Logician

Today George Boole is regarded as one of the founders of the computer age that now dominates our culture.

George Boole
Source: Wikimedia Commons

His algebra lies at the base of computer circuit design and of most computer programming languages, and Booleans power the algorithms of the ubiquitous search engines. As a result, two years ago the bicentenary of his birth was celebrated extensively and very publicly. All of this would have been very hard to predict when his work on the algebra of logic first saw the light of day in the nineteenth century. His first publication, Mathematical Analysis of Logic (1847), was largely ignored by the wider world of mathematics, and his definitive presentation of his logic, An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities, fared little better, initially attracting very little attention. It was only some time after his death that Boole’s logical works began to attract deeper interest, most notably in Germany by Ernst Schröder and in America by Charles Sanders Peirce.

Charles Sanders Peirce
Source: Wikimedia Commons

In 1883 Peirce published Studies in Logic: by Members of the Johns Hopkins University; edited by Peirce himself, it contained seven papers written largely by his students. Of central interest is the fact that it contains a doctoral thesis, On the Algebra of Logic, written by a woman, Christine Ladd.

Christine Ladd’s life story is a casebook study of the prejudices suffered by women who wished to enter academia in the nineteenth and early twentieth centuries. Born 1 December 1847 (the year Boole published his first logic book) in Windsor, Connecticut, the daughter of Eliphalet and Augusta Ladd, she grew up in New York and Windsor. Her mother and her aunt Julie Niles brought her up to believe in education for women and women’s rights. Her mother died in 1860, but her father initially supported her wish for advanced education and enrolled her at Welshing academy in a two-year course preparing students for college; she graduated as valedictorian in 1865, but now her father opposed her wish to go on to college. Only by arguing that she was too ugly to get a husband was she able to persuade her father and grandmother to allow her to study at the women’s college Vassar. She entered Vassar in 1866 but was forced by financial difficulties to leave before completing her first year. She then became a schoolteacher until her aunt helped her to finance her studies and she returned to Vassar.

At Vassar the pioneering female astronomer Maria Mitchell took her under her wing and fostered her developing interest in physics and mathematics.

Because women could not do experimental work in laboratories, she was forced to choose mathematics[1] over physics, a decision that she regretted all of her life. She graduated from Vassar in 1869 and became a secondary school teacher of mathematics and science in Washington, Pennsylvania. Over the next nine years she published six items in The Analyst: A Journal of Pure and Applied Mathematics and three in the American Journal of Mathematics. More importantly, she took a very active part in the mathematical questions column of the Educational Times, the journal of the College of Preceptors in London, a professional body for schoolteachers. This mathematical questions column was a very popular forum for nineteenth-century mathematicians and logicians, with many leading practitioners contributing both questions and solutions. For example, the nineteenth-century Scottish logician Hugh McColl published his first logical essays here, and Bertrand Russell’s first mathematical publication can also be found here[2]. Ladd contributed a total of seventy-seven problems and solutions to the Educational Times, which would prove highly significant for her future career.

In 1878 she applied for and won a fellowship to study mathematics at the Johns Hopkins University. Her fellowship application was simply signed C. Ladd, and the university had assumed that she was male. When they realised that she was in fact a woman, they withdrew their offer of a fellowship. However, the English professor of mathematics at Johns Hopkins, James J. Sylvester, who knew of Ladd’s abilities from those Educational Times contributions, insisted on the university honouring the fellowship offer.

James Joseph Sylvester
Source: Wikimedia Commons

At the time Johns Hopkins did not have a very good reputation, but Sylvester did; in fact he was a mathematical star and, not wishing to lose him, the university conceded and allowed Ladd to take up her three-year scholarship. However, her name was not allowed to be printed in circulars; the university basically denied her existence. At the beginning she was only allowed to attend Sylvester’s classes, but as it became clear that she was an exceptional student she was allowed to attend classes by other professors.

In the year 1879 to 1880 she studied mathematics, logic and psychology under Charles Sanders Peirce, becoming the first American woman to be involved in psychology. Under Peirce’s supervision she wrote her doctoral thesis, On the Algebra of Logic, which was then, as mentioned above, published in 1883. Although she had completed all the requirements of a doctoral degree, Johns Hopkins University refused to award her a doctorate because she was a woman. They only finally did so forty-four years later, in 1927, when she was already in her late seventies.

In 1882 she married fellow Johns Hopkins mathematician Fabian Franklin and became Christine Ladd-Franklin, the name by which she is universally known today. As a married woman she was barred from holding a paid position at an American university, but she lectured unpaid on logic and psychology at Johns Hopkins for five years and later at Columbia University for thirty years.

In the 1880s she developed an interest in vision and theories of colour perception, publishing her first paper on the subject in 1887. She accompanied her husband on a research trip to Germany in 1891–92 and used the opportunity to study with the psychologist Georg Elias Müller (1850–1934) in Göttingen

Georg Elias Müller
Source: Wikimedia Commons

and with the physiologist and physicist Hermann von Helmholtz (1821-1894) in Berlin.

Hermann von Helmholtz in 1848
Source: Wikimedia Commons

In 1894 she returned alone to Germany to work with physicist Arthur König (1856–1901), with whom she did not get on and whom she accused of having stolen her ideas, and again in 1901 to work with Müller.

Portrait of Arthur König from Pokorny, J.
Source: Wikimedia Commons

As a result of her researches she developed and published her own theories of colour vision and the causes of colour blindness that were highly influential.

Ladd-Franklin was a tireless campaigner for women’s rights and even persuaded the inventor of the record player, Emile Berliner, to establish a fellowship for female professors, the Sarah Berliner postdoctoral endowment, in 1909, which she administered for the first ten years and which is still awarded annually.

Emile Berliner
Source: Wikimedia Commons

She herself continued to suffer rejection and humiliation as a female academic. In 1904 the British psychologist Edward Titchener (1867–1927) founded a society for experimental psychologists, “The Experimentalists”, and although he knew Ladd-Franklin well he barred her, as a woman, from membership, a decision she fought against in vain for many years. Women were only permitted to attend following Titchener’s death.

Edward Bradford Titchener
Source: Wikimedia Commons

Despite the discrimination that she suffered, Christine Ladd-Franklin published many papers in the leading journals and her work was held in high regard. She died of pneumonia, aged 82, in 1930. Today the American Association for Women in Psychology has an annual Christine Ladd-Franklin Award, awarded for significant and substantial contributions to the Association.

Christine Ladd-Franklin
(1847–1930)
Source: Wikimedia Commons

Although she struggled against prejudice and discrimination all of her life and never received the formal recognition that should have been her due, Christine Ladd-Franklin made significant contributions to the fields of Boolean algebra and colour vision for which she is highly regarded today. Through her fighting spirit and unbending will she helped open the doors of scientific research and academia for later generations of women.

 

 

[1] It is interesting to note that, barred from access to academia and its institutions, a small but significant number of women managed to some extent to break through the glass ceiling in logic and mathematics in the nineteenth century, because these are subjects in which one can make an impression with nothing more than a pencil and a piece of paper.

[2] In my days as a logic historian I spent a not very pleasant two weeks in the British Newspaper Library in Colindale (the tenth circle of hell), amongst other things going through the Educational Times looking for contributions on the algebra of logic. During this search I came across the Bertrand Russell contribution, which I showed, some time later, to a leading Russell scholar of my acquaintance, who shall remain nameless here. Imagine my surprise when, shortly afterwards, an article was published by said Russell expert explaining how he had discovered Russell’s first ever mathematical publication in the Mathematical Questions column of The Educational Times. He made no mention of the fact that it was actually I who had made the discovery.


Filed under History of Logic, History of Mathematics, History of science, Ladies of Science, Uncategorized

Bertrand Russell did not write Principia Mathematica

Yesterday would have been Bertrand Russell’s 144th birthday, and numerous people on the Internet took notice of the occasion. Unfortunately several of them, including some who should know better, included in their brief descriptions of his life and work the claim that he was the author of Principia Mathematica. He wasn’t. At this point some readers will probably be thinking that I have gone mad. Anybody who has an interest in the history of modern mathematics and logic knows that Bertrand Russell wrote Principia Mathematica. Sorry, he didn’t! The three volumes of Principia Mathematica were co-authored by Alfred North Whitehead and Bertrand Russell.


Now you might think that I’m just splitting hairs, but I’m not. If you note the order in which the authors are named, you will observe that they are not listed alphabetically but that Whitehead is listed first, ahead of Russell. This is because Whitehead, being senior to Russell in both years and status within the Cambridge academic hierarchy, was considered to be the lead author. In fact Whitehead had been both Russell’s teacher, when Russell was an undergraduate, and the examiner at his viva voce, where, by his own account, he gave Russell a hard time because he knew that it was the last time that he would be Russell’s mathematical superior.

Alfred North Whitehead

Both of them were interested in metamathematics and had published books on the subject: Whitehead’s A Treatise on Universal Algebra (1898) and Russell’s The Principles of Mathematics (1903). Both were working on second volumes of their respective works when they decided to combine forces on a joint work, the result being the monumental three volumes of Principia Mathematica (Vol. I, 1910, Vol. II, 1912, Vol. III, 1913). According to Russell’s own account, the first two volumes were a true collaborative effort, whilst volume three was almost entirely written by Whitehead.

Bertrand Russell 1907
Source: Wikimedia Commons

People referring to Russell’s Principia Mathematica instead of Whitehead’s and Russell’s Principia Mathematica is not new, but I have the feeling that it is becoming more common as the years progress. This is not a good thing, because it amounts to a gradual fading out, at least on a semi-popular level, of Alfred Whitehead’s important contributions to the history of logic and metamathematics. I think this is partially due to the paths that their lives took after the publication of Principia Mathematica.

The title page of the shortened version of the Principia Mathematica to *56
Source: Wikimedia Commons

Whilst Russell, amongst his many other activities, remained very active at the centre of the European logic and metamathematics community, Whitehead turned, after the First World War and comparatively late in life, to philosophy, and in particular metaphysics, going on to found what has become known as process philosophy, which became particularly influential in the USA.

In history, as in academia in general, getting your facts right is one of the basics. So if you have occasion to refer to Principia Mathematica, then please remember that it was written by Whitehead and Russell, not just by Russell; and if you are talking about Bertrand Russell, then he was co-author of Principia Mathematica, not its author.


Filed under History of Logic, History of Mathematics

Boole, Shannon and the Electronic Computer

Photo of George Boole by Samuel Prout Newcombe
Source: Wikimedia Commons

In 1847, the self-taught English mathematician George Boole (1815–1864), whose two hundredth birthday we celebrated last year, published a very small book, little more than a pamphlet, entitled Mathematical Analysis of Logic. This was the first modern book on symbolic or mathematical logic and contained Boole’s first efforts towards an algebraic logic of classes.


Although very ingenious, and only the second published non-standard algebra (Hamilton’s quaternions were the first), Boole’s work attracted very little attention outside of his close circle of friends. His friend Augustus De Morgan would falsely claim that his own Formal Logic and Boole’s work were published on the same day; they were actually published several days apart, but their almost simultaneous appearance does signal a growing interest in formal logic in the mid-nineteenth century. Boole went on to publish a much improved and expanded version of his algebraic logic in his An Investigation of the Laws of Thought on Which are Founded the Mathematical Theories of Logic and Probabilities in 1854.


The title points to an interesting aspect of Boole’s work, in that it is an early example of structural mathematics. In structural mathematics, mathematicians set up formal axiomatic systems that are capable of various interpretations and investigate the properties of the structure rather than any one specific interpretation, anything proved of the structure being valid for all interpretations. Structural mathematics lies at the heart of modern mathematics, and its introduction is usually attributed to David Hilbert, but in his Laws of Thought Boole anticipated Hilbert by half a century. The title of the book already mentions two interpretations of the axiomatic system contained within, logic and probability, and the book actually contains more; in the first instance Boole’s system is a two-valued logic of classes or, as we would probably now call it, a naïve set theory. Again, despite its ingenuity, the work was initially largely ignored until after Boole’s death ten years later.
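To make the structural point concrete in modern terms (the code and all names in it are my own illustration, not anything from Boole), here is a sketch showing one algebraic law checked under two different interpretations: truth values and classes.

```python
# A minimal sketch of the structural idea: one set of algebraic laws,
# many interpretations. Here we check De Morgan's law under two of them.

def de_morgan_holds(meet, join, comp, elements):
    """Check comp(join(a, b)) == meet(comp(a), comp(b)) for all pairs."""
    return all(comp(join(a, b)) == meet(comp(a), comp(b))
               for a in elements for b in elements)

# Interpretation 1: two-valued logic (truth values).
logic = de_morgan_holds(
    meet=lambda a, b: a and b,
    join=lambda a, b: a or b,
    comp=lambda a: not a,
    elements=[True, False],
)

# Interpretation 2: classes (subsets of a fixed universe), with the
# complement taken relative to the universe.
universe = frozenset({1, 2, 3})
subsets = [frozenset(s) for s in ([], [1], [2, 3], [1, 2, 3])]
classes = de_morgan_holds(
    meet=lambda a, b: a & b,
    join=lambda a, b: a | b,
    comp=lambda a: universe - a,
    elements=subsets,
)

print(logic, classes)  # the same law holds under both interpretations
```

Anything proved from the axioms alone, as in `de_morgan_holds`, is valid for every such interpretation at once, which is exactly what makes the structural approach so powerful.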

As the nineteenth century progressed, the interest in Boole’s algebraic logic grew and his system was modified and improved. Most importantly, Boole’s original logic contained no method of quantification, i.e. there was no simple way of expressing in symbols the statements “there exists an X” or “for all X”, fundamental statements necessary for mathematical proofs. The first symbolic logic with quantification was Gottlob Frege’s, which appeared in 1879. In the following years both Charles Sanders Peirce in America and Ernst Schröder in Germany introduced quantification into Boole’s algebraic logic. Both Peirce’s group at Johns Hopkins, which included Christine Ladd-Franklin, or rather simply Christine Ladd as she was then, and Schröder produced substantial works of formal logic using Boole’s system. There is a popular misconception that Boole’s logic disappeared without major impact, to be replaced by the supposedly superior mathematical logic of Whitehead and Russell’s Principia Mathematica. This is not true. In fact Whitehead’s earlier, pre-Principia work was carried out in Boolean algebra, as were the very important meta-logical works of both Löwenheim and Skolem. Alfred Tarski’s early work was also done in Boole’s algebra and not in the logic of PM. PM only supplanted Boole with the publication of Hilbert’s and Ackermann’s Grundzüge der theoretischen Logik in 1928.

It now seemed that Boole’s logic was destined for the rubbish bin of history, a short-lived curiosity that was no longer relevant, but that was to change radically in the next decade in the hands of an American mathematical prodigy, Claude Shannon, who was born on 30 April 1916.

Claude Shannon
Photo by Konrad Jacobs
Source: Wikimedia Commons
(Konrad Jacobs was one of my maths teachers and a personal friend)

Shannon entered the University of Michigan in 1932 and graduated with a double bachelor’s degree in engineering and mathematics in 1936. Whilst at Michigan he took a course in Boolean logic. He went on to MIT, where, under the supervision of Vannevar Bush, he worked on Bush’s differential analyser, a mechanical analogue computer designed to solve differential equations. It was whilst he was working on the electrical circuitry for the differential analyser that Shannon realised that he could apply Boole’s algebraic logic to electrical circuit design, using the simple two-valued logical functions as switching gates in the circuitry. This simple but brilliant insight became Shannon’s master’s thesis in 1937, when Shannon was just twenty-one years old. It was published as a paper, A Symbolic Analysis of Relay and Switching Circuits, in the Transactions of the American Institute of Electrical Engineers in 1938. Described by psychologist Howard Gardner as “possibly the most important, and also most famous, master’s thesis of the century”, this paper formed the basis of all future computer hardware design. Shannon had delivered the blueprint for what are now known as logic circuits and provided a new lease of life for Boole’s logical algebra.
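To give a flavour of the insight in modern terms (a toy sketch of my own, not Shannon’s relay-circuit notation), Boolean operations can be treated as gates and wired together into a circuit, here a half adder that adds two one-bit numbers.

```python
# Boolean functions as logic gates, wired into a simple circuit.
# Bits are represented as the integers 0 and 1.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # Built from the primitive gates, as it would be in real circuitry.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    """Add two one-bit numbers; returns (sum_bit, carry_bit)."""
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Chaining such gates together gives full adders, and from those the arithmetic units of every digital computer, which is precisely why Shannon’s identification of Boolean algebra with switching circuits proved so consequential.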


Later Shannon would go on to become one of the founders of information theory, which lies at the heart of the computer age and the Internet, but it was that first insight, combining Boolean logic with electrical circuit design, that first made the computer age a viable prospect. Shannon would later play down the brilliance of his insight, claiming that it was merely the product of his having access to both areas of knowledge, Boolean algebra and electrical engineering, and thus nothing special. But it was seeing that the one could be interpreted as the other, anything but an obvious step, that makes the young Shannon’s insight one of the greatest intellectual breakthroughs of the twentieth century.


Filed under History of Computing, History of Logic

Mega inanity

Since the lead-up to the Turing centenary in 2012, celebrating the birth of one of the great meta-mathematicians of the twentieth century, Alan Mathison Turing, I have observed with increasing horror the escalating hagiographic accounts of Turing’s undoubted historical achievements and the resulting perversion of the histories of twentieth-century science, mathematics and technology, and in particular the history of computing.

This abhorrence on my part is not based on a mere nodding acquaintance with Turing’s name but on a deep and long-time engagement with the man and his work. I served my apprenticeship as a historian of science over many years in a research project on the history of formal or mathematical logic. Formal logic is one of the so-called formal sciences, the others being mathematics and informatics (or computer science). I have spent my whole life studying the history of mathematics, with a special interest in the history of computing, both in its abstract form and in its technological realisation in all sorts of calculating aids and machines. I also devoted a substantial part of my formal study of philosophy to the philosophy of mathematics and the logical, meta-logical and meta-mathematical problems that this discipline, some would say unfortunately, generates. The histories of all of these intellectual streams flow together in the first half of the twentieth century in the work of such people as Leopold Löwenheim, Thoralf Skolem, Emil Post, Alfred Tarski, Kurt Gödel, Alonzo Church and Alan Turing, amongst others. These people created a new discipline known as meta-mathematics whilst carrying out a programme delineated by David Hilbert.

Attempts to provide a solid foundation for mathematics using set theory and logic had run into serious problems with paradoxes. Hilbert thought the solution lay in developing each mathematical discipline as a strict axiomatic system and then proving that each axiomatic system possessed a set of required characteristics, thus ensuring the solidity and reliability of a given system. This concept of proving theorems about whole axiomatic systems is the meta- of meta-mathematics. The properties that Hilbert required for his axiomatic systems were consistency, meaning the system should be shown to be free of contradictions; completeness, meaning that all of the theorems that belong to a particular discipline are deducible from its axiom system; and finally decidability, meaning that for any well-formed statement within the system it should be possible to produce an algorithmic process to decide if the statement is true within the axiomatic system or not. An algorithm is like a cookery recipe: if you follow the steps correctly, you will produce the right result.

The meta-mathematicians listed above showed by very ingenious methods that none of Hilbert’s aims could be fulfilled, bringing the dream of a secure foundation for mathematics crashing to the ground. Turing’s solution to the problem of decidability is an ingenious thought experiment, for which he is justifiably regarded as one of the meta-mathematical gods of the twentieth century. It was this work that led to him being employed as a code breaker at Bletchley Park during WWII and eventually to the fame and disaster of the rest of his too-short life.

Unfortunately the attempts to restore Turing’s reputation since the centenary of his birth in 2012 have led to some terrible misrepresentations of his work and its consequences. I thought we had reached a low point in the ebb and flow of the centenary celebrations, but the release of “The Imitation Game”, the Alan Turing biopic, has produced a new series of false and inaccurate statements in the reviews. I was pleasantly surprised to see several reviews that attempt to correct some of the worst historical errors in the film. You can read a collection of reviews of the film in the most recent edition of the weekly histories of science, technology and medicine links list Whewell’s Gazette. Not having seen the film yet, I can’t comment on it, but I was stunned when I read the following paragraph from the abc NEWS review of the film written by Alyssa Newcomb. It’s so bad you can only file it under: you can’t make this shit up.

The “Turing Machine” was the first modern computer to logically process information, running on interchangeable software and essentially laying the groundwork for every computing device we have today — from laptops to smartphones.

Before I analyse this train wreck of a historical statement, I would just like to emphasise that this is not the Little Piddlington School Gazette, whose enthusiastic but slightly slapdash twelve-year-old film critic got his facts a little mixed up, but a review that appeared on the website of a major American media company, and as such it is totally unacceptable however you view it.

The first compound statement contains a double whammy of mega-inane falsehood, and I had real problems deciding where to begin, finally plumping for the “first modern computer to logically process information, running on interchangeable software”. Alan Turing had nothing to do with the first such machine, the honour going to Konrad Zuse’s Z3, which Zuse completed in 1941. The first such machine in whose design and construction Alan Turing was involved was the ACE, produced at the National Physical Laboratory in London in 1949. In the intervening years Atanasoff and Berry, Tommy Flowers, Howard Aiken, as well as Eckert and Mauchly, had all designed and constructed computers of various types and abilities. To credit Turing with the sole responsibility for our digital computer age is not only historically inaccurate but also highly insulting to all the others who made substantial and important contributions to the evolution of the computer, many, many more than I’ve named here.

We now turn to the second error contained in this wonderfully inane opening statement and return to the subject of meta-mathematics. The “Turing Machine” is not a computer at all; it is Alan Turing’s brilliant thought-experiment solution to Hilbert’s decidability problem. Turing imagined a very simple machine that consists of a scanning-reading head and an infinite tape that runs under the scanning head. The head can read instructions on the tape and execute them, moving the tape right or left or doing nothing. The question then reduces to which sets of instructions on the tape eventually come to a stop (decidable) and which lead to an infinite loop (undecidable). Turing developed this idea into a machine capable of computing any computable function (a universal Turing Machine) and thus created a theoretical model for all computers. This is of course a long way from a practical, real mechanical realisation, i.e. a computer, but it does provide a theoretical measure with which to describe the capabilities of a mechanical computing device. A computer that is the equivalent of a Universal Turing Machine is called Turing complete. For example, Zuse’s Z3 was Turing complete, whereas Colossus, the computer designed and constructed by Tommy Flowers for decoding work at Bletchley Park, was not.
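For the technically curious, the machine Turing imagined is simple enough to sketch in a few lines of code. The encoding below is my own illustration, not Turing’s original formulation, but it captures the essentials: a tape, a head, a state, and a transition table.

```python
# A minimal Turing machine simulator. The tape is a dict from position
# to symbol (default symbol 0), so it is unbounded in both directions.

def run(transitions, state='start', steps=1000):
    """transitions maps (state, symbol) -> (write, move, next_state),
    where move is -1 (left), +1 (right) or 0 (stay). Returns the tape
    if the machine halts within `steps` steps, else None -- it may loop
    forever, and deciding which in general is exactly the undecidable
    halting problem."""
    tape, head = {}, 0
    for _ in range(steps):
        if state == 'halt':
            return tape
        write, move, state = transitions[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
    return None

# Example machine: write three 1s on a blank tape and halt.
machine = {
    ('start', 0): (1, 1, 'one'),
    ('one',   0): (1, 1, 'two'),
    ('two',   0): (1, 0, 'halt'),
}
tape = run(machine)
print(sorted(tape.items()))  # [(0, 1), (1, 1), (2, 1)]
```

A universal Turing machine is then just such a simulator whose transition table interprets a description of another machine written on its own tape, which is the theoretical ancestor of the stored-program computer.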

Turing’s work played and continues to play an important role in the theory of computation, but historically it had very little effect on the development of real computers. Attributing the digital computer age to Turing and his work is not just historically wrong but, as I already stated above, highly insulting to all of those who really did bring about that age. Turing is a fascinating, brilliant and, because of the persecution he suffered as a homosexual, tragic figure in the histories of mathematics, logic and computing in the twentieth century, but attributing achievements to him that he didn’t make does not honour his memory, which certainly should be honoured, but ridicules it.

I should, in fairness to the author of the film review that provided the motivation for this post, say that she seems to be channelling misinformation from the film distributors, as I’ve read very similar stupid claims in other previews and reviews of the film.


Filed under History of Computing, History of Logic, History of Mathematics, Myths of Science