Category Archives: Myths of Science

Someone is Wrong on the Internet.

Many of the readers of this blog will probably recognise the title of this post, as the punch line to one of the best ever xkcd cartoons. Regular readers will also know that the Renaissance Mathematicus cannot resist stamping on people who post inanely inaccurate or downright wrong history of science claims, comments etc. on the Internet. This last pre-Christmas post brings two examples of such foolishness that crossed our path in recent times.

The first concerns a problem that turns up time and again, not only on the Internet but also in many books. It is the inability of lots of people to comprehend that there cannot be a year nil, year zero or whatever they choose to call it. (Have patience, dear reader, the reason will be explained soon). Even worse are the reasons that such people, in their ignorance, dream up to explain the absence of the, in their opinion, missing numberless year. I stumbled across a particularly juicy example on the BBC’s History Extra website last Thursday, in a post entitled, 10 of the most surprising numbers in history. Actually the whole post really deserves a good kicking but for now I will content myself with the author’s surprising number, AD 0…  the date that never was. The entry is very short so I’ve included the whole of it below:

The AD years of the Christian calendar are counted from the year of Jesus Christ’s birth, and, as the number zero was then unknown to the west, Dionysius began his new Christian era as AD 1, not AD 0. [my emphasis]

While it is now the consensus that Jesus was probably born between 7 and 3 BC, Dionysius’s new calendar is now the most widely used in the world, while AD 0 is one of the most interesting numbers never to have seen the light of day.

The first time I read this sparkling pearl of historical wisdom I experienced one of those extremely painful ‘head-desk’ moments; recovering from my shock and managing at least a semblance of a laugh at this stunning piece of inanity I decided to give it the Histsci Hulk treatment.

Before I explain why there cannot be a year zero, let us look briefly at why Dionysius Exiguus, or Dennis the Short, started his count of the years with AD 1. Dennis, he of little stature, was not trying to create the calendar we use today in our everyday lives but was making his contribution to the history of computus, the art of calculating the date of Easter. Due to the fact that the date of Easter is based on the Jewish Pesach (that’s Passover) feast, which in turn is based on a lunar calendar, and also the fact that the lunar month and the solar year are incommensurable (you cannot measure the one with the other), these calculations are anything but easy. In fact they caused the Catholic Church much heartbreak and despair over the centuries from its beginnings right down to the Gregorian calendar reform in 1582. In the early centuries of Christianity the various solutions usually involved producing a table of the dates of the occurrence of Easter over a predetermined cycle of years that then theoretically repeats from the beginning without too much inaccuracy. Dennis the vertically challenged produced just such a table.
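To give a feeling for just how fiddly the computus is, here, purely as an illustration, is the modern so-called Anonymous Gregorian algorithm (often attributed to Meeus/Jones/Butcher) sketched in Python. This is emphatically not Dennis’ method, which predates the Gregorian reform by a millennium, but it shows why the Church needed specialists for the job:

```python
def easter_date(year):
    """Gregorian Easter via the Anonymous Gregorian algorithm
    (Meeus/Jones/Butcher).  Returns (month, day)."""
    a = year % 19                          # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)               # century and year-of-century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30     # epact-like quantity (age of the moon)
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7   # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451       # correction for the lunar calendar
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter_date(2024))  # (3, 31), i.e. 31 March 2024
```

Note that nothing in the calculation is transparent; even with modern notation it remains a tangle of lunar and solar corrections, which is roughly the position Dennis and his successors were in, armed only with tables.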

In the time of our little Dennis there wasn’t a calendar with a continuous count of years. It was common practice to number the years according to the reign of the current monarch, emperor, despot or whatever. So for example the year that we know as 47 BCE would have been the third year of the reign of Gaius Julius Caesar. For formal purposes this dating system actually survived for a very long time. I recently came across a reference to a court case at the English Kings Bench Court in the eighteenth century as taking place on 12 July ‘4Geo.III’, that is the fourth year of the reign of George III. In Dennis the Small’s time the old Easter table, which he hoped to replace, was dated according to the years of the reign of the Emperor Diocletian (245–311, reigned 284–305). Diocletian had distinguished himself by being particularly nasty to the Christians so our dwarf-like hero decided to base his cycle on the 525 years “since the incarnation of our Lord Jesus Christ”; quite how he arrived at 525 years is not really known. AD is short (being short, Dennis liked short things) for Anno Domini Nostri Iesu Christi (“In the Year of Our Lord Jesus Christ”). It was only later, starting with the Venerable Bede’s History of the Church (Historia Ecclesiastica), that Dennis’ innovation began to be used for general dating or calendrical purposes. The idea of BC years or dates only came into use in the Early Modern period.

We now turn to the apparently thorny problem of why there cannot be a year zero in a calendrical dating system. People’s wish or desire to find the missing year zero is based on a confusion in their minds between cardinal and ordinal numbers. (In what follows the terms cardinal and ordinal are used in their common linguistic sense and not the more formal sense of mathematical set theory). Cardinal numbers, one, two, three … and so on, are used to count the number of objects in a collection. If, for example, your collection is the cookie jar there can be zero or nil cookies if the jar is, sadly, empty. Ordinal numbers list the positions of objects in an ordered collection, first, second, third … and so on. It requires only a modicum of thought to realise that there cannot be a zeroeth object: if it doesn’t exist it doesn’t have a position in the collection.

This distinction between cardinal and ordinal numbers becomes confused when we talk about historical years. We refer to the year five hundred CE when in fact we should be saying the five hundredth year CE, as it is an ordinal and not a cardinal. Remember our little friend Dennis’ AD, Anno Domini Nostri Iesu Christi (“In the Year of Our Lord Jesus Christ”)! We are enumerating the members of an ordered set not counting the number of objects in a collection. Because this is the case there cannot be a zeroeth year. End of discussion!
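For any programmers among the readers, the practical consequence of the missing zeroeth year is a classic off-by-one trap. A little illustrative Python (the function and its sign convention are my own invention, not part of any calendar standard; astronomers sidestep the whole issue with ‘astronomical year numbering’, which does include a year 0):

```python
def years_between(start, end):
    """Elapsed years between two calendar years given as signed integers
    (BC years negative, AD years positive; there is no year 0).
    Purely illustrative; astronomical year numbering avoids the problem
    by defining 1 BC as year 0, 2 BC as year -1, and so on."""
    if start == 0 or end == 0:
        raise ValueError("there is no year 0 in the BC/AD scheme")
    span = end - start
    # Crossing the BC/AD boundary skips the non-existent year 0,
    # so naive subtraction over-counts by one.
    if start < 0 < end:
        span -= 1
    return span

# From 1 BC to AD 1 is one year, not two:
print(years_between(-1, 1))    # 1
# And the third millennium began in 2001, not 2000:
print(years_between(1, 2001))  # 2000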

That this error, and particularly the harebrained explanation for the supposedly missing year zero, should occur on any history website is bad enough, but that it occurs on a BBC website, an organisation that used to be world renowned for its informational reliability, is unforgivable. I say used to be because I don’t think it’s true any longer. I would be interested to know who is responsible for the history content of the BBC’s web presence, as it varies between sloppy, as here, and totally crap, as witnessed here and discussed here and here.

My second example is just as bad in terms of its source, coming as it does from the Windows to the Universe website, brought to you by the National Earth Science Teachers Association. You would think that such an educational body would take the trouble to make sure that the historical information that they provide and disseminate is accurate and correct. If you thought that, you would be wrong, as is amply demonstrated by their post on the Hellenistic astronomer Ptolemy.

Ptolemy was a Greek astronomer who lived between 85-165 A.D. He put together his own ideas with those of Aristotle and Hipparchus and formed the geocentric theory. This theory states that the Earth was at the center of the universe and all other heavenly bodies circled it, a model which held for 1400 years until the time of Copernicus.

Ptolemy is also famous for his work in geography. He was the first person to use longitude and latitude lines to identify places on the face of the Earth.

We don’t actually know when Ptolemaeus (Ptolemy) lived; the usual way to present his life is ‘fl. 150 CE’, where fl. means flourished. If you give dates for birth and death they should be given as circa, or c. To write them as above, 85–165 A.D., implies we know his exact dates of birth and death; we don’t! This is a trivial but, for historians, important point.

More important is the factual error in the second sentence: He … formed the geocentric theory. The geocentric theory had existed in Greek astronomy and cosmology for at least seven hundred years before Ptolemaeus wrote his Syntaxis Mathematiké (the Almagest). Ptolemaeus produced the most sophisticated mathematical model of the geocentric theory in antiquity but he didn’t form it. Those seven hundred years are not inconsequential (go back seven hundred years from now and you’ll be in 1314!) but represent seven hundred years of developments in cosmology and mathematical astronomy.

The last sentence contains an even worse error for teachers of the earth sciences. Ptolemaeus did indeed write a very important and highly influential geography book, his Geographike Hyphegesis. However, he was not “the first person to use longitude and latitude lines”. We cannot be one hundred per cent certain who did in fact first use longitude and latitude lines, but this innovation in cartography is usually attributed to a much earlier Alexandrian geographer, Eratosthenes, who lived about three hundred and fifty years before Ptolemaeus.

This is an example of truly terrible history of science brought to you by an organisation that says this about itself, “The National Earth Science Teachers Association is a nonprofit 501(c)(3) educational organization, founded in 1985, whose mission is to facilitate and advance excellence in Earth and Space Science education” [my emphasis]. I don’t know about you but my definition of excellence is somewhat other.



Filed under History of Astronomy, History of Cartography, History of science, Myths of Science

Mega inanity

Since the lead up to the Turing centennial in 2012 celebrating the birth of one of the great meta-mathematicians of the twentieth century, Alan Mathison Turing, I have observed with increasing horror the escalating hagiographic accounts of Turing’s undoubted historical achievements and the resulting perversion of the histories of twentieth-century science, mathematics and technology and in particular the history of computing.

This abhorrence on my part is not based on a mere nodding acquaintance with Turing’s name but on a deep and long-time engagement with the man and his work. I served my apprenticeship as a historian of science over many years in a research project on the history of formal or mathematical logic. Formal logic is one of the so-called formal sciences, the others being mathematics and informatics (or computer science). I have spent my whole life studying the history of mathematics with a special interest in the history of computing, both in its abstract form and in its technological realisation in all sorts of calculating aids and machines. I also devoted a substantial part of my formal study of philosophy to the study of the philosophy of mathematics and the logical, meta-logical and meta-mathematical problems that this discipline, some would say unfortunately, generates. The histories of all of these intellectual streams flowed together in the first half of the twentieth century in the work of such people as Leopold Löwenheim, Thoralf Skolem, Emil Post, Alfred Tarski, Kurt Gödel, Alonzo Church and Alan Turing amongst others. These people created a new discipline known as meta-mathematics whilst carrying out a programme delineated by David Hilbert.

Attempts to provide a solid foundation for mathematics using set theory and logic had run into serious problems with paradoxes. Hilbert thought the solution lay in developing each mathematical discipline as a strict axiomatic system and then proving that each axiomatic system possessed a set of required characteristics, thus ensuring the solidity and reliability of a given system. This concept of proving theorems about complete axiomatic systems is the meta- of meta-mathematics. The properties that Hilbert required for his axiomatic systems were consistency, which means the systems should be shown to be free of contradictions; completeness, meaning that all of the theorems that belong to a particular discipline are deducible from its axiom system; and finally decidability, meaning that for any well-formed statement within the system it should be possible to produce an algorithmic process to decide if the statement is true within the axiomatic system or not. An algorithm is like a cookery recipe: if you follow the steps correctly you will produce the right result.
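Decidability is easier to grasp with a system where it actually holds. Propositional logic, unlike arithmetic, is decidable, and the decision procedure is nothing more exotic than a truth table. A purely illustrative sketch (the encoding of formulas as Python functions is my own convenience, not any standard logical notation):

```python
from itertools import product

def imp(a, b):
    """Material implication a -> b."""
    return (not a) or b

def is_tautology(formula, variables):
    """Brute-force decision procedure for propositional logic: a formula
    is a tautology iff it evaluates to True under every assignment of
    truth values to its variables.  `formula` is a plain Python callable
    standing in for a parsed logical formula."""
    return all(
        formula(**dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# Peirce's law ((p -> q) -> p) -> p is a classical tautology:
peirce = lambda p, q: imp(imp(imp(p, q), p), p)
print(is_tautology(peirce, ["p", "q"]))                  # True
# Plain implication p -> q is not (it fails for p=True, q=False):
print(is_tautology(lambda p, q: imp(p, q), ["p", "q"]))  # False
```

Hilbert’s hope was that something like this mechanical checking could be had for all of mathematics; the meta-mathematicians showed it could not.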

The meta-mathematicians listed above showed by very ingenious methods that none of Hilbert’s aims could be fulfilled, bringing the dream of a secure foundation for mathematics crashing to the ground. Turing’s solution to the problem of decidability is an ingenious thought experiment, for which he is justifiably regarded as one of the meta-mathematical gods of the twentieth century. It was this work that led to him being employed as a code breaker at Bletchley Park during WW II and eventually to the fame and disaster of the rest of his too short life.

Unfortunately the attempts to restore Turing’s reputation since the centenary of his birth in 2012 have led to some terrible misrepresentations of his work and its consequences. I thought we had reached a low point in the ebb and flow of the centenary celebrations but the release of “The Imitation Game”, the Alan Turing biopic, has produced a new series of false and inaccurate statements in the reviews. I was pleasantly surprised to see several reviews which attempt to correct some of the worst historical errors in the film. You can read a collection of reviews of the film in the most recent edition of the weekly histories of science, technology and medicine links list Whewell’s Gazette. Not having seen the film yet I can’t comment on it, but I was stunned when I read the following paragraph from the abc NEWS review of the film written by Alyssa Newcomb. It’s so bad you can only file it under: you can’t make this shit up.

The “Turing Machine” was the first modern computer to logically process information, running on interchangeable software and essentially laying the groundwork for every computing device we have today — from laptops to smartphones.

Before I analyse this train wreck of a historical statement I would just like to emphasise that this is not the Little Piddlington School Gazette, whose enthusiastic but slightly slapdash twelve-year-old film critic got his facts a little mixed up, but a review that appeared on the website of a major American media company, and as such it is totally unacceptable however you view it.

The first compound statement contains a double whammy of mega-inane falsehood and I had real problems deciding where to begin, finally plumping for the “first modern computer to logically process information, running on interchangeable software”. Alan Turing had nothing to do with the first such machine, the honour going to Konrad Zuse’s Z3, which Zuse completed in 1941. The first such machine in whose design and construction Alan Turing was involved was the ACE, produced at the National Physical Laboratory in London in 1949. In the intervening years Atanasoff and Berry, Tommy Flowers, Howard Aiken, as well as Eckert and Mauchly, had all designed and constructed computers of various types and abilities. To credit Turing with the sole responsibility for our digital computer age is not only historically inaccurate but also highly insulting to all the others who made substantial and important contributions to the evolution of the computer. Many, many more than I’ve named here.

We now turn to the second error contained in this wonderfully inane opening statement and return to the subject of meta-mathematics. The “Turing Machine” is not a computer at all; it is Alan Turing’s truly ingenious thought-experiment solution to Hilbert’s decidability problem. Turing imagined a very simple machine that consists of a scanning-reading head and an infinite tape that runs under the scanning head. The head can read instructions on the tape and execute them, moving the tape right or left or doing nothing. The problem then reduces to the question of which sets of instructions on the tape eventually come to a stop (decidable) and which lead to an infinite loop (undecidable). Turing developed this idea into a machine capable of computing any computable function (a universal Turing Machine) and thus created a theoretical model for all computers. This is of course a long way from a practical, real mechanical realisation, i.e. a computer, but it does provide a theoretical measure with which to describe the capabilities of a mechanical computing device. A computer that is the equivalent of a Universal Turing Machine is called Turing complete. For example, Zuse’s Z3 was Turing complete whereas Colossus, the computer designed and constructed by Tommy Flowers for decoding work at Bletchley Park, was not.
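For the curious, the essence of the thought experiment can be captured in a few lines of code. The following toy simulator is my own illustrative encoding (a dictionary of rules and a sparse tape), not Turing’s original formulation, and the hard-wired step limit is precisely the point: deciding in general whether such a machine ever halts is impossible.

```python
def run_turing_machine(rules, tape, state="start", steps=10_000):
    """Minimal single-tape Turing machine simulator (a toy encoding).
    `rules` maps (state, symbol) to (symbol_to_write, head_move, next_state);
    the machine halts when no rule applies.  The step limit stands in for
    the undecidable question of whether the machine halts at all."""
    tape = dict(enumerate(tape))   # sparse tape; blank cells read as ' '
    head = 0
    for _ in range(steps):
        key = (state, tape.get(head, " "))
        if key not in rules:       # no applicable rule: the machine halts
            break
        write, move, state = rules[key]
        tape[head] = write
        head += {"L": -1, "R": 1, "N": 0}[move]
    else:
        raise RuntimeError("step limit reached; possibly a non-halting machine")
    return "".join(tape[i] for i in sorted(tape)).strip()

# Example machine: binary increment.  Run to the right end of the number,
# then carry back leftwards, flipping 1s to 0 until a 0 (or blank) is found.
rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", " "): (" ", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "N", "halt"),
    ("carry", " "): ("1", "N", "halt"),
}
print(run_turing_machine(rules, "1011"))  # 1100  (11 + 1 = 12 in binary)
```

A universal Turing machine is then simply a rule set clever enough to read any other machine’s rules off the tape and execute them, which is the theoretical seed of the stored-program idea.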

Turing’s work played and continues to play an important role in the theory of computation but historically it had very little effect on the development of real computers. Attributing the digital computer age to Turing and his work is not just historically wrong but is, as I already stated above, highly insulting to all of those who really did bring about that age. Turing is a fascinating, brilliant and, because of what happened to him as a result of the persecution of homosexuals, tragic figure in the histories of mathematics, logic and computing in the twentieth century, but attributing achievements to him that he didn’t make does not honour his memory, which certainly should be honoured, but ridicules it.

I should, in fairness to the author of the film review that I took as the motivation for this post, say that she seems to be channelling misinformation from the film distributors, as I’ve read very similar stupid claims in other previews and reviews of the film.


Filed under History of Computing, History of Logic, History of Mathematics, Myths of Science

Having lots of letters after your name doesn’t protect you from spouting rubbish

The eloquently excellent Elegant Fowl (aka Pete Langman @elegantfowl) just drew my attention to a piece of high-grade seventeenth-century history of science rubbish on the website of my favourite newspaper The Guardian. In the books section a certain Ian Mortimer has an article entitled The 10 greatest changes of the past 1,000 years. I must to my shame admit that I’d never heard of Ian Mortimer and had no idea who he is. However a quick trip to Wikipedia informed me that I am dealing with Dr Ian James Forrester Mortimer (BA, PhD, DLitt, Exeter MA, UCL), author of an impressive list of books, and that the article on the Guardian website is a promotion exercise for his latest tome Centuries of Change. Apparently collecting lots of letters after your name and being a hyper-prolific scribbler doesn’t prevent you from spouting rubbish when it comes to writing about the history of science. Shall we take a peek at what the highly eminent Mr Mortimer has to say about the seventeenth century that attracted the attention of the Elegant Fowl and has now provoked the ire of the Renaissance Mathematicus.

17th century: The scientific revolution

One thing that few people fully appreciate about the witchcraft craze that swept Europe in the late 16th and early 17th centuries is that it was not just a superstition. If someone you did not like died, and you were accused of their murder by witchcraft, it would have been of no use claiming that witchcraft does not exist, or that you did not believe in it. Witchcraft was recognised as existing in law – and to a greater or lesser extent, so were many superstitions. The 17th century saw many of these replaced by scientific theories. The old idea that the sun revolved around the Earth was finally disproved by Galileo. People facing life-threatening illnesses, who in 1600 had simply prayed to God for health, now chose to see a doctor. But the most important thing is that there was a widespread confidence in science. Only a handful of people could possibly have understood books such as Isaac Newton’s Philosophiae Naturalis Principia Mathematica, when it was published in 1687. But by 1700 people had a confidence that the foremost scientists did understand the world, even if they themselves did not, and that it was unnecessary to resort to superstitions to explain seemingly mysterious things.

Regular readers of this blog will be aware that I’m a gradualist and don’t actually believe in the scientific revolution, but for the purposes of this post we will just assume that there was a scientific revolution and that it did take place in the seventeenth century, although most of those who do believe in it think it started in the middle of the sixteenth century.

I find it mildly bizarre to devote nearly half of this paragraph to a rather primitive description of the witchcraft craze and to suggest that the scientific revolution did away with belief in witchcraft, given that several prominent propagators of the new science wrote extensively defending the existence of witches. I recommend Joseph Glanvill’s Saducismus triumphatus (1681) and Philosophical Considerations Touching the Being of Witches and Witchcraft (1666). Apart from witchcraft I can’t think of any superstition that was replaced by a scientific theory in the seventeenth century. However, it is the next brief sentence that cries out for my attention.

The old idea that the sun revolved around the Earth was finally disproved by Galileo.

By a strange coincidence I spent yesterday evening listening to a lecture by one of Germany’s leading historians of astronomy, Dr Jürgen Hamel (who has written almost as many books as Ian Mortimer), on why it was perfectly reasonable to reject the heliocentric theory of Copernicus in the first hundred years or more after it was published. He of course also explained that Galileo did not succeed in either disproving geocentricity or proving heliocentricity. Now anybody who has regularly visited this blog will know that I have already written quite a lot on this topic and I don’t intend to repeat myself here, but I recommend my ongoing series on the transition to heliocentricity (the next instalment is in the pipeline), in particular the post on the Sidereus Nuncius and the one on the Phases of Venus. Put very, very simply for those who have not been listening: GALILEO DID NOT DISPROVE THE OLD IDEA THAT THE SUN REVOLVED AROUND THE EARTH. I apologise for shouting but sometimes I just can’t help myself.

Quite frankly I find the next sentence totally mindboggling:

People facing life-threatening illnesses, who in 1600 had simply prayed to God for health, now chose to see a doctor.

Even more baffling, it appears that Ian Mortimer has written a prize-winning essay defending this thesis: “The Triumph of the Doctors” was awarded the 2004 Alexander Prize by the Royal Historical Society. In this essay he demonstrated that ill and injured people close to death shifted their hopes of physical salvation from an exclusively religious source of healing power (God, or Christ) to a predominantly human one (physicians and surgeons) over the period 1615–70, and argued that this shift of outlook was among the most profound changes western society has ever experienced. (Wikipedia) I haven’t read this masterpiece but colour me extremely sceptical.

We close out with a generalisation that simply doesn’t hold water:

[…] by 1700 people had a confidence that the foremost scientists did understand the world, even if they themselves did not, and that it was unnecessary to resort to superstitions to explain seemingly mysterious things.

They did? I really don’t think so. By 1700 the number of people who had “confidence that the foremost scientists did understand the world” was with certainty so minimal that one would have a great deal of difficulty expressing it as a percentage.

Mortimer’s handful of sentences on the 17th century and the scientific revolution has to be amongst the worst paragraphs on the evolution of science in this period that I have ever read.


Filed under History of Astronomy, History of medicine, History of science, Myths of Science

Little things matter – for want of a semicolon.

The Prof is back. A couple of years back Professor Christopher M. Graney, known to his friends as Chris, wrote a highly informative guest post for The Renaissance Mathematicus defending the honour of Tycho Brahe against his ignorant modern critics. In the meantime The Renaissance Mathematicus was able to lure him into coming all the way to Middle Franconia, from the depths of Kentucky, to entertain the locals with a couple of lectures on Early Modern telescope images, Airy discs and how this all applies to Galileo Galilei’s and Simon Marius’ interpretations of the stars that they saw through their telescopes in 1609-10, stirring stuff I can tell you. You can read all about it in his forthcoming book, Setting Aside All Authority: Giovanni Battista Riccioli and the Science against Copernicus in the Age of Galileo (forthcoming March 2015). While he was here he made some videos of The Renaissance Mathematicus waving his arms about and scratching his fleas that you can view on YouTube, if that sort of thing turns you on. In exchange for this act of personal humiliation The Renaissance Mathematicus demanded that he provide the readers of this blog with a new guest post and here it is. This time The Prof explains why it is important when doing historical research to actually look at the original documents and not to rely on secondary sources.


You have probably heard the expression “Don’t sweat the small stuff.” Sometimes the small stuff matters. Consider one of the more infamous statements from the history of science: the one, made on 24 February 1616 by a team of consultants for the Roman Inquisition, which declared the Copernican theory to be —


foolish and absurd in philosophy and formally heretical, because it expressly contradicts the doctrine of the Holy Scripture in many passages


— unless, that is, it was —


philosophically and scientifically untenable; and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture.


The first quote is from the noted scholar Albert Van Helden in the book Planetary Astronomy from the Renaissance to the Rise of Astrophysics, published by Cambridge University Press in 1989. That is certainly a first-rate source. The second is, more or less, from Maurice Finocchiaro, another very accomplished scholar, in his book The Galileo Affair: A Documentary History, published by the University of California Press, also in 1989. It is also a first-rate source.


I say, “more or less,” because Finocchiaro actually gives the translation as —


foolish and absurd in philosophy, and formally heretical since it explicitly contradicts in many places the sense of Holy Scripture.


But elsewhere in the book he substitutes “philosophically and scientifically untenable” for “foolish and absurd in philosophy” — “philosophy” in the seventeenth century included that which we would call “science” today. And still elsewhere he notes that the original document in the Vatican, in Latin, has a semicolon after the word “philosophia.”


Is Finocchiaro correct? After all, Van Helden’s translation conveys the impression that biblical contradiction is being given as a reason for ascribing both philosophical-scientific falsehood and theological heresy. But Finocchiaro’s translation conveys a different impression: that biblical contradiction is being given as a reason for ascribing theological heresy to a philosophically-scientifically false theory (I’m borrowing Finocchiaro’s phrasing here). I would say Van Helden’s translation, not Finocchiaro’s, is what people usually think of when they think of the infamous condemnation. But Finocchiaro’s made sense to me, based on my reading of anti-Copernican writers from that time.


I wanted to know if Finocchiaro is correct. But looking at sources that give the “original Latin” provided no answers. A review of different sources revealed a remarkable variety of punctuations. A few nineteenth-century sources show Finocchiaro’s semicolon after “philosophia.” One of these is Galileo Galilei und die römische Curie by Karl von Gebler, published in Stuttgart in 1877. Yet Galileo Galilei and the Roman Curia, by Karl von Gebler, published in London in 1879, shows no semicolon. Two editions of I documenti del processo di Galileo Galilei, edited by S. M. Pagano and published in Vatican City in 1984 and 2009, both disagree with Finocchiaro. That might seem to settle the matter — Finocchiaro must be wrong, since the Vatican would know what its documents say — except that the two editions also disagree with each other. The 1984 edition has no punctuation after “philosophia” (note Van Helden’s translation); the 2009 edition has a comma.


I contacted Finocchiaro. Was he certain about the semicolon? Yes — he had seen it himself. Did he have a copy of the original 1616 document? No.


I could find no published image of the original. That left one option: get a copy from the Vatican. How does one get a copy of an important historical document stored in the Vatican Secret Archives? Send the VSA an e-mail. For less than the cost of a cheap pizza, I had a super-high-resolution image of the infamous 24 February 1616 document condemning the Copernican system.


High-resolution images of this document are available here, on pages 17-19.

And yes, Finocchiaro is correct! But follow the link above to the high-resolution image, and you will find that it is understandable that the semicolon could be overlooked when casually studying the document. I had expected the document to be a bumptious masterpiece of calligraphy, with an imposing appearance of formality suitable for an Important Proclamation. In fact, it appears much like hastily scrawled meeting minutes. The writer of the document often dots his “i” letters well to the right of the letters themselves. When these fall over commas, they give the appearance of semicolons where none exist. Furthermore, the real semicolon after “philosophia” has a very elongated dot. But, study the chicken-scratch handwriting more closely, and it is clear that “philosophia” is followed by a real semicolon.

If you think it not so clear, there is a second reason to be sure that the “philosophia” semicolon is indeed a semicolon. Here is the original Latin, taken from the document, with my translation (I kept as close as possible to the original):

Sol est centrum mundi, et omnino immobilis motu locali. The sun is the center of the world, and entirely immobile insofar as location movement [i.e. movement from place to place; no comment here on rotation movement].
Censura: Omnes dixerunt dictam propositionem esse stultam et absurdam in Philosophia; et formaliter haereticam, quatenus contradicit expresse sententiis sacrae scripturae in multis locis, secundum proprietatem verborum, et secundum communem expositionem, et sensum, Sanctorum Patrum et Theologorum doctorum. Appraisal: All have said the stated proposition to be foolish and absurd in Philosophy; and formally heretical, since it expressly contradicts the sense of sacred scripture in many places, according to the quality of the words, and according to the common exposition, and understanding, of the Holy Fathers and the learned Theologians.
Terra non est centrum mundi, nec immobilis, sed secundum se Totam, movetur, etiam motu diurno. The earth is not the center of the world, and not immobile, but is moved along Whole itself, and also by diurnal motion.
Censura: Omnes dixerunt, hanc propositionem recipere eandem censuram in Philosophia; et spectando veritatem Theologicam, adminus esse in fide erroneam. Appraisal: All have said, this proposition to receive the same appraisal in Philosophy; and regarding Theological truth, at least to be erroneous in faith.

Note the parallel structure used here. There is a statement, and then an assessment of the statement; a second statement, and then an assessment of that statement. Each assessment first has a comment regarding philosophy, and then a comment regarding religion. The second assessment statement clearly has a semicolon after “philosophia” and before “et spectando” (plenty of secondary sources show this second semicolon). Parallel structure suggests that there should also be a semicolon in the first assessment statement, after “philosophia” and before “et formaliter.”

Now, two questions.

The first question is why secondary sources have almost always gotten the punctuation wrong. I will provide a speculative answer to this.

The consultants’ statement was issued as the Inquisition investigated a complaint filed against Galileo in 1615. Galileo had been exonerated, but the Inquisition decided to consult its experts for an opinion on the status of Copernicanism. Despite the consultants’ statement, the Inquisition issued no formal condemnation of the Copernican system. (However, the Congregation of the Index, the arm of the Vatican in charge of book censorship, issued a decree on 5 March 1616 declaring the Copernican system to be “false” and “altogether contrary to the Holy Scripture,” and censoring books that presented the Copernican system as being more than a hypothesis.) The consultants’ statement was filed away in the Inquisition archives. Two decades later, a paraphrase of the statement was made public. This was because, following the trial of Galileo, copies of the 22 June 1633 sentence against him were sent to papal nuncios and to inquisitors around Europe. The sentence, which was written in Italian rather than Latin, noted the opinion of the consultant team and included a paraphrase of their statement from 1616. Still later, Giovanni Battista Riccioli included in his 1651 Almagestum Novum a Latin translation of Galileo’s sentence. Riccioli’s translation was widely referenced for centuries, and it reads as though biblical contradiction is the reason for ascribing both philosophical-scientific falsehood and theological heresy. But it was a Latin translation of an Italian paraphrase of a Latin original. Translations into modern languages of Riccioli’s Latin version simply added a fourth layer of translation.

The original statement itself was not published until the middle of the nineteenth century. Now to speculate: I imagine that at that time scholars were both used to the Riccioli version and sure that science was firmly on the side of Copernicus. The original statement, with its semicolon, assesses first that the proposition is philosophically-scientifically untenable, and then that it is formally heretical since it contradicts Scripture. Indeed, I have found that in Latin from this time semicolons are often used much as we use periods, so it would not be completely out of line to render the consultants’ statement as —

[The Copernican theory is] philosophically and scientifically untenable. It is also formally heretical since it explicitly contradicts in many places the sense of Holy Scripture.

This makes little sense under the assumption that the Copernican system had the weight of scientific evidence behind it. I imagine this to be the reason why the statement has consistently been presented with altered punctuation — so that it reads in a manner that conforms to what modern readers believe to have been the case. If we know science was on the side of Copernicus, then the consultants must be saying that Copernicanism is untenable because it contradicts scripture. The chicken-scratch handwriting makes it easy to overlook the semicolon.

Today it is clear that in February 1616 science was not so firmly on the side of Copernicus. As Dennis Danielson and I discussed in the January issue of Scientific American (the article is available in French in Pour la Science and in German in Spektrum der Wissenschaft), and as I have written in a previous guest blog for the Renaissance Mathematicus, Tycho Brahe had formulated a potent anti-Copernican scientific argument. The argument was based on the fact that the Copernican theory seemed to imply that every star in a heliocentric universe, even the smallest, would be vastly larger than the sun. By contrast, Tycho found that in a geocentric universe the stars would have sizes consistent with the sun and larger planets. Moreover, Copernicans responded to this argument by appealing to God’s Power, saying that an infinite Creator could make giant stars. Tycho had said in print that all this was “absurd.” Indeed, most scientists today would probably classify as absurd a theory that creates a new class of giant bodies, and chalks them up to the power of God. This star size problem was definitely “in play” immediately prior to the 1616 condemnation. Simon Marius mentions it in his 1614 Mundus Jovialis. Georg Locher cites it as one of the main reasons to reject Copernicanism in his 1614 Disquisitiones Mathematicae. And Monsignor Francesco Ingoli brings it up in an essay he wrote to Galileo just prior to the condemnation (Galileo believed Ingoli to be influential in the rejection of the Copernican theory). No, these writers did not reject telescopic discoveries. They simply endorsed the Tychonic geocentric theory, which was compatible with those discoveries. Marius, for example, cites telescopic observations of the sizes of stars as supporting a Tychonic universe. Locher illustrates telescopic discoveries like the Jovian system and the phases of Venus, and endorses the Tychonic theory.
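Tycho’s star-size argument is, at bottom, simple geometry: a disc of a given apparent angular size must be physically larger the further away it is. The following Python sketch illustrates the shape of the argument. The specific numbers (a two-arcminute naked-eye stellar diameter, stars just beyond Saturn at roughly 12 AU in the Tychonic system, and a 700-fold greater minimum distance forced on the Copernican system by the absence of detectable parallax) are illustrative stand-ins for the kind of figures then in play, not Tycho’s own published values.

```python
import math

def physical_radius_au(distance_au, apparent_diameter_arcmin):
    """Physical radius implied by an apparent angular diameter seen at a given distance."""
    theta = math.radians(apparent_diameter_arcmin / 60.0)  # arcminutes -> radians
    return distance_au * math.tan(theta / 2.0)

APPARENT_DIAMETER = 2.0    # arcmin; a typical pre-telescopic estimate for a bright star (illustrative)
SUN_RADIUS_AU = 0.00465    # solar radius in astronomical units

# Tychonic geocentric cosmos: the stars sit just beyond Saturn (illustrative distance)
r_tycho = physical_radius_au(12, APPARENT_DIAMETER)            # roughly sun-sized
# Copernican cosmos: undetectable annual parallax pushes the stars ~700 times further out
r_copernicus = physical_radius_au(12 * 700, APPARENT_DIAMETER)  # hundreds of solar radii

print(f"Tychonic star radius:    {r_tycho / SUN_RADIUS_AU:.2f} solar radii")
print(f"Copernican star radius:  {r_copernicus / SUN_RADIUS_AU:.0f} solar radii")
```

On these illustrative numbers a Tychonic star comes out comparable to the sun, while a Copernican star has a radius larger than the earth’s entire orbit — exactly the “new class of giant bodies” that contemporaries found absurd.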

In light of this, the statement that the Copernican theory was “foolish and absurd in philosophy” (“philosophically and scientifically untenable”) makes a little more sense on its own. It essentially echoes Tycho Brahe, the most prominent astronomer of that time.

The second question is why, even granted all this, anyone should really care about a semicolon. Yes, readers of the Renaissance Mathematicus care because they love history of astronomy. Why should anyone else care? This is an important question. Indeed, in September I was in Germany, talking quite a bit with the Mathematicus, and in one conversation he mentioned how academic historians of science that he knows are facing real pressure at their institutions to justify their existence. Because, well, why should anyone care?

Here is the answer to that: In the United States, at least, science is increasingly burdened by the problem of “science deniers.” This was brought home to me yet again this semester. I was giving my students an assignment to make a video illustrating the phases of the moon and Venus by means of a ball and a light source. I went to YouTube to find an example of such a video, and quickly discovered that a “Bill Nye the Science Guy” video on moon phases will be accompanied by several links to videos demanding that NASA reveal the “truth” about the Apollo landings, as seen in this example:


No wonder so many of my students and so many of our visitors at my college’s observatory ask about whether the Apollo landings actually took place!

Whether they be the “Apollo deniers” I found on YouTube, or “9-11 Truthers,” or “vaccine deniers,” or those who assert science to support the universe being 6000 years old, all such deniers build their claims on the premise that in science, powerful forces conspire to cover up scientific truths. Science deniers see themselves as brave Copernicans, standing against the power of an Inquisition that is determined to hide scientific truth because it contradicts some Holy Writ.

The story of the Inquisition’s semicolon undermines an important narrative for science denial — the narrative that, at the beginning of the history of modern science, powerful forces indeed did conspire to suppress a scientific idea, declaring it to be “foolish and absurd” only because it was religiously inconvenient. Thus the semicolon story should undermine the entire idea of conspiracy and cover-up that is behind the science denial phenomenon. That’s a reason to care, a reason why we need good history of science, and a reason why sometimes we need to sweat the small stuff.

For a more academic treatment of this subject, with full references, images of different secondary sources and their different punctuations, etc., see “The Inquisition’s Semicolon: Punctuation, Translation, and Science in the 1616 Condemnation of the Copernican System.” An article on this work is also available on


Filed under History of Astronomy, Myths of Science, Renaissance Science

The unfortunate backlash in the historiography of Islamic science

Anybody with a basic knowledge of the history of Western science will know that there is a standard narrative of its development that goes something like this. Its roots are firmly planted in the cultures of ancient Egypt and Babylon and it bloomed for the first time in ancient Greece, reaching a peak in the work of Ptolemaeus in astronomy and Galen in medicine in the second-century CE. It then goes into decline along with the Roman Empire, effectively disappearing from Europe by the fifth-century CE. It began to re-emerge in the Islamic Empire[1] in the eighth-century CE, whence it was brought back into Europe beginning in the twelfth-century CE. In Europe it began to bloom again in the Renaissance, transforming into modern science in the so-called Scientific Revolution in the seventeenth-century. There is much that is questionable in this broad narrative but that is not the subject of this post.

In earlier versions of this narrative, its European propagators claimed that the Islamic scholars who appropriated Greek knowledge in the eighth-century and then passed it back to their European successors, beginning in the twelfth-century, only conserved that knowledge, effectively doing nothing with it and not increasing it. For these narrators their heroes of science were either ancient Greeks or Early Modern Europeans; Islamic scholars definitely did not belong to the pantheon. However, a later generation of historians of science began to research the work of those Islamic scholars, reading, transcribing, translating and analysing their work and showing that they had in fact made substantial contributions to many areas of science and mathematics, contributions that had flowed into modern European science along with the earlier Greek, Babylonian and Egyptian contributions. They also showed that Islamic scholars such as al-Biruni, al-Kindi, al-Haytham, Ibn Sina, al-Khwarizmi and many others stood on a level with such heroes of science as Archimedes, Ptolemaeus, Galen or Kepler, Galileo and Newton. Although this work redressed the balance, there is still much work to be done on the breadth and depth of Islamic science.

Unfortunately, the hagiographic, amateur, wannabe pop historians of science now entered the field, keen to atone for the sins of the earlier Eurocentric historical narrative, and began to exaggerate the achievements of the Islamic scholars to show how superior they were to the puny Europeans who stole their ideas, like the colonial bullies who stole their lands. There came into being a type of hagiographical popular history of Islamic science that owes more to the Thousand and One Nights than it does to any form of serious historical scholarship. I came across an example of this last week during the Gravity Fields Festival, an annual shindig put on in Grantham to celebrate the life and work of one Isaac Newton, late of that parish.

On Twitter Ammār ibn Aziz Ahmed (@Ammar_Ibn_AA) tweeted the following:

I’m sorry to let you know that Isaac Newton learned about gravity from the books of Ibn al-Haytham

I naturally responded in my usual graceless style that this statement was total rubbish, to which Ammār ibn Aziz Ahmed responded with a link to his ‘source’.

I answered this time somewhat more moderately that a very large part of that article is quite simply wrong. One of my Internet friends, a maths librarian (@MathsBooks) told me I was being unfair and that I should explain what was wrong with his source, so here I am.

The article in question is one of many potted biographies of al-Haytham that you can find dotted all over the Internet and which are mostly virtual clones of each other. They all contain the same collection of legends, half-truths, myths and straightforward lies, usually without sources, or, as in this case, quoting bad popular books written by a non-historian as their source. It is fairly obvious that they all plagiarise each other without bothering to consult original sources or the work done by real historians of science on the life and work of al-Haytham.

The biography of al-Haytham is, like that of most medieval Islamic scholars, badly documented and very patchy at best. Like most popular accounts this article starts with the legend of al-Haytham’s feigned madness and ten-year incarceration. This legend is not mentioned in all the biographical sources and should be viewed with extreme scepticism by anybody seriously interested in the man and his work. The article then moves on to the most pernicious modern myth concerning al-Haytham that he was the ‘first real scientist’.

This claim is based on a misrepresentation of what al-Haytham did. He did not, as the article claims, introduce the scientific method, whatever that might be. For a limited part of his work al-Haytham used experiments to prove points; for the majority of it he reasoned in exactly the same way as the Greek philosophers whose heir he was. Even where he used the experimental method he was doing nothing that could not be found in the work of Archimedes or Ptolemaeus. There is also an interesting discussion outlined in Peter Dear’s Discipline and Experience (1995) as to whether al-Haytham used or understood experiments in the same ways as researchers in the seventeenth-century; Dear concludes that he did not (pp. 51-53). It is, however, interesting to sketch how this ‘misunderstanding’ came about.

The original narrative of the development of Western science not only denied the contribution of the Islamic Empire but also claimed that the Middle Ages totally rejected science, modern science only emerging after the Renaissance had reclaimed the Greek scientific inheritance. The nineteenth-century French physicist and historian of science, Pierre Duhem, was the first to challenge this fairy tale, claiming instead, based on his own researches, that the Scientific Revolution didn’t take place in the seventeenth-century but in the High Middle Ages, “the mechanics and physics of which modern times are justifiably proud to proceed, by an uninterrupted series of scarcely perceptible improvements, from doctrines professed in the heart of the medieval schools.” After the Second World War Duhem’s thesis was modernised by the Australian historian of science, Alistair C. Crombie, whose studies on medieval science in general and Robert Grosseteste in particular set a new high-water mark in the history of science. Crombie attributed the origins of modern science and the scientific method to Grosseteste and Roger Bacon in the twelfth and thirteenth-centuries, a view that has been somewhat modified and watered down by more recent historians, such as David Lindberg. Enter Matthias Schramm.

Matthias Schramm was a German historian of science who wrote his doctoral thesis on al-Haytham. A fan of Crombie’s work, Schramm argued that the principal scientific work of Grosseteste and Bacon in physical optics was based on the work of al-Haytham, correct for Bacon not so for Grosseteste, and so he should be viewed as the originator of the scientific method and not they. He makes this claim in the introduction to his Ibn al-Haythams Weg zur Physik (1964), but doesn’t really substantiate it in the book itself. (And yes, I have read it!) Al-Haytham’s use of experiment is very limited and to credit him with being the inventor of the scientific method is a step too far. However, since Schramm made his claims they have been expanded, exaggerated and repeated ad nauseam by the al-Haytham hagiographers.

We now move on to what is without doubt al-Haytham’s greatest achievement, his Book of Optics, the most important work on physical optics written between Ptolemaeus in the second-century CE and Kepler in the seventeenth-century. Our author writes:

In his book, The Book of Optics, he was the first to disprove the ancient Greek idea that light comes out of the eye, bounces off objects, and comes back to the eye. He delved further into the way the eye itself works. Using dissections and the knowledge of previous scholars, he was able to begin to explain how light enters the eye, is focused, and is projected to the back of the eye.

Here our author demonstrates very clearly that he really has no idea what he is talking about. It should be very easy to write a clear and correct synopsis of al-Haytham’s achievements, as there is a considerable amount of very good literature on his Book of Optics, but our author gets it wrong[2].

Al-Haytham didn’t prove or disprove anything; he rationally argued for a plausible hypothesis concerning light and vision, which was later proved to be, to a large extent, correct by others. The idea that vision consists of rays (not light) coming out of the eyes (extramission) is only one of several ideas used to explain vision by Greek thinkers. That vision is the product of light entering the eyes (intromission) also originates with the Greeks. The idea that light bounces off every point of an object in every direction comes from al-Haytham’s Islamic predecessor al-Kindi. Al-Haytham’s great achievement was to combine an intromission theory of vision with the geometrical optics of Euclid, Heron and Ptolemaeus (who had supported an extramission theory), integrating al-Kindi’s punctiform theory of light reflection. In its essence, this theory is fundamentally correct. The second part of the paragraph quoted above, on the structure and function of the eye, is pure fantasy and bears no relation to al-Haytham’s work. His views on the subject were largely borrowed from Galen and were substantially wrong.

Next up we have the pinhole camera, or better camera obscura. Although al-Haytham was probably the first to systematically investigate the camera obscura, its basic principle was already known to the Chinese philosopher Mo-Ti in the fifth-century BCE and to Aristotle in the fourth-century BCE. The claims for al-Haytham’s studies of atmospheric refraction are also hopelessly exaggerated.

We then have an interesting statement on the impact of al-Haytham’s optics; the author writes:

The translation of The Book of Optics had a huge impact on Europe. From it, later European scholars were able to build the same devices as he did, and understand the way light works. From this, such important things as eyeglasses, magnifying glasses, telescopes, and cameras were developed.

The Book of Optics did indeed have a massive impact on European optics in Latin translation, from the work of Bacon in the thirteenth-century up to Kepler in the seventeenth-century, and this is the principal reason why he counts as one of the very important figures in the history of science. However, I wonder what devices the author is referring to here; I know of none. Interesting in this context is that The Book of Optics appears to have had very little impact on the development of physical optics in the Islamic Empire. One of the anomalies in the history of science and technology is the fact that, as far as we know, the developments in optical physics made by al-Haytham, Bacon, Witelo, Kepler et al had no influence on the invention of optical instruments (glasses, magnifying glasses, the telescope), which were developed along a parallel but totally separate path.

Moving out of optics, we get told about al-Haytham’s work in astronomy. It is true that he, like many other Islamic astronomers, criticised Ptolemaeus and suggested changes in his system, but his influence was small in comparison to other Islamic astronomers. What follows is a collection of total rubbish.

He had a great influence on Isaac Newton, who was aware of Ibn al-Haytham’s works.

He was not an influence on Newton. Newton would have been aware of al-Haytham’s work in optics but by the time Newton did his own work in this field al-Haytham’s work had been superseded by that of Kepler, Scheiner, Descartes and Gregory amongst others.

He studied the basis of calculus, which would later lead to the engineering formulas and methods used today.

Al-Haytham did not study the basis of calculus!

He also wrote about the laws governing the movement of bodies (later known as Newton’s 3 laws of motion)

Like many others before and after him al-Haytham did discuss motion, but he did not come anywhere near formulating Newton’s laws of motion; this claim is just pure bullshit.

and the attraction between two bodies – gravity. It was not, in fact, the apple that fell from the tree that told Newton about gravity, but the books of Ibn al-Haytham.

We’re back in bullshit territory again!

If anybody thinks I should give a more detailed refutation of these claims and not just dismiss them as bullshit, I can’t, because al-Haytham never ever did the things being claimed. If you think he did then please show me where, and I will be prepared to discuss the matter; till then I’ll stick to my bullshit!

I shall examine one more claim from this ghastly piece of hagiography. Our author writes the following:

When his books were translated into Latin as the Spanish conquered Muslim lands in the Iberian Peninsula, he was not referred to by his name, but rather as “Alhazen”. The practice of changing the names of great Muslim scholars to more European sounding names was common in the European Renaissance, as a means to discredit Muslims and erase their contributions to Christian Europe.

Alhazen is merely the attempt by the unknown Latin translator of The Book of Optics to transliterate the Arabic name al-Haytham; there was no discrimination intended or attempted.

Abū ʿAlī al-Ḥasan ibn al-Ḥasan ibn al-Haytham is without any doubt an important figure in the history of science whose contribution, particularly those in physical optics, should be known to anybody taking a serious interest in the subject, but he is not well served by inaccurate, factually false, hagiographic crap like that presented in the article I have briefly discussed here.






[1] Throughout this post I will refer to Islamic science, an inadequate but conventional term. An alternative would be Arabic science, which is equally problematic. Both terms refer to the science produced within the Islamic Empire, which was mostly written in Arabic, as European science in the Middle Ages was mostly written in Latin. The terms do not intend to imply that all of the authors were Muslims, many of them were not, or Arabs, again many of them were not.

[2] For a good account of the history of optics including a detailed analysis of al-Haytham’s contributions read David C. Lindberg’s Theories of Vision: From al-Kindi to Kepler, University of Chicago Press, 1976.


Filed under History of Optics, History of Physics, Mediaeval Science, Myths of Science, Renaissance Science

Jesuit Day

Adam Richter (@AdamDRichter) of the Wallifaction Blog (he researches John Wallis) tells me that the Society of Jesus, known colloquially as the Jesuits, was officially recognised by Pope Paul III on 27th September 1540. He gives a short list of Jesuits who have contributed to the history of science over the centuries. Since this blog started I have attempted to draw my readers’ attention to those contributions by profiling individual Jesuits and their contributions and also on occasions defending them against their largely ignorant critics. I have decided to use this anniversary to feature those posts once again for those who came later to this blog and might not have discovered them yet.

My very first substantive post on this blog was about Christoph Clavius, the Jesuit professor of mathematics at the Collegio Romano, the Jesuit university in Rome, who as an educational reformer introduced the mathematical sciences into the curricula of Catholic schools and universities in the Early Modern Period. I wrote about Clavius then because I was giving a lecture on him at The Remeis Observatory in Bamberg, his hometown, as part of the International Year of Astronomy. I shall be giving another lecture on Clavius in Nürnberg at the Nicolaus Copernicus Planetarium at 7:00 pm on 12 November 2014 as part of the “GestHirne über Franken – Leitfossilien fränkischer Astronomie“ series. If you’re in the area you’re welcome to come along and throw peanuts.

I wrote a more general rant on the Jesuits’ contributions to science in response to some ignorant Jesuit bashing from prominent philosopher and gnu atheist A. C. Grayling, which also links to a guest post I wrote on Evolving Thoughts criticising an earlier Grayling attack on them. This post also has a sequel.

One of Clavius’ star pupils was Matteo Ricci who I featured in this post.

A prominent Jesuit astronomer, later in the seventeenth-century, was Riccioli, who put the names on the moon. I have also blogged about Chris Graney’s translation of Riccioli’s 126 arguments pro and contra heliocentricity. Chris, a friend and guest blogger on the Renaissance Mathematicus, has got a book coming out next year from the University of Notre Dame Press entitled Setting Aside All Authority: Giovanni Battista Riccioli and the Science against Copernicus in the Age of Galileo. It’s going to be a good one, so look out for it.

Riccioli’s partner in crime was another Jesuit, Francesco Maria Grimaldi, who features in this post on Refraction, refrangibility, diffraction or inflexion.

At the end of the seventeenth-century the Jesuit mathematician, Giovanni Girolamo Saccheri, without quite realising what he had achieved, came very close to discovering non-Euclidian geometry.

In the eighteenth-century a towering figure of European science was the Croatian Jesuit polymath, Ruđer Josip Bošković.

This is by no means all of the prominent Jesuit scientists in the Early Modern Period and I shall no doubt return to one or other of them in future posts.




Filed under History of Astronomy, History of science, Myths of Science, Renaissance Science

If you’re going to pontificate about the history of science then at least get your facts right!

Recently, my attention was drawn to an article by Pascal-Emmanuel Gobry, on The Week website, telling the world what the real meaning of ‘science’ is (h/t Peter Broks @peterbroks). According to Mr Gobry science is the process through which we derive reliable predictive rules through controlled experimentation [his emphasis]. This definition is of course totally inadequate but I’m not going to try and correct it in what follows; I gave up trying to find a simple all-encompassing definition of science, a hopeless endeavour, a long time ago. However Mr Gobry takes us on a whirlwind tour of the history of science that is, to say the least, bizarre, not to mention horribly inaccurate and in almost all of its details false. It is this part of his article that I’m going to look at here. He writes:

A little history: The first proto-scientist was the Greek intellectual Aristotle, who wrote many manuals of his observations of the natural world and who also was the first person to propose a systematic epistemology, i.e., a philosophy of what science is and how people should go about it. Aristotle’s definition of science became famous in its Latin translation as: rerum cognoscere causas, or, “knowledge of the ultimate causes of things.” For this, you can often see in manuals Aristotle described as the Father of Science.

The problem with that is that it’s absolutely not true. Aristotelian “science” was a major setback for all of human civilization. For Aristotle, science started with empirical investigation and then used theoretical speculation to decide what things are caused by.

What we now know as the “scientific revolution” was a repudiation of Aristotle: science, not as knowledge of the ultimate causes of things but as the production of reliable predictive rules through controlled experimentation.

Galileo disproved Aristotle’s “demonstration” that heavier objects should fall faster than light ones by creating a subtle controlled experiment (contrary to legend, he did not simply drop two objects from the Tower of Pisa). What was so important about this Galileo Moment was not that Galileo was right and Aristotle wrong; what was so important was how Galileo proved Aristotle wrong: through experiment.

This method of doing science was then formalized by one of the greatest thinkers in history, Francis Bacon.

Where to start? We will follow the Red King’s advice to Alice, “Begin at the beginning,” the King said, very gravely, “and go on till you come to the end: then stop.”

Ignoring the fact that it is highly anachronistic to refer to anybody as a scientist, even if you qualify it with a proto-, before 1834, the very first sentence is definitively wrong. Sticking with Mr Gobry’s terminology, Aristotle was by no means the first proto-scientist. In fact it would be immensely difficult to determine exactly who deserves this honour. Traditional legend or mythology attributes this title to Thales amongst the Greeks but ignores Babylonian, Indian and Chinese thinkers who might have a prior claim. Just staying within the realms of Greek thought, Eudoxus and Empedocles, who both had a large influence on Aristotle, have as much right to be labelled proto-scientists and definitely lived earlier than him. Aristotle was also by no means the first person to propose a systematic epistemology. It would appear that Mr Gobry slept through most of his Greek philosophy classes, that’s if he ever took any, which reading what he wrote I somehow doubt.

We then get told that Aristotelian “science” was a major setback for all of human civilization. Now a lot of what Aristotle said and a lot of his methodology turned out in the long run to be wrong but that is true of almost all major figures in the history of science. Aristotle put forward ideas and concepts in a fairly systematic manner for people to accept or reject as they saw fit. He laid down a basis for rational discussion, a discussion that would, with time, propel science, that is our understanding of the world in which we live, forwards. I’m sorry Mr Gobry, but a Bronze Age thinker living on the fertile plains between the Tigris and the Euphrates is not going to come up with the theory of Quantum Electrodynamics whilst herding his goats; science doesn’t work like that. Somebody suggests an explanatory model, which others criticise and improve, sometimes replacing it with a new model with greater explanatory power, breadth, depth or whatever. Aristotle’s models and methodologies were very good ones for the time in which he lived and for the knowledge basis available to him and without him or somebody like him, even if he were wrong, no science would have developed.

Gobry is right in saying that the traditional interpretation of the so-called scientific revolution consisted of a repudiation of Aristotelian philosophy, a point of view that has become somewhat more differentiated in more recent research, a complex problem that I don’t want to go into now. However he is wrong to suggest that Aristotle’s epistemology was replaced by reliable predictive rules through controlled experimentation. Science in the Early Modern Period still had a strong non-experimental metaphysical core. Kepler, for example, didn’t arrive at his three laws of planetary motion through experimentation but by deriving rules from empirical observations.

Gobry’s next claim would be hilarious if he didn’t mean it seriously. Galileo disproved Aristotle’s “demonstration” that heavier objects should fall faster than light ones by creating a subtle controlled experiment (contrary to legend, he did not simply drop two objects from the Tower of Pisa). Aristotle never demonstrated the fact that heavier objects fall faster than light ones; he observed it. In fact Mr Gobry could observe it for himself anytime he wants. He just needs to carry out the experiment. In the real world heavier objects do fall faster than light ones, largely because of air resistance. What Aristotle describes is an informal form of Stokes’ Law, which describes motion in a viscous fluid, air being a viscous fluid. Aristotle wasn’t wrong; he was just describing fall in the real world. What makes Gobry’s claim hilarious is that Galileo challenged this aspect of Aristotle’s theories of motion not with experimentation but with a legendary thought experiment. He couldn’t have disproved it with an experiment because he didn’t have the necessary vacuum chamber. Objects of differing weight only fall at the same rate in a vacuum. The experimentation to which Gobry is referring is Galileo’s use of an inclined plane to determine the laws of fall, a different thing altogether.
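Aristotle’s everyday observation can even be put in rough quantitative terms. The sketch below is my own illustration, not anything from Galileo or Aristotle: it uses Stokes’ law, which is only valid for small spheres falling slowly in a viscous fluid (and neglects buoyancy), to show that a sphere’s terminal speed in air scales with its density, so of two equally sized spheres the denser, i.e. heavier, one really does fall faster.

```python
import math

def stokes_terminal_speed(radius_m, density_kg_m3, mu_air=1.8e-5, g=9.81):
    """Terminal speed under Stokes drag: weight m*g balances drag 6*pi*mu*r*v.

    Only valid at low Reynolds number, i.e. for small, slowly falling spheres;
    buoyancy is neglected for simplicity.
    """
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    return mass * g / (6.0 * math.pi * mu_air * radius_m)

r = 20e-6  # a 20-micrometre sphere, small enough for Stokes' law to roughly apply
v_light = stokes_terminal_speed(r, 500.0)    # a light material
v_heavy = stokes_terminal_speed(r, 5000.0)   # a material ten times denser

print(f"light sphere: {v_light * 100:.2f} cm/s")
print(f"heavy sphere: {v_heavy * 100:.2f} cm/s")  # exactly ten times faster
```

Since terminal speed is proportional to density at fixed size, the density ratio carries straight through to the speed ratio, which is precisely the “heavier falls faster” behaviour Aristotle reported and that only disappears in a vacuum.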

We now arrive at Gobry’s biggest error, and one that produced snorts of indignation from my friend Pete Langman (@elegantfowl), a Bacon expert. Gobry tells us that Galileo proved Aristotle wrong: through experiment. This method of doing science was then formalized by one of the greatest thinkers in history, Francis Bacon. Galileo’s methodology of science was basically the hypothetico-deductive methodology that most people regard as the methodology of science today. Bacon, however, propagated an inductive methodology that consists of accumulating empirical data until a critical mass is reached and the theories, somehow, crystallise out by themselves. (Apologies to all real philosophers and epistemologists for these too short and highly inadequate descriptions!) These two epistemologies stood in stark contrast to each other and have even been considered contradictory. In reality, I think, scientific methodology consists of elements of both methodologies along with other things. However the main point is that Bacon did not formalise Galileo’s methodology but produced a completely different one of his own.

Apparently Mr Gobry also slept through his Early Modern Period philosophy classes.




Filed under History of science, Myths of Science