Gottfried Leibniz and the origins of computational theory
Of the few minds straddling the divide between the arts and the sciences, fewer still deserve the title of Renaissance man to the extent of Leonardo da Vinci. Others who were creative as scientists and artists on the same scale were, and are, rare. Modern examples include Bertrand Russell, who found considerable and lasting acclaim as a mathematician, philosopher and writer. But combining excellence in two only tenuously related fields, as did Omar Khayyam, the 11th century Persian poet and mathematician, remains an almost impossible feat.
This makes the protean riches to be found in the work of the 17th century philosopher and universal genius Gottfried Leibniz (1646-1716) extremely difficult to comprehend. It sounds pretentious to say that 100 years of editing have proved insufficient to publish his enormous output, but it happens to be true. The complete edition of his works is estimated to run ultimately to more than 100 volumes, and not even half has yet been made available to the 21st century reader.
No one alive can claim to be a specialist in the work of Leibniz, since just reading his published output, let alone understanding it all, would surpass the stamina of all but the most dedicated scholars. The difficulty in understanding so much of his published work does not lie in the intrinsic difficulty of his writing: it is no more or less demanding than that of his famed, and rather hostile, contemporary Isaac Newton. The difficulty lies in the scope of his work, which extends far beyond the limits of 17th century philosophy: it covers the beginnings of mathematical logic, calculus, algebra, theoretical and practical linguistics, original historical work, extensive diplomatic activity, work in law and geology, and a substantial amount of political writing.
But most importantly, Leibniz is a seminal figure in the theory of computation. He was probably the first philosopher who was also a ranking mathematician - a feat all the more admirable given contemporary competition that included Pascal and Descartes - to perceive the links between algebra and logic: when we read his letters and essays on logic, essays rediscovered only during the latter half of the 19th century, we can clearly see Leibniz struggling to make intelligible the idea of a universal grammar underlying natural and artificial languages.
Leibniz and quite a few of his contemporaries realised that grammatical structures obeyed rules that looked analogous, if not identical, to the simple rules governing transformations of algebraic equations. Logic and grammar seemed much closer in the days of 17th and 18th century rationalism, when the two fields shared a considerable amount of scientific vocabulary. Classical and medieval theories of grammar and logic still retained their Aristotelian heritage, enabling a philosopher to see the links but leaving it to a mathematician to prove them.
THE GRAMMAR OF THOUGHT
The dream of a universal language was and is extremely old: Sanskrit grammarians believed that Sanskrit was the language of the gods, and that mastery of it paved the way to knowledge closed to speakers of impure languages. In a sense they regarded Sanskrit not merely as a universal language but as the original one, since they regarded all other languages as derivations from it. Language, to them, was the means by which thinking and knowledge became pure.
Of course, the story of the Tower of Babel reverberated in the minds of rationalist philosophers, whose education had been strongly shaped by the competing currents of Reformation and Counter-Reformation. The belief that all mankind had once spoken one language, and that a divine curse had shattered it into mutually unintelligible tongues, was entirely familiar to them. Others had tried to build universal languages, the 17th century Englishman John Wilkins being one of the most prominent.
But the need for a universal language had not only been perceived by scholars with an education in theology: the beginnings of European imperialism and the encounter with native American and East Asian languages had driven philosophers and linguists to deep reflection about the nature of language and symbolic systems.
The very idea of a symbolic alphabet reaching deeper into the mind than historically grown systems - the Roman alphabet or, in Leibniz's case, Chinese ideographs - was analysed for its usefulness.
Leibniz believed that underneath all human languages lay a grammar of thought, separable from the appearance of thought in natural languages and historically grown symbolic systems. He believed that a symbolic alphabet could be invented, denoting the ultimate, indivisible units of abstract thought. The close parallel to Greek atomism - applied to the world of ideas rather than matter - was not one Leibniz drew himself. But it is an instructive parallel, since indivisibility and the combination of indivisible elements into more meaningful complexes lend themselves to the simple explanation of complex phenomena, be they physical or purely mental. The laws of transformation would make up the grammar of human thought: unchangeable, reliable and possibly permitting the automatic solution of all problems accessible to the human mind.
A foundational symbolic alphabet and the laws of transformation that would turn the alphabet into a usable artificial language were, however, not to be considered laws of logic, for reasons which we will talk about shortly. The conceptual shorthand for this network of ideas, fragments of linguistic proto-science and raw 17th century generalisations is the grand idea of a universal language. This idea, often misunderstood and rarely explained, referred to an as-yet undiscovered artificial language, not the ideal languages redolent of projections of religious utopias into an imagined priestly Arcadia.
No previous examples of such a language were known to have come into existence. But there were suggestions that such a language might have been adumbrated in other cultures, even though such systems were usually regarded as the result of serendipity, not conscious philosophical reflection. Leibniz himself was intrigued by Jesuit reports on the Chinese writing system.
Considerable amounts of ink have been spilled over how much reliable information Leibniz received from the Jesuit missionary Joachim Bouvet about Neo-Confucianism, the I Ching and the linguistics of Sinitic languages. The fact that Chinese ideographs seemed to carry individual meanings that could be combined without reference to their phonetic values intrigued Leibniz. Indeed, it seemed possible to read classical Chinese without reference to the spoken language it represented, a feat not possible in any script created in Europe or the Middle East.
Leibniz was also intrigued by the presence of binary decision trees in commentaries on the I Ching, but saw the value of binary numbering systems mainly in philosophical systems, not in logic or mathematics. He soon realised, however, that the Chinese writing system did not fulfil the basic requirements of a universal language or a universal alphabet of thought. The reasons are varied, and they lead us directly to the basic question: why was Leibniz interested in the confluence of algebra, a grammar of thought and logic?
SYSTEM OF SIGNS
In Leibniz's early work there is an essay, dated to 1687, about the most general field of science possible: 'scientia generalis'. Leibniz regarded a general encyclopaedia of all the sciences as the most enduring scientific endeavour possible. It was generally thought impossible for one person to write such a thing, and Leibniz was aware of this opinion. He suggested that to accomplish the goal as a cooperative enterprise of this magnitude, all scientists would have to find a general method applicable to all sciences - this method being the scientia generalis.
The most general scientific method possible was supposed to expose to the trained mind the foundational scientific concepts that could not be analysed any further; in some ways, modern axiomatic systems common in mathematics and physics mirror those ideas.
This left the question of how to make foundational conceptual structures visible: hence the so-called 'characteristica universalis', roughly a system of unequivocal signs reflecting those foundational structures. Leibniz alludes in various places to Chinese, Egyptian and scientific notational systems as good examples of real characters that could make up usable parts of a more general, as yet undiscovered system of signs.
In most classical interpretations of Leibniz's writings, a third part is integrated into his work on theoretical linguistics and the theory of human reasoning: Leibniz postulated something called the 'calculus ratiocinator'. In essence, Leibniz asked how his foundational system of symbols could function as a scheme for resolving scientific, legal and philosophical problems. He imagined that problems could be fed into this calculus and, by means of mechanical inference, a solution obtained. This is not identical to the functions and notations chosen for a logical calculus - indeed, drawing inferences represents only a small part of logic. But the calculus ratiocinator was imagined by Leibniz not only as something like a notational scheme, but also - and this is rather important - as something to be implemented as a machine.
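The "mechanical inference" Leibniz imagined can be sketched, very loosely, in modern terms. The Python fragment below is purely illustrative - the function name, the facts and the rules are our inventions, not anything Leibniz wrote - but it shows the core idea: facts and if-then rules treated as bare symbols, with conclusions drawn by blind symbol manipulation (forward chaining).

```python
# Toy model of mechanical inference: symbols in, symbols out,
# no understanding required anywhere in the loop.

def infer(facts, rules):
    """facts: a set of symbols; rules: list of (premises, conclusion) pairs.
    Repeatedly fire any rule whose premises are all known, until nothing
    new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# If A and B yield C, and C yields D, then from {A, B} the machine
# derives D without ever "knowing" what any symbol means.
rules = [({"A", "B"}, "C"), ({"C"}, "D")]
assert "D" in infer({"A", "B"}, rules)
```

The point of the sketch is the design, not the scale: the loop is a purely syntactic procedure, which is why Leibniz could imagine it being carried out by a machine.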
Not only did Leibniz guess at the role of algebra in the formation of modern logic - a guess all the more astonishing since his familiarity with 17th century mathematics only improved in the 1680s and was scanty at best in his early life - but he also realised that certain versions of logic, which he called the calculus logicus, shared characteristics with fundamental arithmetic. This in turn enabled him to see the parallel between mechanical calculators, such as the one he built (the so-called Stepped Reckoner), and notational schemes depicting inferences using his characteristica universalis.
If much of this seems very familiar today, one should remember that we are talking about a late 17th century philosopher, not the 20th century's Goedel and Turing trying to figure out whether the foundations of arithmetic are mechanically decidable, using a complex function representing a theoretical machine to settle the question. That the question was answered in the negative was not Leibniz's fault; his dream was acknowledged by figures as far apart as Frege, Goedel and, in modern times, the computational theorist Gregory Chaitin.
The hardware, the Stepped Reckoner, was created by Leibniz and several craftsmen. The machine, of which he seems to have built two versions, could carry out the four fundamental operations of arithmetic - an advance on Pascal's engine, capable of executing just addition and subtraction. But Leibniz encountered the same problems that ultimately scuppered Charles Babbage's ideas: when Babbage tried to build his Analytical Engine during the 19th century, the parts could not be machined to the required degree of accuracy. Leibniz's much simpler calculator suffered from the same limitation.
NEGLECT AND REDISCOVERY
Leibniz's achievements went a lot further. Although in his early work the characteristica universalis resembled a symbolic notation, his deepening interest in mathematics began to yield surprising results. He developed two algebras, one of which turned out to be a full algebra of sets and the other a form of propositional calculus anticipating Boole's version by 160 years. Both were meant to serve as parts of his calculus logicus, providing the transformational rules for the characteristica universalis.
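The parallel between these two algebras is easy to exhibit with modern tools. A minimal sketch, assuming nothing beyond Python's built-in sets and booleans (the function names are ours, and this is our notation, not Leibniz's or Boole's): the same De Morgan law holds whether the operations are union, intersection and complement, or or, and and not.

```python
# One law, two algebras: sets on the left, propositions on the right.

def de_morgan_sets(universe, a, b):
    # complement of (a union b) equals (complement of a) intersect
    # (complement of b), taking complements relative to a fixed universe
    return universe - (a | b) == (universe - a) & (universe - b)

def de_morgan_props(p, q):
    # not (p or q) equals (not p) and (not q)
    return (not (p or q)) == ((not p) and (not q))

universe = set(range(8))
assert de_morgan_sets(universe, {1, 2, 3}, {3, 4})
# The propositional form holds for every truth-value assignment.
assert all(de_morgan_props(p, q) for p in (True, False) for q in (True, False))
```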
Boolean algebra remains vital in the development of programming languages to this day; given that research has uncovered the full extent of Leibniz's achievement only during the last 15 years, we should perhaps rename it Leibnizian algebra. To this we should add one very general idea that Leibniz played with, although he was not as sure of its possible applications as his admirer Babbage 120 years later: that of binary mathematics.
The idea stemmed originally from early Indian mathematics, and Leibniz did not regard it as relevant to his interests in mathematics, symbolic systems or calculators. He did, however, mention it in philosophical contexts, speculating that his metaphysics might become much more intelligible if one applied binary trees - at least, that is what we would call them these days - to the causal sequence by which metaphysical entities are produced.
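The binary reading of the I Ching that so intrigued Leibniz can be stated in a few lines. A hedged modern sketch (the function name and the bottom-line-first ordering are our assumptions, chosen for illustration): each hexagram is a stack of six lines, broken (0) or unbroken (1), which reads as a six-bit binary number.

```python
# A hexagram as a six-bit binary numeral: 64 hexagrams, 64 numbers.

def hexagram_to_int(lines):
    """lines: a sequence of six 0/1 values, bottom line first."""
    value = 0
    for i, bit in enumerate(lines):
        value += bit * 2 ** i  # each line contributes its power of two
    return value

# Six unbroken lines read as binary 111111, i.e. 63;
# six broken lines read as 000000, i.e. 0.
assert hexagram_to_int([1, 1, 1, 1, 1, 1]) == 63
assert hexagram_to_int([0, 0, 0, 0, 0, 0]) == 0
```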
Most foundational figures in the history of computer science have barely recoverable biographies. Some are shrouded in mystery; others worked for institutions whose purposes lay in the murky worlds of espionage and weapons development. Leibniz is a different figure simply because he is so important to the history of mathematics and philosophy. He published only two philosophical books in his lifetime, and his views were praised, attacked, vilified and freely quoted; his priority dispute with Newton over the invention of calculus did his reputation immense harm, but ensured that he is mentioned in every history of mathematics.
But this doesn't quite explain how Leibniz's ideas about symbolic languages, logic and calculating engines stayed in the background while his philosophy became required reading for first and second year philosophy students. For more than a century, those in the know were aware that Leibniz had done interesting work on logic, but the great reformers of mathematical logic, De Morgan and Boole, had little to no idea of his importance for logic and the theory of computation. Gottlob Frege, in his 1879 Begriffsschrift, did take up the concept of a characteristica universalis, and Goedel in his later life believed, somewhat erroneously, that Leibniz's ideas on logic had been suppressed for almost 200 years because they were too far ahead of their time. As late as 1960, one of the most widespread textbooks on the history of logic termed Leibniz's logic at best something best forgotten and at worst a possible distraction for the likes of Boole and De Morgan, had they known about it.
But much of this neglect was caused by the absence of new developments in either mathematical or philosophical logic. Moreover, the explosion in communications and its convergence with information technology only slowly became conceivable, with the appearance of Babbage's engines and of a global telephone network during the late 19th century.
Curiously, even 21st century historians of logic inherited a fairly severe prejudice against the (hitherto unpublished!) writings of Leibniz on logic. Even more surprisingly, his ideas on semiotics and universal languages were regarded as unsound theoretical linguistics; only with the rise of theoretical computer science did some rather surprising praise begin to be showered on Leibniz's early work. The linguist and political commentator Noam Chomsky greatly admired the logic and linguistics of the 17th century, although his views were not widely shared among linguists.
LEIBNIZ AS INSPIRATION
Leibniz's Discourse on Metaphysics shows an insight which was of some influence on the emergence of algorithmic information theory. In many ways, rationalist enlightenment philosophy was a giant protest against the apparent randomness of natural processes.
Newton had set everyone's mind at rest to some extent by making the cosmos predictable, but he was Leibniz's contemporary, not his predecessor. We should perhaps add that until Madame du Châtelet's translation of Newton's Principia into French, Newton's ideas and mathematics were not directly accessible to a wider audience in Europe: the language of educated conversation was French, and written Latin was retreating into law and academia.
But a stunning and fairly unprecedented observation has been resurrected in recent years, making Leibniz's work a goldmine for current scientific theorists. Leibniz observed that any finite set of points in a plane, regardless of whether it was produced by an apparently random or a law-like process, can be shown to be the product of an equation whose curve passes through every one of those points. This seemingly innocuous remark is followed by another observation: a complex equation describing complex data is not a law, since it does not possess the property of simplicity.
Conversely, if the equation is simple and the data complex, we are likely to have discovered a natural law. Why is this important? The modern theorist of computation Gregory Chaitin has tried to make the notion of complexity far more precise than was previously attempted, by using program size as a measure of complexity.
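Leibniz's observation about points and curves is, in modern terms, polynomial interpolation: through any n points with distinct x-coordinates there passes a polynomial of degree less than n. A small Python sketch using Lagrange's construction (not Leibniz's own method, and the function name is ours):

```python
# Any finite, "random-looking" data set lies exactly on some curve.

def lagrange(points):
    """Return a function interpolating the given (x, y) points,
    assuming all x-coordinates are distinct."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    # basis factor: 1 at xi, 0 at every other node xj
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Arbitrarily chosen data: the curve still passes through every point,
# which is exactly why "a curve exists" cannot by itself count as a law.
pts = [(0, 3.0), (1, -1.0), (2, 4.0), (3, 0.5)]
f = lagrange(pts)
assert all(abs(f(x) - y) < 1e-9 for x, y in pts)
```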
If a program is extremely short - and a program is little more than a particular kind of equation represented in a programming language - but the data it describes are very complex, we may have found an approximate measure of the minimum complexity necessary to describe the data in question. If we succeed in making this measure of minimum complexity accurate and its laws provable, we have created a new metric for complexity, and perhaps a more precise definition of natural laws. The maximum measure necessary to describe the data is obviously the size of the data itself. The question remains how often a particular mechanical procedure has to be run to find an optimal way to analyse the data. The result is a mechanically produced theory; the means is a program containing a loosely-defined brute-force search procedure.
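The brute-force search just described can be modelled in miniature. The sketch below is a deliberate toy, not Chaitin's actual formalism: "programs" are short arithmetic expressions over a fixed alphabet, enumerated in order of length, and the length of the first one that reproduces the data stands in for the data's complexity.

```python
from itertools import product

# Toy program-size complexity: search for the shortest "program"
# (arithmetic expression) whose output equals the target data.
# Never use eval on untrusted input in real code; this is a sandbox toy.

ALPHABET = "0123456789+*()"

def shortest_program(target, max_len=6):
    for length in range(1, max_len + 1):          # shortest candidates first
        for chars in product(ALPHABET, repeat=length):
            src = "".join(chars)
            try:
                if eval(src) == target:
                    return src                    # first hit is a shortest one
            except Exception:
                pass                              # most strings aren't programs
    return None

# The data 65536 is five characters long, but a four-character
# program reproduces it, so its "complexity" is at most 4.
assert shortest_program(65536, max_len=4) == "4**8"
```

The design choice mirrors the argument in the text: enumerating by length guarantees the first success is minimal, at the cost of exponential search time, which is why the text calls the procedure loosely-defined brute force.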
This is the insight Gregory Chaitin added to Leibniz's ideas, and he has specifically named Leibniz's prescience as his inspiration. Leibniz's characteristica universalis is in some ways the intellectual predecessor of Turing's universal machine, which is exactly what is needed to implement such a brute-force search procedure. And we have not reached the end of Leibniz's philosophical insights.