
Incompleteness Theorem and the Value of Programming Paradigms

Monday, August 12, 2013

Crisis of Mathematics Foundation


Recently, I have been reading about computational mathematics and its history. One interesting topic is the crisis in the foundations of mathematics in the early 1900s. What follows is my own interpretation. In this episode, the mathematician David Hilbert becomes worried that his field's foundation, proof from logic and axioms, is crumbling. Why does he think so? People have found paradoxes, unexplainable situations that arise from following the field's basic set of tools. This opens his field to attack: mathematics might be called 'nothing more than a useful tool' rather than 'essential truth about the universe'. Everyone wants to spend their time on subjects of importance, right?

So, if his field's basic tools, its axioms and theorems, were complete and consistent, the mechanics of anything could be explained and calculated. However, simple paradoxes were found which the field could not explain. For example, the barber paradox is a logic puzzle that is an application of Russell's paradox in set theory. Set theorists build theories that can describe the relationships between collections of anything. If you ask a set theorist to explain the barber paradox, he will start to question whether his system's basic tools 'make sense'.
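For the curious, Russell's paradox can be written in one line of set notation (my addition for concreteness, not part of the original story). Consider the set of all sets that do not contain themselves:

    R = { x | x ∉ x },  which gives  R ∈ R ⇔ R ∉ R.

Asking whether R contains itself leads to a contradiction either way. The barber who 'shaves exactly those who do not shave themselves' is the same trap dressed in everyday language: does he shave himself?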

So you can see that if the basic tools of a mathematician's field are shown to permit something illogical, he will worry that he is spending his time building a useless system, and he will search for a way to extend his rules to cover the logical hole.

Perceived Crisis of Programming Tools


Does this story sound familiar? I'm a computer programmer, and I often read blogs and such from other programmers. If you don't know, most programmers' discussion topics are "language X is better than language Y" or "tool X is better than tool Y". This is not bad, since a good programmer always asks "Can I do this better?", and of course he must listen when another programmer says "This way is better".

So, what's the answer? Which language is best? There is so much discussion on this topic that surely someone has found an answer by now. Sadly, no. If you ask a rational and experienced programmer, he will have concluded that "the right language depends on the problem you are solving".

Kurt Gödel's Logical Discovery


Back to the world of mathematics. Hilbert, like a good programmer, wanted to work with a perfect set of tools, a perfect set of axioms and theorems. What is a "perfect" set of axioms? It is one that is consistent and complete, such that its axioms can be composed to prove anything that is true. So Hilbert is asking for a single set of terms, a single language, to use for all of mathematics. Is this possible?

To answer Hilbert, the mathematician Kurt Gödel proved something interesting about mathematical logic. He showed that inherent limitations exist in any such set of tools, that is, in any system of axioms rich enough to describe natural-number arithmetic. In his incompleteness theorems, he proved that no consistent set of axioms of this kind can capture all truths. That is, as a user of these axioms, there will always be statements which are true but which you cannot prove using your set of axioms.
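Stated a little more precisely (my paraphrase, not Gödel's wording): for any consistent, effectively axiomatized theory T that is strong enough to express elementary arithmetic, there is a sentence G that is true of the natural numbers but that T cannot prove:

    T ⊬ G,  even though G is true.

The important restriction, which matters for the side note below, is that T must be this kind of formal arithmetic system.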

(Side note: This theorem has been cited as if it were an axiom to prove many things, from 'God exists and you need faith' to 'math and logic are useless'. However, I believe I am correct in saying that these usages are wrong, because the logic in Gödel's theorem means it can only be applied to formal systems of arithmetic, and only in the context of systematically generating a complete set of axioms.)

Judging Legitimacy of New Languages


How is this related to programming paradigms? Consider this extrapolation I made (possibly illogical, but still interesting) from Gödel's idea. Suppose I have a statement which I know is true, but I can't prove it using our set of axioms. What to do? After looking closely, I see that I can slightly modify one of our axioms to create a new one, thereby enabling me to prove my statement. Can I do this? Is my new branch of math a legitimate one? I say yes: if it can be used to describe the problem correctly in a cleaner, more consistent way, then its legitimacy is decided by its practicality and usefulness.

This story isn't wholly based in my imagination. Consider hyperbolic geometry, a relatively young branch of mathematics. It takes classical geometry, modifies a single axiom (the one defining parallel lines), and calls the result a new kind of math. This new system of geometry is very practical, as it provides tools for problems that were previously indescribable. Some of these problems are listed in this discussion.
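To make the single changed axiom concrete (my summary, not part of the linked discussion):

    Euclidean geometry (Playfair's form): through a point not on a line, exactly one parallel line exists.
    Hyperbolic geometry: through a point not on a line, more than one (in fact infinitely many) parallel lines exist.

Everything else in the axiom set stays the same; only this one assumption is swapped out.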

Changing a Single Axiom in Programming Languages


How awesome. By changing a single basic assumption, a new way of thinking is developed that easily explains difficult things. In my opinion, this is exactly the criterion we should use to decide whether a new programming language is worth adopting. A useful programming language should obey a single practical philosophy, that is, a single set of axioms.

For instance, consider Clojure. Traditional programming is based on manipulating variable values through a series of subroutines. Based on years of experience, the creator of Clojure decided that this idea of mutable state is really bad for modern applications, which increasingly rely on concurrently running processes. Clojure restricts the programmer to immutable data by default, which theoretically makes writing programs around shared state safer and easier.
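The paragraph above is about Clojure, but the core idea can be sketched in any language. Here is a minimal Java illustration (my own hypothetical example): an immutable value can be handed to several threads without locks, because nothing can change it after construction.

    // An immutable point: all fields are final and there are no setters.
    final class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
        // "Changing" a Point means building a new one, in the spirit of
        // Clojure's persistent data structures.
        Point withX(int newX) { return new Point(newX, y); }
    }

    public class SharedStateDemo {
        public static void main(String[] args) throws InterruptedException {
            Point origin = new Point(0, 0);
            // Two threads read the same object concurrently; since it can never
            // change, there is nothing to synchronize and no race condition.
            Thread a = new Thread(() -> System.out.println("x=" + origin.x));
            Thread b = new Thread(() -> System.out.println("y=" + origin.y));
            a.start(); b.start();
            a.join(); b.join();
        }
    }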

Another example is Erlang. Traditional programming is based on creating one executable program or process to run on a single machine. The practical world of software development created the notion of software services, such as web services, which should always be available. The creators of Erlang took this idea as essential. By constructing their language around it, they concluded that pure functions and fault-tolerant, isolated processes follow as consequences. By restricting the programmer to these ideas, Erlang programs theoretically have few concurrency bugs and very little downtime in production.
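Again sketching the idea outside its home language: Erlang's 'let it crash' supervision style can be imitated, crudely, by isolating a task and restarting it whenever it fails. This is a hypothetical Java illustration of the philosophy, not how Erlang's runtime actually works.

    import java.util.concurrent.ThreadLocalRandom;

    // A crude "supervisor": run the worker, and if it crashes, restart it.
    public class SupervisorDemo {
        static void worker() {
            // Simulate a unit of work that sometimes fails.
            if (ThreadLocalRandom.current().nextInt(4) == 0) {
                throw new IllegalStateException("worker crashed");
            }
            System.out.println("worker finished a unit of work");
        }

        public static void main(String[] args) {
            for (int i = 0; i < 10; i++) {
                try {
                    worker();
                } catch (RuntimeException e) {
                    // Instead of defensive coding inside the worker, let it crash
                    // and have the supervisor start it again in a clean state.
                    System.out.println("restarting worker after: " + e.getMessage());
                }
            }
        }
    }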

Final Thoughts


Difficult problems can become trivial problems if we change a single rule of the game. Applied to programming languages, I believe this requires more than adding a library to support the idea. Essential to injecting a new paradigm into a language is restricting the parts of the language which run contrary to the paradigm. John Carmack mentioned his research into functional programming languages in his QuakeCon 2013 keynote. He supports this idea of restricting the programmer when following a new paradigm:

    "Everything that is syntactically legal that the compiler will accept will eventually wind up in your codebase."

    "Languages talk about multi-paradigm as if it's a good thing, but multi-paradigm means you can always do the bad thing if you feel you really need to."

To liberally interpret his words, tool designers care about supporting a certain way of problem-solving, whereas programmers care about quickly and directly solving the problem at hand.

Changing Object Functionality in OOP Languages

Sunday, December 11, 2011

A while ago, I had a conversation with a coworker about code maintainability and other aspects of software craftsmanship. Suppose I write a method and think it's perfect. Inevitably, this method will have to change its functionality, probably due to a change in requirements. In cases like this, how should one approach the refactoring? I just came across the concept of the "open-closed" principle, conceived way back in 1988.
en.m.wikipedia.org/wiki/Open/closed_principle
I was originally influenced by the idea of immutable data structures, mostly from listening to Rich Hickey praise them in his talks on Clojure and its concepts. Immutable data structures are interesting because they can be shared among different methods and threads safely. Immutability helps remove the surprises caused by method/function side effects. Though I haven't run into this problem myself, it's a very attractive concept because it encourages simplicity in reading and maintaining existing code. Simplicity is king in the land of software development.
The open-closed principle, on the other hand, promotes simplicity in method/API design by promising not to change an API or its implementation once it is in use. This reduces the chance of creating bugs in the callers of your methods. If the implementation of a unit of code must change, the original principle instructs us to subclass and override the unit of code we wish to change. This sounds great and makes our code immutable, in a sense, but it seems to forget about another core feature of our programming languages: the type system.
This refinement of the open-closed principle appeared in the 1990s, according to Wikipedia. The revised version instructs us to define interfaces or abstract classes and create new implementations of them, rather than directly subclassing the class we want to change. This is a much better solution, as interfaces are a more precise way of expressing the functionality and purpose of an object.
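Here is a small Java sketch of the two styles (the class and interface names are made up for illustration): the first block follows the original "subclass and override" advice, the second follows the interface-based revision.

    // Original style: extend the concrete class and override the part that changed.
    class TaxCalculator {
        double taxFor(double amount) { return amount * 0.10; }
    }
    class TaxCalculator2013 extends TaxCalculator {
        @Override double taxFor(double amount) { return amount * 0.12; }
    }

    // Revised style: callers depend on an interface; new behavior is a new implementation.
    interface TaxPolicy {
        double taxFor(double amount);
    }
    class FlatTax implements TaxPolicy {
        public double taxFor(double amount) { return amount * 0.10; }
    }
    class ProgressiveTax implements TaxPolicy {
        public double taxFor(double amount) {
            return amount <= 10000 ? amount * 0.10 : amount * 0.12;
        }
    }

    public class Checkout {
        // Existing callers are closed to modification: they only know about TaxPolicy,
        // yet they remain open to extension through any new implementation passed in.
        static double total(double amount, TaxPolicy policy) {
            return amount + policy.taxFor(amount);
        }

        public static void main(String[] args) {
            System.out.println(total(100.0, new FlatTax()));
            System.out.println(total(100.0, new ProgressiveTax()));
        }
    }

In the second style, the caller's code never needs to know which concrete class it is using, which is exactly the precision of intent that interfaces buy us.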
It may be a challenge, but we should remember to use the tools that our languages provide. We should remember to use interfaces where appropriate.