Friday, April 5, 2013

The Perfect Programming Language: Premise

This is part two of a series: The Perfect Programming Language.

In my previous article, I offered a short opinion piece about how it is a problem that languages are not designed around the user interface- that is, the language itself is the programmer's UI for developing software. In reality, the paradigm that you program in is a facade that helps you organize your thoughts in a way that a compiler can understand. A facade should not be designed by an engineer.

I didn't talk about LISP, Haskell, or Lua in the previous post, and all three deserve mention. LISP makes it very clear what is happening: since all expressions are lists and all lists are expressions, the translation between programmer and compiler is easy to understand. Unfortunately, the aesthetics of the language are terrible- it is very ugly and symbolically verbose. Haskell is semantically beautiful, but its syntax and appearance are troublesome and complicated. Functions are expressions that are evaluated when they need to be, but we are building expressions upon expressions to achieve a greater goal, which can be conceptually difficult for a programmer to manage.

Lua, in my opinion, solves the problems of both languages, though with a few quirks. Its learning curve is nonetheless very short, despite the language boasting greater expressiveness than Python and performance that can approach C in many cases. Functions are first-class in Lua as in Haskell, and expressions can also be thought of as tables (the only aggregate data structure in Lua) in the way that lists work in LISP. A lot of my inspiration comes from Lua, but I don't like how its type system works. A table in Lua can be modified to work as a function, but we cannot implicitly treat a function, or any other data type, as a table- despite the fact that all types appear to have similar qualities. A Number feels like a table whose metatable carries arithmetic operators- but that's not really how it works. This, to me, creates type inflexibility where it doesn't feel like there should be any- or rather, where the syntax of the language doesn't suggest there should be any. Similarly, we aren't quite using tables as expressions as we would in LISP, though we get that sensation about 80% of the time. Lua's facade correlates directly with how it was engineered, rather than being designed around how it would look or feel. Therefore, many things look and feel the same, but behave differently.
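The callable-table asymmetry described above can be sketched outside Lua. Below is a minimal Python stand-in (the Table class, the "__call" key, and the handler are all invented for this sketch) for a table that is given call behavior, the way a Lua metatable's __call metamethod allows a table to be invoked like a function:

```python
# Sketch (Python standing in for Lua): a "table" made callable,
# analogous to putting a __call metamethod in a Lua table's metatable.
class Table(dict):
    """A dict that can be given call behavior, like a Lua table with a metatable."""
    def __call__(self, *args):
        handler = self.get("__call")
        if handler is None:
            raise TypeError("table is not callable")
        return handler(self, *args)

t = Table(factor=3)
t["__call"] = lambda self, x: self["factor"] * x
print(t(4))  # the table now acts as a function -> 12

# The asymmetry the article complains about: a table can be made to act
# like a function, but a plain number cannot be made to act like a table.
```

The reverse direction is the point of the complaint: nothing in this sketch (or in Lua) lets a number or a function transparently gain table-like indexing.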

Most programming languages provide us with a specific paradigm, but that paradigm isn't necessarily the best way to solve a given problem. The question is- do we really need a new language to get the optimal paradigm for a specific problem? In most cases a library or framework sufficiently bridges the gap, but the result can be a lot of bloat in the quantity and variety of symbols and syntax, and the flexibility of the framework becomes its own limitation. For nearly every program there is a unique paradigm that most effectively solves that problem, and any language designed around a general paradigm will not necessarily include that unique paradigm as a subset. All problems can be solved within a single general paradigm, but the general paradigm isn't going to be as concise as the solution itself. Solving a problem with programming is the refinement of a general paradigm into a specific one.

The question of solving a problem with programming starts with the meta-question of which paradigm to use. It follows that the most inclusive programming paradigm is one that attempts to answer the paradigm question directly. The implications are clear: the ability to express grammar, types, data structures, and memory management implicitly through syntax, thereby allowing programmers to quickly and easily develop their own rules for their own particular problems. We also want to accomplish this in the simplest way that allows for the greatest amount of emergence. Want special rules for if-statements? There is no reason why 'if', as a keyword, should receive special treatment- it can be expressed as a function, an expression, a statement, or a data structure. As a programmer, I want to be able to encapsulate all of these ideas within a single meta-paradigm- and it can all be done with a single type and a unified syntax.
I'm not yet talking about the engineering aspect, but about the user interface: I want to think of all types as expressions that represent how aggregations of data are evaluated.
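As a rough illustration of 'if' receiving no special treatment, here is a sketch (Python as a stand-in; the iff function and its thunk arguments are invented names) of conditional branching as an ordinary function:

```python
# Hedged sketch: 'if' as an ordinary function rather than a privileged keyword.
# Branches are passed as zero-argument functions (thunks) so that only the
# chosen branch is ever evaluated, as a real 'if' would guarantee.
def iff(condition, then_branch, else_branch):
    return then_branch() if condition else else_branch()

result = iff(2 > 1, lambda: "big", lambda: "small")
print(result)  # -> big
```

Because the branches are deferred, iff composes like any other expression- it can be stored, passed around, or redefined with different rules, which is exactly the flexibility a keyword denies us.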

Consider the following:
int foo() { return 4; }
const int bar = 4;

There isn't a good reason for a compiler to make a distinction between these two ideas. There also isn't a good reason for a compiler not to optimize them into inline, immutable, const, static, or whatever else makes them work better when translated to machine code. As such, I think it's important that such constructs not exist explicitly at all- relative to the UI. Since these two things serve essentially the same purpose, why am I bound by types and syntax in how I use them? They're nothing more than expressions- expressions that, when evaluated or referenced, describe a set of data. In this sense, the meta-language for our meta-paradigm really needs only a single type that describes expressions. Expressions need to be manageable in a way that makes sense to the programmer, and the idea behind vtables does just that. We'll find that the best way to represent expressions is with tables, and the best way to represent tables is with expressions. Thus we have the atomic building block for our meta-paradigmatic meta-language: the dual expression-table.
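A minimal sketch of this equivalence, using Python as a stand-in (the evaluate helper and the names foo and bar are hypothetical, mirroring the C declarations above): client code that evaluates expressions uniformly never needs to know which form the author chose.

```python
# Sketch of the article's point: a nullary function and a constant both
# "describe a set of data", so client code need not distinguish them.
def evaluate(expr):
    """Treat any value uniformly: call it if callable, otherwise use it as-is."""
    return expr() if callable(expr) else expr

foo = lambda: 4          # the function form:  int foo() { return 4; }
bar = 4                  # the constant form:  const int bar = 4;

print(evaluate(foo) + evaluate(bar))  # -> 8, same result either way
```

In a language built on the single expression type the article proposes, this uniform evaluation would be implicit rather than something a helper has to paper over.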

In the next article I will describe how, with just a single type, we can express some of the most important concepts in programming paradigms.


  1. You are right, there should be no distinction between the two, and that's an optimization that compilers can do these days. I know that with the right annotations, GCC will convert calls to foo() into the literal value 4, and Haskell will do the same without any further annotations.

    But even so, as written, GCC will *still* generate the code for foo(), and the "constant propagation" will only occur with code in the same file as the definition of foo(). Other files that call foo() will not see the same optimization because GCC cannot do such constant propagation at the linking stage.

    But change the definition to

    static int foo() { return 4; }

    and place it in a header file, and as long as no code takes the address of foo(), it will be inlined as a constant 4.

    Now, for a language that does what you want- one that hides the implementation of foo as either a function, a variable, or a constant- Forth does that. You can define FOO as a function or a constant, and as far as the rest of the program is concerned, FOO will always return 4. Now, this isn't done by the Forth compiler per se, but it is easy to change the underlying implementation of FOO without materially affecting the rest of the program. _Thinking Forth_ is an excellent book on this subject (and it's free on the Internet, just look for it).

    1. Functional programming languages should make this optimization, but since most of the world works in imperative languages, I think it's important to point it out. You're right about the static foo(), but I think that is poor affordance. It works as intended, but, to me, it lacks elegance. A compiler should determine this and a language should expect this. Clear rules about how optimizations work are an important part of a language.

      I stumbled across Forth recently and will definitely read that book! I actually read the entire Lua specification in one sitting because I found it so thrilling.