Watching the KLOCs

I’ve been a computer scientist for over 30 years. That’s a long time no matter how you cut it. And across those years I’ve come to accept some things, such as the fact that the difference in capability between any two given computer scientists can be an order of magnitude. I’ve also come to accept that no matter how hard we try, the number of errors per thousand lines of code (KLOC) has stayed pretty much constant. I believe this is for a number of reasons, but foremost among them is that we seem compelled to develop languages that are ever more verbose.

Now, I know a lot of people think more verbose languages are easier to program in. And I can agree, initially. But longer term, once you’re well versed in the art of being a computer scientist, languages just get in your way. You spend more time arguing with the language and the compiler than you do getting your algorithms down. To me, the purpose of computer science is to use tools that assist you, not impede you. What you want is what I’ve called “frictionless programming”.

The reason I’ve been thinking about this again is that years ago I gave a talk on how to write better software. I was always an interactive programming type of guy — I loved Lisp from the moment I first coded in it, and the Lisp Machine remains my favourite programming environment ever — but what I never understood is this insanity that grips CS types who seem to constantly want more and more verbose languages to code in.

It seems pretty much accepted that the number of bugs per thousand lines of code (KLOC) has remained rather static (or consistent, if you want a positive word for it). None of the various new programming paradigms have eliminated or even reduced bugs per KLOC, so you would think that writing more code would be anathema. But it’s not. We see languages like Java where folks write 10x to 100x more code than in other languages, such as Lisp or Ruby or Haskell. That would obviously equate to 10x to 100x more bugs. Why would any sane organization want to do this? It still escapes me. You’d think we’d go for more concise, dense languages, since the more verbose languages have been an utter failure in terms of producing less buggy code.
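As a toy sketch of what density buys you (my own illustrative example, not a benchmark), consider summing the squares of the even numbers in a list. In Haskell it is a single expression; the equivalent Java typically needs a class, a method, a loop and an accumulator before any actual logic appears.

    -- Sum the squares of the even numbers in a list: one line of logic.
    sumEvenSquares :: [Int] -> Int
    sumEvenSquares xs = sum [x * x | x <- xs, even x]

    main :: IO ()
    main = print (sumEvenSquares [1 .. 10])  -- prints 220

Snippets aren’t systems, of course; the 10x to 100x figures are about whole programs, where the boilerplate compounds.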

Which brings me to an old note I wrote years ago when I was thinking about programming languages for a research project I was doing in the early 2000s. My thinking revolved around:

  1. Text processing
  2. XML processing
  3. Data as code, code as data
  4. Efficiency: programmer and program execution
  5. Expressiveness
  6. Model (i.e., service vs. monolithic)
  7. Concise syntax

And after thinking long and hard I started to realize that the mistake we’ve been making as a field is moving away from languages like Lisp, Prolog and APL and towards C++ and Java. I know there are a lot of reasons why we moved, but looking back it seems the move may not have been for truly sound reasons. Sure, Lisp, Prolog and APL are harder to learn, but they’re much more expressive. The only “modern” languages in that same bin that aren’t Lisp-like are Smalltalk and maybe Haskell.

The problem, I believe, is what I’ve long termed “Managing To InfoWeek”. I used to argue that too many managers would simply opt for a new technology because they read about it in some computer magazine. The other issue is the notion of standardizing on a single language, because that somehow “makes sense”. I’ve often quipped that the best analogy for that kind of thinking is trying to build a house with just a hammer. When that failed, you’d switch to just screwdrivers. Then saws. Then drills. Etc. Each time the job might get done, but the result was a dog’s breakfast.

Again, too many folks don’t get that you should use the best tool for the job. You should pick languages that suit the sub-problems that comprise the solution you’re trying to build.

In other words, you don’t want to end up with teams stuck programming in a language whose only redeeming features are that it:

  1. is fairly well known
  2. has sufficient tools
  3. works on the target platform

Typically, (1) trounces everything else. If you have a team of Java programmers, guess what? You’re programming in Java. (2) is the least important, and (3) matters more than (2).

Today, I believe that only two things matter: the brevity/expressiveness of the language and a service-oriented model. Who cares where it runs, just so long as it runs. And if you truly use a services model you can speed up the system by offloading components onto additional hardware: optimization through hardware purchase. Besides, computers are super fast, so a minor loss in execution efficiency, say 10% or so, is more than made up for by being able to prototype and ship so much faster. New hardware will easily eliminate that 10% slowdown.

So I’ve been thinking about some of the problems I’ve worked on, and for most of them the language was immaterial. What you want is to build something fast that works. You want a language that suits the problem and that, once you become reasonably proficient, gets the hell out of your way.

In many cases you want to sit down and hammer out some code quickly — effectively sketch it out — and then try it out. You want your problem broken down and written as pieces. Each piece can communicate with every other piece as necessary. Thus, you can have a server, a client (as a test harness), and a Horn clause engine, say. Each piece would be written in a concise language, perhaps 100 lines of code each. Each can be easily tested and understood. And with a sufficiently concise language each component would be 10 to 100 times smaller than similar functionality coded in Java or C++.
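To give a feel for how small such a piece can be, here is a minimal sketch of that Horn clause engine, written in Haskell and restricted to propositional rules (no variables or unification); the rule format and the names are purely illustrative.

    import Data.List (nub)

    -- A rule says: if all the premises are known facts, the conclusion is too.
    -- A fact is just a rule with no premises.
    data Rule = Rule { premises :: [String], conclusion :: String }

    -- Forward chaining: keep firing every rule whose premises are all known
    -- until no new facts appear.
    saturate :: [Rule] -> [String] -> [String]
    saturate rules known
      | known' == known = known
      | otherwise       = saturate rules known'
      where
        known' = nub (known ++ [ conclusion r
                               | r <- rules
                               , all (`elem` known) (premises r) ])

    -- A goal is provable if saturating the rule base derives it.
    provable :: [Rule] -> String -> Bool
    provable rules goal = goal `elem` saturate rules []

    -- Tiny demo knowledge base.
    demo :: [Rule]
    demo =
      [ Rule [] "rain"
      , Rule ["rain"] "wet"
      , Rule ["wet", "cold"] "ice"
      ]

    main :: IO ()
    main = do
      print (provable demo "wet")  -- True: rain gives wet
      print (provable demo "ice")  -- False: nothing asserts cold

Thirty-odd lines, trivially testable on its own, and easy to swap out for a real Prolog if the problem outgrows it. That is the kind of piece I mean.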

Perhaps others have no issue with hammering out a few hundred thousand lines of code, but I’m not in the mood. I never have been and never will be. It’s just not a productive use of my time.

But what is interesting is that I’m not the only one having this “aha” moment. Nope. A lot of other folks are, as can be seen by the sudden surge of interest in functional programming. I’m reading about people checking out Lisp (or Clojure). Others are trying out Haskell. Some are even checking out languages such as Mathematica and R, realizing that writing a few lines of code and getting your answer is much better than writing thousands, tens of thousands or hundreds of thousands of lines and then fighting bugs.

Yes, languages such as Haskell, R, Clojure/Lisp, Prolog, etc. are “hard”. But life is hard. And once you learn a concise, powerful, expressive language you can solve problems more quickly with less code. And less code means fewer bugs, since we know the bug rate per KLOC has remained roughly the same for decades: if that rate is, say, 15 bugs per KLOC, a 1 KLOC solution ships with around 15 latent bugs while a functionally equivalent 100 KLOC solution ships with around 1,500.

As Einstein is often credited with saying, doing the same thing over and over again and expecting a different result is the surest sign of insanity. And isn’t that just what we in computer science have been doing for decades: designing ever more verbose languages, with tighter and tighter type systems, while watching the bugs/KLOC value stubbornly stay put?

So maybe the renewed interest in functional programming will also be a renaissance in conciseness, a realization that there is a benefit to learning a dense representation/syntax and applying it to problems. Mathematicians have known this for years, but we computer types seem to have refused to comprehend it. Maybe things are starting to change. I remain hopeful, though the overhang of legacy code may prove too daunting for the majority, leaving only a fortunate minority to throw off the shackles of verbosity and discover the beauty of concise, efficient, expressive languages, applied to problems that are only going to get more complicated and that will reward those who are quick to a solution rather than those with the most KLOCs behind one.
