Programming Sucks

I had sort of an epiphany while trying to debug some research code I've been working on: programming sucks.

I don't mean that it's not useful, or can't be enjoyable, just that it's frustratingly difficult, and as a field, computer science should be able to do better. Of course I don't know how, but I can point out some of the biggest flaws in how we create programs.

As a bit of context, the specific code I'm dealing with is physics-based animation research code written exclusively by me in C++, using a few well established libraries (OpenGL, Box2D) around the edges. I'll admit, I'm probably not the world's greatest coder, and if I told an experienced software architect how the system is supposed to work, he could probably rewrite my project in a maintainable and extensible fashion in a few days. But I didn't start with a detailed requirements document; I started with some ideas and a few vague directions, then implemented several failed strategies and bolted on extensions as they were needed. Given what I knew about the project when I started, I don't think there's any way I could have drawn up a UML diagram or written out the APIs I wanted to support, etc. Maybe someone else could have.

Since the 1980s, computer hardware, operating systems, networks, etc. have changed dramatically, but the way programmers code is almost exactly the same. I write code in emacs, run/test/debug in a terminal, look at stack traces, and poke at variables with gdb. What I've gained from 30 years of technological advancement is essentially faster feedback in the same write/compile/test loop that programmers have used since before I was born. Yes, I could use an IDE, which puts a prettier face on the debugger (under the hood it's the same) and might hint at what I want to type or flag a forgotten semicolon. But at the end of the day, the most modern developer tools are essentially lipstick on a 30+ year old pig.

The biggest issue I have with programming is that in my write/compile/test cycle, when something doesn't work as expected, it's really hard to understand why. As far as I know, there is no language or tool that, given a large, complex program, can show a programmer "how it works." The path from "this doesn't work" to "this doesn't work because ..." is tough to travel. There are ways to help with the journey, of course. Unit tests can help isolate some bugs to a small region of code, for example, but at the end of the day, the programmer has to be "lucky" enough to encounter a failure, and trace the execution either forward or backward until they discover something unexpected. That's hard to do.

So what can be done? I don't know. I assume smart people have been working on this problem for a long time, and nothing they've come up with has seemed to stick. Two thoughts: first, why can't the debugger help point out where things are going wrong? "The first 99 times this loop ran, the value of x was between 1 and 10, but now it's -342341. You might want to look at that."

Second, trying to discern meaning from a directory tree full of text files is horrible. A visual representation (at varying levels of detail) of how data flows through a program, what control flow looks like, or how state changes would be a huge step in the right direction.

With that off my chest, I return to my debugging, once more depressed that computers don't work in my life like they do in Tron.


Ben Jones Oct 2012