
"AI" to Replace Programming

2024-03-02 — Michael Haupt

On Twitter, Grady Booch responded to a tweet mentioning that the Nvidia CEO had said that programming languages could now be replaced “with human language prompts thus enabling everyone to be a programmer”. His comment:

AI will kill coding in the same way that compilers killed coding. What kind of human language prompts would have sufficient detail, clarity, and precision to permit the creation of useful, evolvable, and maintainable executable assets? We call those "programming languages".

This resonated with me because I had been thinking about these very questions for a while. I replied to Grady’s remark with three short tweets of my own, which I’d like to expand on a bit here (oh, character limits …).

The history of programming is one of adding layer upon layer of abstraction. Things started with punching cards and plugging cables; then came binary machine language and assembler. That was complicated and tedious, so high-level programming languages and compilers entered the game.

These high-level languages provided an abstraction over the machine level, making it easier for humans to express, in a readable and writable way, what the machine was meant to do. All the way from low-level languages like C to abstraction behemoths like SQL, the ability of these languages to express human intent whilst being easily processable by machines increased. SQL, in particular, reads a lot like English, if a highly stylised variant thereof. (Maybe even more so does COBOL.)
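For illustration, a query like “SELECT name FROM employees WHERE hired_on > '2020-01-01'” is valid SQL and very nearly an English sentence (the table and column names are, of course, made up for this example).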

(Abstractions were added in other places, too, but things like operating systems and graphical user interfaces are not on topic here - this is about programming. Also don’t get me started on “low code” or “no code” so-called “programming” environments right now - maybe another day.)

The thing to keep in mind is that any abstraction inevitably hides details of the lower level it abstracts over from the higher level it offers an interface to. It thus inevitably opens a gap between the intent humans express with its help and the lower, more detailed level. In a nutshell, that’s why we have bugs, and debugging: the “do what I mean” statement does not exist.
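A minimal illustration of such a gap, sketched in Python (my example; any language would do): the arithmetic we mean is decimal, but the arithmetic the machine actually performs is binary floating point.

    # Intent: add ten cents and twenty cents, expecting thirty cents.
    # The decimal-looking literals hide binary floating point underneath.
    total = 0.1 + 0.2
    print(total)         # 0.30000000000000004
    print(total == 0.3)  # False: the abstraction leaks, and we call it a bug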

Now, AI. SQL and COBOL are two programming languages that come somewhat close to human language, but AI … you just ask it nicely what to do, in plain ol' English, and there you are.

Right?

No: wrong. Wronger than you think.

It’s another layer of abstraction altogether, one that feigns the human ability to understand intent by looking human. Instructing these AIs feels just like talking to another human. Only it isn’t: the AI does not understand, it does not get what the “programmer” means. It merely infers, using statistics (lots of them), an approximation of what a likely intent could be, and then statistically approximates a response.
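To make “merely infers, using statistics” concrete, here is a deliberately crude sketch in Python (corpus and names made up for illustration): a bigram model that picks the next word purely from observed frequencies. Real language models are vastly larger and subtler, but the principle - continuation by statistical likelihood, not comprehension - is the same.

    import random
    from collections import Counter, defaultdict

    # A toy "language model": count which word follows which in a tiny corpus.
    corpus = "the cat sat on the mat and the cat ate the cream".split()
    followers = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        followers[current][nxt] += 1

    def next_word(word):
        # No understanding of intent here: just pick a continuation in
        # proportion to how often it was observed to follow the given word.
        options = followers[word]
        if not options:
            return None
        words = list(options.keys())
        weights = list(options.values())
        return random.choices(words, weights=weights)[0]

    print(next_word("the"))  # "cat", "mat", or "cream" - frequency, not meaning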

What’s the last piece of bug-free software you used? Written in a “normal” programming language? Oh, sure, most of the time it works, until it doesn’t.

Programming is tricky already, and it’ll be trickier still with these AIs. Because prompting looks like plain ol' English, everyone and then some will believe they can just “prompt” the AI to “do what I mean” for them. However, the AI will miss human intent by more than the usual debuggable bit; the gap will be bigger. And since very few, if any, understand how the huge statistical networks we call AIs really work, debugging will be much, much harder.

Oh sure, most of the time it’ll work, until it won’t - and since the group of people “writing” this software will be orders of magnitude larger, and its expertise level orders of magnitude lower, the consequences will be amplified and more widespread.

Call me a pessimist, but I believe that once this becomes the new programming model, the “do what I mean” statement will be farther away, not closer.

Tags: the-nerdy-bit