Haupz Blog

... still a totally disordered mix

Reasons for Failure

2025-11-22 — Michael Haupt

Amy Edmondson has an interesting spectrum of reasons for failure. From most blameworthy to most praiseworthy, it starts with deviance (intentional violation of process), and ends with exploratory testing (honourable experimentation with existing systems - “how can I break this?”).

Edmondson’s work on psychological safety at work has already done much good. Her latest research is about failure. I like the gist of it, because it’s quite aligned with how I’ve always felt about failure - it’s OK to fail as long as you own the failure and learn from it. A very healthy approach.

Tags: work

Platform Team Interaction Modes

2025-11-22 — Michael Haupt

This here article by Martin Fowler introduces a collection of collaboration patterns between platform teams and other teams. I found it very interesting because the patterns are presented in a framework organised along the phases of interaction, noting how the roles of the team that drives the work and the team that stewards a codebase shift.

In brief, platform teams usually have internal customers (i.e., other teams), which sets them apart from feature or product teams. In collaborations between platform and internal customer teams, one team usually drives the work and has an interest in delivering it. The other (or, indeed, the same) team is the steward of the codebase the work needs to be done in.

In the platform migration phase, the platform team drives changes to steward teams’ codebases to enable some new platform capability. During platform consumption, when product teams simply use platform capabilities, those teams are both drivers and stewards. Finally, during platform evolution, product teams drive changes they need in the platform stewards’ codebases.

Each of these phases necessitates different forms of collaboration. The article uses a clear pattern language to illustrate those. I’m not going to reproduce everything in detail here, but will strongly advise readers who are interested in shaping such collaborations to give it a look.

Tags: work

Minimal Abstraction, and CS Amnesia

2025-11-14 — Michael Haupt

Here’s a trip down memory lane. I was reading Stephan Schmidt’s essay on Radical Simplicity, found it quite agreeable, caught myself nodding hard, and had a strong sense of déjà vu. Those thoughts sounded familiar. Mind you, not in any plagiaristic sense at all - I was actually really happy to see that, years and who knows how much distance apart, we had come to similar conclusions.

I’d like to share two posts I wrote on a long-defunct blog, back in 2009. The first is about a concept I like to call (and have always called) “minimal abstraction”, which is very, very similar to Radical Simplicity; the second is a somewhat necessary prerequisite and prequel to the first. I reproduce them here without changes; some comments are added [in square brackets], and I’ve provided better links to other places where possible.

I find it interesting how little my thinking has changed in the 16 or so years since I wrote these things. If anything, I’ve become grumpier about them, because several more cycles of the kind criticised below have happened since.

Minimal Abstraction

Ask yourself: don’t you often have the feeling that your brand-new 1024-core desktop SUV with 4 TB RAM and hard disk space beyond perception takes aeons to boot or to start up some application? (If the answer is no, come back after the next one or two OS updates or so.)

I don’t want to rant about any particular operating system or application — the choice is far too big. Still, honestly, one thing I am often wondering about (and I guess I’m not all alone) is why modern software is so huge and yet feels so slow even on supposedly fast hardware.

All those endless gigabytes of software (static in terms of disk space consumption and dynamic in terms of memory consumption) and all those CPU cycles must be there for a purpose, right? And what is that purpose, if not to make me a, well, productive and hence happy user of the respective software? Something must be wrong under the hood - too much complexity, I suspect.

During the conversation [see below] about Niklaus Wirth with my friend Michael Engel in Dortmund - the one that got me started on computer science’s tendency to pay too little regard to its own history, and during which I had also mentioned my suspicion above - Michael pointed me to an article by Wirth titled A Brief History of Software Engineering, which appeared in the July–September 2008 issue of the IEEE Annals of the History of Computing. This article contains a reference to another of Wirth’s articles, A Plea for Lean Software, published in IEEE Computer in February 1995.

The older article describes precisely the problem I pointed out above, in better words than I could possibly use - and it did so more than a decade ago. So it’s an old problem.

Here’s a quotation of two “laws” that Wirth observes to be at work: “Software expands to fill the available memory. … [It] is getting slower more rapidly than hardware becomes faster. …” According to Wirth, their effect is that software grows ever more complex (and thus slower), and that this is accepted because advances in hardware technology keep the performance problems from surfacing too visibly. The primary reason for software growing “fat” is, according to Wirth, that software is not so much systematically maintained as uncritically extended with features; i.e., new “stuff” is added all the time, regardless of whether the addition actually contributes to the original idea and purpose of the system in question. (Wirth mentions user interfaces as an example of this problem. His critique can easily be paraphrased thus: Who really needs transparent window borders?)

Another reason for the complexity problem that Wirth identifies is that “[t]o some, complexity equals power”. This one is for those software engineers, I guess, who “misinterpret complexity as sophistication” (and I might well have one or two things in stock to be ashamed of). He also mentions time pressure, which is certainly an issue in corporate ecosystems where management has no idea how software is (or should be) built, and where software developers have no idea about the user’s perspective.

I must say that I wholeheartedly agree with Wirth’s critique.

Digression.

Wirth offers a solution, and it’s called Oberon. It’s an out-of-the-box system implemented in a programming language of the same name, running on the bare metal, with extremely succinct source code, yet offering the full power of an operating system with integrated development tools. One of the features of the Oberon language, and also one that Wirth repeatedly characterises as crucial, is that it is statically and strongly typed.

Being fond of dynamic programming languages, I have to object to some of the ideas he has about object-oriented programming languages and typing.

  • “Abstraction can work only with languages that postulate strict, static typing of every variable and function.”

  • “To be worthy of the description, an object-oriented language must embody strict, static typing that cannot be breached, whereby programmers can rely on the compiler to identify inconsistencies.”

Well, no.

My understanding of abstraction (and not only in programming languages) is that it is supposed to hide away complexity by providing some kind of interface. To make this work, it is not necessary that the interface be statically known, as several languages adopting dynamic typing show. Strict, static typing in this radical sense also pretty much excludes polymorphism, which has proven to be a powerful abstraction mechanism. (Indeed, Wirth describes what is called “type extension” in Oberon, and “inheritance” elsewhere.) It is correct that strict, static typing allows compilers to detect (potential) errors earlier, but abstraction works perfectly well in languages that don’t require it.

It is puzzling to read that an OOP language must be statically and strictly typed to rightfully be called an OOP language. Ah, no, please, come on! Even as early as 1995, there were programming languages around - Smalltalk, say - that one would be hard pressed to classify as anything but OOP languages, despite their being dynamically typed. Moreover, it is an inherent property of living systems (which the object-oriented paradigm has always strived to capture) that objects in them assume and abandon roles during their lifetimes - something that is hard to capture statically.

Finally, it is interesting to note that the successor of the Oberon system [a system called Bluebottle; the link to the original page is defunct] features a window manager that supports semi-transparent windows. Do you see the irony in this?

End of digression.

As stated above, I really share Wirth’s opinion that there is too much complexity in software, and I believe this is still true today. What can be done about it? Regarding operating systems, we depend on diverse device drivers even more than we did a decade ago, so we need a certain degree of abstraction to let operating systems talk to different hardware. Regarding convenience and user experience, the occasional bit of eye candy undoubtedly makes working with systems more comfortable. We should still ask ourselves whether it’s really, really necessary, though, and perhaps concentrate on the truly important things, e.g., responsiveness.

So what to do? I don’t really have a definitive answer, but I believe that the idea of minimal abstraction is worth a look. The “minimal” in the term does not necessarily mean that systems are small. It means that the tendency to stack layers upon layers of software on top of each other is avoided.

Minimal abstraction is the principle at work in frameworks such as COLA (a tutorial is [no longer, sorry] available) and in the work on delegation-based implementations of MDSOC languages I kicked off with [the late] Hans Schippers. I also believe that the elegance and (in a manner of speaking) baffling simplicity of metacircular programming language implementations (more recently, such as Maxine) are definitely worth a look.

I am sure it is possible to avoid the kind of complexity we have to observe today, and to make software simpler, more understandable, and more maintainable - and I believe the above is a step in that direction.

Computer Science and “Geschichtsvergessenheit”

Yesterday, a friend working at a German university told me over ICQ that for most of his students the name Niklaus Wirth didn’t ring a bell. I was mildly shocked, and we ranted (ironically) a bit about today’s students’ being undereducated and ignorant and all. Eventually, we came up with a quickly and superficially assembled list of some more people we think anyone into computer science should know: Alan M. Turing, Grace Hopper, Ada Lovelace, Edsger W. Dijkstra, David L. Parnas, and Konrad Zuse. Some of these might have been chosen based on personal preference, but most of them have undoubtedly made significant contributions to computer science.

Let’s face it: the practical outcomes of academic computer science tend to recur in cycles. Distributed systems of yore are somehow residing in the SOA/grid/cloud triangle these days, and concepts that have long been known are re-introduced and hyped with all the marketing power of globalised corporations. While this is typical for industry, it’s unsettling that academia jumps on the bandwagon almost uncritically, generating massive amounts of publications at high speed that don’t actually say anything new. The seminal papers, in some cases published decades ago, are mostly not even referenced in these.

I believe this is not a deliberate choice by the authors of said papers. Any academic worth their salt will strive to give relevant related and previous work due credit. So what is it that brings this about? Why is computer science so geschichtsvergessen (unaware, if not ignorant, of its own history)?

Is it because many, too many, universities focus on teaching students the currently hyped programming language? Is it because education at academic institutions too often and too strongly concentrates on creating industry-compatible “operators” rather than computer scientists? Is it because computer science education is not designed to be sustainable?

The above questions can, more or less obviously, be answered with yes - and that is sad. Not because students don’t know the names of people who helped shape computer science in its early days; that, one could live with. It is much more problematic that ignorance (be it deliberate or not) of important results achieved earlier leads to too much work being done over and over again. It’s reinventing the wheel on a large scale.

Most academic disciplines I know of introduce their students to the historical background and development of their subject early in the curriculum. Students of economics learn about mercantilism, Smith, Keynes, and Friedman early on; and prospective jurists are soon confronted with the Roman legal system and its numerous influences on contemporary legal systems. Why does a computer science curriculum start, to exaggerate ironically, with a darned Java programming course?

It’s the teachers’ job to change this. Still, they themselves often don’t know their ancestors (and I am no exception). Information is available, though. Two pointers that spring to mind are these:

  • Friedrich L. Bauer’s small volume “Kurze Geschichte der Informatik” (“A Short History of Computer Science”; sorry, I don’t know if it’s available in English) connects computer science to its roots in mathematics and philosophy and depicts its historical development until the early 1980s (sadly, it stops there).

  • The volume “Software Pioneers”, edited by Manfred Broy and Ernst Denert, collects reprints of seminal papers by various computer science pioneers. It comes with 4 DVDs (!) containing videos of talks by most of these people, who gathered at the Software Pioneers Conference in Bonn (Germany) in 2001.

Please, let’s not forget where we come from, aye?

Tags: hacking

A True Power Tool

2025-11-12 — Michael Haupt

As a not so frequent but all the more enthusiastic user of a power drill, I have, for a long time, not quite liked the fact that I always need someone with me to operate a vacuum cleaner to take care of the drill dust. There simply had to be a better way.

So during one recent visit to my natural habitat (the German word is “Baumarkt” - a DIY store), I decided to look for solutions. Indeed, I found one - it’s a device like this (not exactly the one, but surely making the same patent holder happy). In a nutshell, it’s a thing that you put on the end of the vacuum cleaner hose, and that has two separate intakes for air. Through one, which is round and small, you put the drill, and it will directly ingest the dust where it is ejected from the hole. By means of the other intake, which is large and oblong, this miraculous device sucks itself to the wall, so that no one needs to hold the vacuum cleaner. Marvellous.

I had had no idea such a thing existed, and am grateful to the universe (and the device’s inventor).

Tags: the-nerdy-bit

Mainframes

2025-11-12 — Michael Haupt

Let’s say I know that mainframes exist (likely, they process my snail mail, and a fair share of monetary transactions I benefit from). I also vaguely understand that they’re different.

This article gives a good high-level overview of these valiant beasts. I’m thoroughly impressed by the concept of logical partitions, and by the z/OS operating system. Also, it’s interesting to see how a decades-old architecture has continued to progress through the times, and kept pace with relevant developments. COBOL and Java running side by side must be a sight to behold.

Tags: the-nerdy-bit

Markdown Links in MacVIM

2025-10-30 — Michael Haupt

Some text editors have that nice feature where, if you want to turn a span of text into a link, you select it, and when you then paste a URL, the editor creates the link in place. My favourite text editor is MacVIM, and I thought it’d be nice to have that feature there as well. Since most of the text I edit in MacVIM is Markdown, the selected text needs to be converted into a link following Markdown’s [text](url) syntax.

Now, MacVIM can be extended (of course). As I didn’t know the language well enough, I thought I’d vibe code it with some help from Claude Sonnet, which I use in my Langdock workspace. (To be clear, I did not use Claude Code, but plain old Claude with a canvas.)

Because I wanted to be able to understand (and assess the quality of) what would come out of this, I started by reading a nice and compact primer on the vim language. That didn’t take long, and after a brief pause to appreciate the quirkiness of the language, I got started.

Phase 1: Building the Thing

The first result looked right. However, it didn’t work at all - hitting Cmd-V on selected text with a URL in the clipboard just pasted the URL over the selected text, which was precisely the default behaviour I had intended to replace.

A few exchanges later, during which Claude gave rather helpful advice for troubleshooting and debugging, and even added code to produce debugging output, I realised that I had made a mistake in my first prompt: I had mentioned I wanted a vim plugin, but had not mentioned I was using MacVIM. Correcting that got me much closer to the goal.

Phase 2: Shrinkwrapping the Thing

I ended up with a version of the plugin that did what I wanted; however, it still produced debugging output. So I instructed Claude to take away all the fluff and be minimalistic about the original requirement.

The result fell back to just replacing selected text with the pasted URL. In shrinkwrapping the plugin, Claude had also removed MacVIM specifics that were required to make it work. One instruction later, that was fixed.

However, now it didn’t get the spaces around the freshly created link right. That took one more round, after which pasting of non-URLs was broken. One instruction later, the plugin suddenly contained a syntax error. Once that was addressed, the thing worked as expected (for good).
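
For the curious, a minimal hand-written sketch of the core idea might look like the following. To be clear, this is not the plugin Claude and I arrived at - the function name is illustrative, and it ignores the space-handling subtleties just mentioned:

    " Sketch: wrap the visual selection in a Markdown link when the
    " clipboard holds a URL; otherwise fall back to a plain paste.
    function! s:PasteAsMarkdownLink()
      let l:clip = getreg('+')
      if l:clip =~? '^https\?://'
        " Save the unnamed register so it can be restored afterwards.
        let l:saved = getreg('"')
        let l:saved_type = getregtype('"')
        " Yank the selection, wrap it, and paste it back over itself.
        normal! gvy
        call setreg('"', '[' . getreg('"') . '](' . l:clip . ')')
        normal! gvp
        call setreg('"', l:saved, l:saved_type)
      else
        " Not a URL: behave like a regular paste over the selection.
        normal! gv"+p
      endif
    endfunction

    " MacVim routes Cmd-V through its own Paste menu item, which may
    " need to be disabled (see :help macmenu) before this mapping fires.
    vnoremap <silent> <D-v> :<C-u>call <SID>PasteAsMarkdownLink()<CR>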

Phase 3: Understanding the Thing

I don’t think vibe coding should be a discipline where humans delegate everything lazily to an LLM. So I started a new Claude session, fed it the code, and instructed the model to explain it to me. With my entry-level understanding of the vim language, I should be able to appreciate what was going on. That worked well.

Finally, I asked the second Claude session what could be improved about the code. Because LLMs are “eager to please”, I instructed it to avoid digging for “improvements” just for the sake of it, but to apply a fair assessment.

The result was good; there was one piece of actual duplication that made no sense, which hadn’t caught my eye and which I subsequently removed. I chose not to adopt another suggestion, namely to remove the duplicated register save-and-restore code in favour of an extra abstraction.

Show me the Code!

All right, all right, here it is. Needless to say, the Markdown source of this post contains several links, all of which were created using the help of this plugin.

Tags: hacking

An LLM in Minecraft?

2025-10-20 — Michael Haupt

Of course, Minecraft is Turing complete, and can even run itself …

… and now someone built a ChatGPT thing in Minecraft, and I honestly don’t know any more.

Tags: the-nerdy-bit

JFR, JMC - and Minecraft

2025-10-19 — Michael Haupt

JFR (Java Flight Recorder) is a tool I keep pointing to. That’s because it’s immensely useful - if it’s being used. There is a small hurdle to jump over, but then a lot of potential is unleashed.

If you need one last argument to be convinced, look at this: Minecraft itself, of all things, leverages JFR for analysis. The developers have created custom JFR events to be able to observe in-game circumstances of chunk generation and server traffic. Thanks to how JFR works, these events can be immediately visualised in accompanying tools, such as JMC (Java Mission Control).
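
To illustrate what defining a custom JFR event involves, here is a minimal sketch using the standard jdk.jfr API. The event name and fields are made up for illustration - they are not Minecraft’s actual event classes:

    import jdk.jfr.Event;
    import jdk.jfr.Label;
    import jdk.jfr.Name;

    // Hypothetical custom event; name and fields are illustrative only.
    @Name("example.ChunkGeneration")
    @Label("Chunk Generation")
    public class ChunkGenerationEvent extends Event {
        @Label("Chunk X") int chunkX;
        @Label("Chunk Z") int chunkZ;

        static void generateChunk(int x, int z) {
            ChunkGenerationEvent event = new ChunkGenerationEvent();
            event.begin();  // start timing the work
            // ... the actual chunk generation would happen here ...
            event.chunkX = x;
            event.chunkZ = z;
            event.commit(); // record duration and field values
        }
    }

Events committed like this show up in recordings right next to the built-in JDK events, which is what makes the immediate visualisation in JMC possible.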

Tags: the-nerdy-bit