This is the final part of a 5-part series on agentic engineering. Part 4 laid out architectural principles for choosing languages and runtimes. This post puts a handful of worked examples through those principles, just to show what the analysis looks like in practice.

This isn’t a list of “the languages I use”, and it isn’t exhaustive. It’s a deliberate move away from personal taste — what I happen to know, what’s fashionable, what the team is already comfortable with — and toward something more objective: which language properties best fit the agent and the problem domain in front of you. Different problems will pull you toward different answers, and the examples below are picked specifically to make that point.

This is part 4 of a 5-part series on agentic engineering. Part 1 set the values-vs-mechanisms frame. Part 2 covered pairing and learning. Part 3 made the case for harness engineering. This one is about the architecture underneath the harness — the language, runtime, deployment, and version-control choices that make agents safe.

Part 5 gets concrete about specific languages.

Why architecture changes with agents

If humans are no longer the primary readers and writers of code, then architecture should no longer be optimised only for human familiarity. Instead, I want to optimise for:

This is part 3 of a 5-part series on agentic engineering. Part 1 made the case that we keep Agile values and change the mechanisms. Part 2 walked through what that looks like for pairing and learning. This post is about the thing that does most of the work once agents are doing the execution: the harness.

The reframe

We are no longer primarily writing code. We are building a factory for generating code safely.

This is part 2 of a 5-part series on agentic engineering. Part 1 made the case that Agile values still matter; the mechanisms are what change. This post takes the most contested of those mechanisms — pair programming — and looks at what happens to it when one of the “pair” doesn’t need a keyboard.

I’ve been a fan of pair programming for a long time. At Triptease we’ve leaned on it because it improves quality, accelerates learning, spreads context, builds resilience, and quietly does a lot of work for team cohesion. Those outcomes still matter.

For most of my career, “doing engineering well” has meant doing Agile well. Fast feedback, sustainable pace, simple design, close customer collaboration, all the values from the Agile Manifesto that I’ve spent twenty-odd years arguing about over coffee. At Triptease those values still pay rent: they help us deliver, adapt, collaborate, and not burn out.

What’s changing is who is doing the work.

The hard part of software has never been the typing. It’s always been translating fuzzy intent into a working solution: understanding the problem, choosing the design, working out what “correct” actually means, and keeping all of that consistent as things change. That’s where the real time has always gone.

Does CapsLock annoy you? Ever wished it actually did something useful instead of SHOUTING AT PEOPLE BY ACCIDENT?

Capsper is push-to-talk voice dictation for Linux. Hold CapsLock, speak, release. Text appears wherever your cursor is. No cloud, no subscription, no Electron app phoning home. Just your GPU doing what GPUs were meant to do.

The pitch

~2 second latency between speaking and words appearing on screen. Runs entirely on your machine. Works on both X11 and Wayland. Single binary, no dependencies to manage, no Python runtime, no Docker container. You download it, run the installer, and CapsLock becomes useful for the first time in its miserable existence.

After upgrading to an NVIDIA RTX 5070 Ti (Blackwell architecture, released January 2025), WebGL stopped working in Chrome. Sites like webglreport.com showed “This browser supports WebGL 2, but it is disabled or unavailable.”

The system-level OpenGL worked fine (glxinfo showed full OpenGL 4.6 support), so the issue was Chrome-specific.
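As a quick sanity check that the driver stack is healthy outside of Chrome, something like this works (`glxinfo` ships in the `mesa-utils` package on most distros; the exact version string will vary with your driver):

```shell
# Confirm the NVIDIA driver is exposing OpenGL at the system level.
# If these lines show the NVIDIA vendor and an OpenGL 4.x version,
# the problem is in Chrome's GPU handling, not the driver.
glxinfo | grep -E "OpenGL (vendor|renderer|version) string"
```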

The Problem

Chrome was running with --use-gl=disabled. You can check this with:

ps aux | grep chrome | grep -oE '\-\-use-gl=[^ ]*'

The RTX 5070 Ti and driver 580.x are so new that Chrome’s GPU blocklist doesn’t recognise them, so it defaults to disabling GL entirely.
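One workaround (a sketch, not the only fix) is to bypass the blocklist until Chrome ships an updated GPU list. `--ignore-gpu-blocklist` is a real Chrome switch; the binary name varies by install (`google-chrome`, `google-chrome-stable`, `chromium`):

```shell
# Relaunch Chrome ignoring the GPU blocklist so it stops forcing
# --use-gl=disabled for hardware it doesn't recognise yet.
google-chrome --ignore-gpu-blocklist
```

The same override is available in the UI as "Override software rendering list" at chrome://flags/#ignore-gpu-blocklist. You can confirm it took effect by re-running the `ps aux` check above, or by inspecting chrome://gpu.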

A practical introduction to NFTs


I’ve been doing some NFT research for myself, but thought I would share it in case anyone else finds some value in it.

30-second intro to NFTs: https://niftygateway.com/whatisanifty

In-depth description of an NFT (and lots more information): https://blog.opensea.io/guides/non-fungible-tokens/#What_is_a_non-fungible_token

The next thing to understand is that there are many different cryptocurrencies. Bitcoin is probably the one you have heard of, but there are literally thousands:
https://coinmarketcap.com/

The second-biggest cryptocurrency is Ethereum, and it is also home to the largest NFT marketplaces (such as https://opensea.io/ and https://niftygateway.com/).