I've been experimenting with AI code generation for a side project written in Golang. The project has been implemented by Opus 4.6 (Claude Code) under my direction. This is the first time I've used Golang, so I'm pretty slow and can't scrutinise the output as thoroughly as I could with PHP. I've been thinking a lot about security. Are there processes we can follow to reduce risk when working with machine-generated code? I think so. My high-level process has been to:

  • Have a discussion with the model about a feature or a change, to identify a good approach. It often comes up with better ideas or refinements.
  • Explicitly ask for an implementation plan, causing the model to break up the problem into a structured series of small steps, which I sanity check (read) and adjust.
  • Ask the model to implement the plan (if it is complex, perhaps one phase at a time).
  • Manually check that the change functions as expected.
  • Explicitly ask the model to review the changes and evaluate whether the solution is robust (repeat if necessary).

This process works well for two reasons. Firstly, it breaks up the work into small, carefully scoped chunks that fit within the model's context window, keeping it focussed. Secondly, the review aspects (the manual check, and the instruction to critically review the work) remove a lot of bugs, so you maintain a solid foundation to work from. Most of the time Opus will find a few bugs in its implementation if you ask it to check, and it may take two or three rounds before it stops finding problems.

I have a new project nearing completion, written in Golang with a Postgres backend. TLDR I wanted to add a compiled language to my skillset so that I could produce fast binary executables.

I settled on Golang because it is modern, memory safe (mostly) and provides highly efficient built-in webserver functionality. Goroutines have a tiny memory footprint, fast start-up times, and low CPU overhead, all from a single small compiled binary. Compared to Apache2 with its endless configuration options and complexity, it's quite a relief to deal with.

And Golang has not disappointed me. The efficiency gains are real and will allow me to deploy onto minimal hardware, thereby directly saving money. Even on a Raspberry Pi 5 development box (yes, really) my web app runs like lightning and has shockingly low CPU and memory footprints.

But there is a downside, and this is where PHP has the advantage: Maintenance. If a PHP site has a problem you can often log in while it's running, poke around a bit and fix it, without much concern that you will torch the entire system. The files are human-readable text, so modifying one or reverting a bad change is basically instant, with limited blast radius. You can do emergency maintenance on the road from a tablet or even a phone.

Around the end of March there were widespread reports of a sudden jump in token consumption by Claude Code, mainly with Opus. People started burning through their usage limits in minutes, when previously they had hours.

This wasn't a problem for me, but I heeded the 'mitigation' advice and removed all plugins, skills, agents, and MCPs to minimise context injection. I also audited my configuration using the Context Audit skill you can download from Brad | AI Automation.

Around mid-April Anthropic claimed to have fixed it. Well, no. They haven't. I started experiencing the problem as soon as my usage reset and I had access to Opus 4.7, even though I reduced the effort to 'medium' from the default 'xHigh'.

It's terrible! Previously I could carefully steward my session limit through two or three hours of code work with Opus. Today? About 30 minutes and with a far smaller volume of work achieved.

It's insanely slow. To get the files off quickly, just mount the transmitters as storage (Mac) and drag and drop the files onto your desktop. It's hundreds of times faster. If you plug the case into your computer with the transmitters inserted, they will mount automatically. (I presume that on Windows you can just open them as storage through File Explorer.)

TLDR: Recommended for the Raspberry Pi 4b... if you don't have issues with the USB connector (mine seems defective, which is a possible dealbreaker). Excellent construction, but the fan is noisy at high loads; this can be mitigated with an improved fan control script (provided). The S2Pi Aluminum NAS case provides a rugged housing for the Raspberry Pi 4b with M.2 SSD storage and an Ice Tower heat sink for strong cooling performance. It's an excellent package for upgrading your Pi to a lightweight server.

I have developed an improved fan speed control script that turns the fan off when not needed, and ramps with CPU temperature. Available for download within.

Well, it works great. Very cool. And free, yay!

I recently got dunked on for saying the Raspberry Pi 5 makes a great home lab server if you equip it with an SSD. And I don't really blame the guy, because until the Pi 4b they were pretty awful, and for the 3B and below you were stuck with running the OS from a microSD card. His mental model was probably stuck somewhere around there.

The Pi 5 is a huge level up in performance, especially once you add SSD storage via its PCIe slot.

A minor patch to fix a bug in collection pagination.

A minor maintenance release to harden the WebAuthn service class.

After a terrifying and deadly monster escapes from the SCP Foundation, Inspector Arlia interviews Doctor Daniels, the man responsible for containing the creature. At first Daniels uses the interview to critique and insult the powerful and mysterious O5 Council. However, as Arlia shares the horrifying details of the breach, Daniels becomes more and more fearful of the question everyone wants answered: how, exactly, did this happen? By MrKlay.