This week’s experiment was completely unnecessary and I have no regrets.
Claude Code has a hook system — small scripts that fire on lifecycle events like session starts, tool calls, blocked commands, and agent spawns. I’ve been using hooks for logging and safety checks. Useful, sensible, boring.
Then I pulled the audio from a Portal 2 installation.
What I Tried
The event map wrote itself once I started listening to the assets. GLaDOS greets you when a session opens — her flat, clinical welcome felt right for a tool that’s about to read your entire codebase. When a subagent spins up, turret deployment audio plays. When that agent returns with results, the Aperture elevator chime. When a command gets blocked by a safety rule, a defective turret speaks — that broken, apologetic little voice.
Ten events. Ten audio files. One routing script checks which event fired and plays the matching sound in the background, out of the way of actual work.
The system worked almost exactly as designed. One part, though, required a real workaround.
Each Hook Handles Its Own Consequences
The hook that fires before a tool call can see what’s about to happen. The hook that fires after can see what did happen. But neither can see what the other decided.
So “command blocked” has no dedicated event. When the pre-call hook blocks a command, it returns a blocking response — but the post-call hook never fires for that event, and there’s no separate “was blocked” signal for a third hook to catch.
The only way to play the defective turret audio on a block was to put the sound call inside the blocking hook itself. The script that decides to block the command also plays the audio. They’re the same script now.
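A sketch of what that merged script looks like. The block rules here are illustrative, and the exit-code convention for signaling a block is my understanding of the hooks interface, so verify it against the docs for your version before relying on it.

```python
import json
import subprocess
import sys

# Illustrative safety rules; a real setup would be more careful.
BLOCKED_PREFIXES = ("rm -rf", "git push --force")
DEFECTIVE_TURRET = "defective_turret.wav"  # placeholder filename

def check_command(command: str) -> bool:
    """Return True if the command should be blocked."""
    return command.strip().startswith(BLOCKED_PREFIXES)

def main() -> None:
    payload = json.load(sys.stdin)
    command = payload.get("tool_input", {}).get("command", "")
    if check_command(command):
        # The side effect lives inside the decision: no later hook will
        # fire for a blocked call, so this script plays the audio itself.
        subprocess.Popen(
            ["afplay", DEFECTIVE_TURRET],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        # A nonzero exit (2, as I understand the hooks interface) tells
        # the caller the command was blocked; stderr carries the reason.
        print("Blocked by safety rule", file=sys.stderr)
        sys.exit(2)

# main() runs when this file is wired up as the pre-tool-call hook.
```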
It’s a small thing. But it changed how I think about hook design — each hook needs to own its side effects. You can’t assume another script will handle the consequences of your decision.
Absence of an Event Is Also Information
The audio is a joke. The underlying observation isn’t.
Every tool system has gaps in its event model. Some transitions — “this was rejected,” “this timed out,” “this was skipped” — don’t emit events. They just don’t happen. If you want to respond to those states, the response has to live inside the decision logic, not downstream from it.
This comes up outside audio hooks too: logging, metrics, alerting. If the thing you want to observe is the absence of a normal event, you’re not going to catch it with a listener. You’ll need to instrument the decision point.
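Stripped of the audio, this is ordinary instrumentation. A toy Python example of the pattern, with a hypothetical counter standing in for whatever metrics sink you actually use:

```python
from collections import Counter

# Hypothetical metrics sink; in practice this might be statsd, a log
# line, or a Prometheus counter.
metrics = Counter()

def process(items, is_valid):
    """Process items, counting the ones that are silently skipped.

    A downstream listener only ever sees the items that made it
    through. The only place a "skipped" count can come from is the
    branch that does the skipping.
    """
    results = []
    for item in items:
        if not is_valid(item):
            # Absence of a normal event: record it at the decision point.
            metrics["items_skipped"] += 1
            continue
        results.append(item)
    return results
```

Nothing outside `process` could have produced that count, because the skipped items never generate any event to listen for.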
GLaDOS agrees this is the correct approach. She said so when I opened the session this morning.