I needed a branded intro bumper for my YouTube videos. Two to four seconds of something between the hook and the content. Something that says “this is a series” without screaming “I bought a Fiverr intro.”
The normal path: open After Effects, or download a Motion template, or pay someone on Envato. All fine options. But I was already in a Claude Code session planning the video, and I thought — can we just generate this?
Turns out: yes. One Python file, about 300 lines, and the whole thing renders from the terminal.
## What It Does
The script generates a 4-second branded bumper at 30fps. The concept is “tuning in” — visual noise that resolves into a clean branded frame.
The timeline:
| Time | What happens |
|---|---|
| 0.0–0.8s | Colored noise blocks fill the screen — random rectangles in your brand palette on a dark background |
| 0.8–1.8s | Noise contracts toward center and dissolves. Your name and avatar fade in behind it |
| 1.8–2.3s | A thin accent line draws left-to-right beneath the text, like a signal locking on |
| 2.3–4.0s | Clean hold. Your branded frame sits on screen for two full seconds |
It renders both landscape (1920x1080) and portrait (1080x1920) from the same code, so you get a Shorts-ready version for free.
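The timeline above maps naturally to a per-frame phase function. A minimal sketch of that idea (the name `phase_at` is mine, not from the script):

```python
FPS = 30
DURATION = 4.0  # total length in seconds, per the timeline above

def phase_at(t: float) -> str:
    """Return which timeline phase a timestamp falls in."""
    if t < 0.8:
        return "noise"     # colored noise blocks fill the screen
    if t < 1.8:
        return "dissolve"  # noise contracts; name and avatar fade in
    if t < 2.3:
        return "lock"      # accent line draws left-to-right
    return "hold"          # clean branded frame

# One phase label per frame across the whole animation
frames = [phase_at(i / FPS) for i in range(int(DURATION * FPS))]
```

Every frame renderer then just switches on the phase, which keeps the landscape and portrait paths identical apart from layout.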
## The Prompt
Here’s what I actually said to Claude Code to get this built. I didn’t write a spec first — we brainstormed it in conversation.
> We need a nice Signal Over Noise bumper to go onto my YouTube videos from now on
That kicked off a back-and-forth. Claude asked about vibe (I picked “quiet confidence”), text content, sound design, and generation approach. The conversation narrowed it down to:
- Concept: Visual noise resolving into a clean branded frame
- Text: My name, centered, with a circular avatar
- Audio: Radio static crossfading into a clean tone (or an external music track)
- Tech: Python + Pillow for frame generation, ffmpeg for encoding
From there, Claude wrote a design spec, then an implementation plan, then dispatched subagents to build each piece — scaffold, frame rendering, audio generation, ffmpeg encode, and a polish pass. About an hour, start to finish.
## How It Works
The whole thing is one file: generate.py. It has four moving parts.
1. Noise generation. Each frame calculates how many colored rectangles to scatter across the canvas. Early frames: 40% coverage, random placement. As the animation progresses, density drops to zero and blocks get pulled toward center using a Gaussian distribution. The effect is noise contracting and dissolving.
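The contraction can be sketched with a density ramp plus a Gaussian placement whose spread shrinks over time. This is my own simplification of the approach described (function and parameter names are mine; the real script starts with uniform placement before pulling toward center):

```python
import random

WIDTH, HEIGHT = 1920, 1080
random.seed(42)  # reproducible output, as in the script

def noise_rects(progress: float, max_blocks: int = 400):
    """progress runs 0.0 (full noise) -> 1.0 (fully dissolved).
    Block count drops linearly; placement tightens toward center."""
    count = int(max_blocks * (1.0 - progress))
    # Spread shrinks as progress grows, pulling blocks inward
    spread_x = (WIDTH / 2) * (1.0 - 0.8 * progress)
    spread_y = (HEIGHT / 2) * (1.0 - 0.8 * progress)
    rects = []
    for _ in range(count):
        x = random.gauss(WIDTH / 2, spread_x)
        y = random.gauss(HEIGHT / 2, spread_y)
        w = random.randint(8, 60)
        h = random.randint(8, 60)
        rects.append((int(x), int(y), w, h))
    return rects
```

Each rectangle then gets drawn in a color picked from the brand palette.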
2. Text and avatar compositing. Starting at 0.8 seconds, your name and avatar fade in using alpha compositing on an RGBA layer. Landscape mode puts them side-by-side. Portrait mode stacks them. The circular avatar crop is done with a Pillow ellipse mask.
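The circular crop is a standard Pillow pattern: draw a filled ellipse into a grayscale mask, then use it as the alpha channel. A self-contained sketch (the helper name is mine):

```python
from PIL import Image, ImageDraw

def circular_avatar(img: Image.Image, size: int) -> Image.Image:
    """Crop an image to a circle using a Pillow ellipse mask."""
    img = img.resize((size, size)).convert("RGBA")
    mask = Image.new("L", (size, size), 0)       # all-transparent mask
    draw = ImageDraw.Draw(mask)
    draw.ellipse((0, 0, size, size), fill=255)   # opaque circle
    img.putalpha(mask)                           # mask becomes alpha channel
    return img
```

The result composites cleanly over the background with `Image.alpha_composite`.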
3. Audio. Two options. The default generates a radio-tuning sound programmatically — white noise that crossfades into a clean 280Hz tone using a smooth-step easing function. Or you pass --music track.mp3 and it trims and fades any audio file to fit. I’m using a CC0 ambient track from Unminus.
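The noise-to-tone crossfade is easy to reproduce with stdlib Python. A sketch of the idea under the numbers given above (280 Hz tone, smooth-step easing; function names and the crossfade window are my assumptions):

```python
import math
import random

SAMPLE_RATE = 44100

def smoothstep(x: float) -> float:
    """Smooth-step easing: 0 at x=0, 1 at x=1, zero slope at both ends."""
    x = max(0.0, min(1.0, x))
    return x * x * (3 - 2 * x)

def render_audio(duration: float = 4.0, crossfade_end: float = 1.8):
    """White noise crossfading into a clean 280 Hz sine tone."""
    samples = []
    for i in range(int(duration * SAMPLE_RATE)):
        t = i / SAMPLE_RATE
        mix = smoothstep(t / crossfade_end)  # 0 = all noise, 1 = all tone
        noise = random.uniform(-1.0, 1.0)
        tone = math.sin(2 * math.pi * 280 * t)
        samples.append(((1 - mix) * noise + mix * tone) * 0.3)
    return samples
```

Writing the samples out as 16-bit PCM with the `wave` module gives ffmpeg something it can mux directly.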
4. Encoding. All frames get rendered as PNGs to a temp directory, then ffmpeg composites them with the audio into an H.264 MP4. Temp files get cleaned up automatically.
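The encode step boils down to one ffmpeg invocation over the PNG sequence. A sketch of how that command might be assembled (the frame-naming pattern is hypothetical; the script's exact flags may differ, though these are all standard ffmpeg options):

```python
def encode_cmd(frames_dir: str, audio: str, out: str, fps: int = 30):
    """Build an ffmpeg command that muxes a PNG sequence with audio
    into an H.264 MP4."""
    return [
        "ffmpeg", "-y",
        "-framerate", str(fps),
        "-i", f"{frames_dir}/frame-%04d.png",  # hypothetical naming pattern
        "-i", audio,
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",  # broad player compatibility
        "-shortest",            # stop at the shorter of video/audio
        out,
    ]
```

Run it with `subprocess.run(encode_cmd(...), check=True)`, then delete the temp frame directory.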
## Fork It
You need Python 3, Pillow (pip install Pillow), and ffmpeg (Homebrew: brew install ffmpeg).
To make it yours, change these constants at the top of the file:
```python
# Your brand colors (RGB tuples)
BLACK = (26, 26, 26)          # Background
CREAM = (247, 244, 234)       # Text color
TEAL = (27, 154, 170)         # Accent (lock line)
BURNT_ORANGE = (239, 99, 81)  # Noise color
SAGE = (136, 171, 142)        # Noise color
NOISE_COLORS = [TEAL, BURNT_ORANGE, CREAM, SAGE]

# Your font (any .ttf file)
FONT_PATH = os.path.expanduser("~/Library/Fonts/YourFont.ttf")

# Your avatar image
AVATAR_PATH = os.path.expanduser("~/path/to/avatar.jpg")
```
And change the text on this line:
```python
text = "Jim Christian"  # <- your name or channel name
```
Then run it:
```bash
python3 generate.py                         # Both landscape + portrait
python3 generate.py --aspect landscape      # Just one
python3 generate.py --music your-track.mp3  # With external audio
```
Output lands in output/bumper-landscape.mp4 and output/bumper-portrait.mp4. Drag into Final Cut Pro, Premiere, DaVinci — it’s a standard H.264 MP4.
## What I’d Change Next
The generated audio is serviceable but synthetic. A real ambient track from Unminus or Pixabay Music sounds better — that’s why the --music flag exists.
The noise block animation is random, which means it looks slightly different every time you regenerate. I seeded the random number generator for reproducibility (random.seed(42)) but you could remove that for variation.
And the hold at the end is two seconds. That felt right for my pacing, but it’s just a constant at the top of the file:
```python
DURATION = 4.0  # Total length in seconds
```
## The Actual Point
This took an hour. Not an hour of writing Python — an hour of describing what I wanted and iterating on the result. The first pass had a synth sweep that sounded too clinical. Swapped it for radio static. The text said “Signal Over Noise” but my channel name had changed. Swapped it. Wanted my avatar in the frame. Added it.
Each change was a sentence in the conversation, not a rewrite.
The script is 300 lines of code that I’ll run once every few months when I want to tweak something. It’s not elegant software engineering. It’s a creative tool that happens to live in a terminal instead of a timeline editor.
If you have Claude Code (or any AI coding tool with terminal access), you can build one of these in a session. Describe the animation you want. Let it generate frames. Preview. Iterate. The overhead of “learning After Effects” disappears when your description is the interface.