About

I'm Richard — based in Belfast, working in higher education. My background (loosely) spans instructional design, software development, IT infrastructure, and audio and visual production. I don't specialise in one thing; I (mostly) work at the edges where those disciplines meet.

What this is

A working notebook. The Notes section is the core of it: a personal wiki on AI tooling and method, compiled from primary sources rather than written from memory. Projects, Experiments, and Gallery are the surrounding work — finished things, in-progress things, and visual work.

Most pages cite their sources. Where a popular claim turns out to be misattributed or exaggerated, the page says so and traces the actual source. The discipline that makes this honest is the discipline of writing the wiki at all.

Process

Each Notes entry follows the same pipeline. A bookmark becomes raw material, raw material becomes verified reading, verified reading becomes a synthesised entry.

Capture · Twitter/X bookmarks via Dewey export. ~4,700 in the working archive.
Organise · Lands in an Obsidian vault, tagged by category and topic, dated by post.
Read · Anything cited gets traced back to the source paper, post, or repo. Web clips, papers, and conversation exports go to raw/.
Verify · Hyperbolic claims get checked. Where a quoted figure isn't in the source, it's flagged or dropped.
Compile · Synthesised into either an Article (500–1500 words, narrative) or a Concept (100–300 words, definition). Wikilinks tie them together.
Publish · Only entries marked status: published appear here. Drafts stay in the vault.
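The publish gate in the last step can be sketched as a frontmatter check. This is a hypothetical illustration, not the site's actual build code: the `is_published` function and its regex-based frontmatter parsing are assumptions; only the `status: published` convention comes from the text above.

```python
import re

def is_published(markdown: str) -> bool:
    # Hypothetical sketch of the publish gate: a note passes only if its
    # YAML-style frontmatter block contains `status: published`.
    match = re.match(r"^---\n(.*?)\n---", markdown, re.DOTALL)
    if not match:
        return False  # no frontmatter at all: treat as a draft
    frontmatter = match.group(1)
    return bool(re.search(r"^status:\s*published\s*$", frontmatter, re.MULTILINE))

# A note stays in the vault until the status field is flipped:
draft = "---\ntitle: Example\nstatus: draft\n---\nBody text."
live = "---\ntitle: Example\nstatus: published\n---\nBody text."
```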

The result is content with two unusual properties for an AI-adjacent site: every claim is traceable, and the citation chain is short. Most online writing on the same topics inherits its facts from other writing rather than from the underlying paper or thread; this site is a deliberate counter-pattern to that.

Drafting

Most longer articles here are drafted in collaboration with an LLM — usually Claude. It's worth being honest about what that means in practice, because "AI-assisted writing" gets used to describe several quite different workflows.

Capture · human — which bookmarks matter, which threads belong together
Organise · script — auto-tagging from a fixed taxonomy; human reviews edge cases
Read · human reads originals; model holds and summarises context across long sources
Verify · human — citation checks, tracing claims back to the underlying paper, flagging hyperbole
Draft · model proposes structure and prose against the style guide; human edits, redirects, rewrites
Publish · human — final review, status flip from draft to published
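The auto-tagging step in the Organise row can be sketched as keyword matching against a fixed taxonomy. Everything here is illustrative: the `auto_tag` function, the keyword lists, and the category names are assumptions, not the wiki's real taxonomy or script — only the "fixed taxonomy, human reviews edge cases" shape comes from the table above.

```python
def auto_tag(text: str, taxonomy: dict[str, list[str]]) -> list[str]:
    # Assign every tag whose keywords appear in the note's text.
    # Notes that match nothing would go to the human review queue.
    lowered = text.lower()
    return sorted(tag for tag, keywords in taxonomy.items()
                  if any(keyword in lowered for keyword in keywords))

# Illustrative categories only — not the actual taxonomy:
TAXONOMY = {
    "agents": ["agent", "tool use"],
    "evals": ["benchmark", "eval"],
}
```

A keyword pass like this is deliberately dumb: it errs toward over-tagging, and the human pass described above resolves the edge cases rather than trusting the script.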

The sourcing decisions, the verification, and the publishing gate are all mine. The model's contribution is the middle layer: reading several long sources at once, proposing a structure, generating draft prose that follows the wiki's voice, suggesting cross-links to existing entries. Articles typically go through several rounds before reaching draft, and several more before published.

Some articles arrive the other way round. A passage in something I'm reading surfaces a conceptual link; I follow it in conversation with Claude, and the article emerges as the residue. It can feel less like writing than excavation. Karpathy in 2026 came together this way — the three-part frame wasn't planned, it surfaced mid-conversation.

This whole workflow is the same compilation pattern the wiki documents — most directly in The LLM Knowledge Base and Karpathy in 2026. The site is, partly, an existence proof of the methodology it describes.

What's not here

Colophon

Stack · Astro, deployed on Cloudflare Pages
Search · Pagefind (static, full-text)
Source · Markdown in an Obsidian vault, synced into the site at build time
Type · Helvetica Neue / Helvetica
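The vault-to-site sync in the Source row can be sketched as a build-time copy of published notes. A minimal sketch under assumptions: the directory layout, the flat copy, and the `status: published` substring check are all hypothetical, not the real build script.

```python
from pathlib import Path
import shutil

def sync_vault(vault: Path, content: Path) -> int:
    # Hypothetical build-time sync: copy every published note from the
    # Obsidian vault into the site's content directory (flattened here
    # for simplicity), returning how many notes were copied.
    content.mkdir(parents=True, exist_ok=True)
    copied = 0
    for note in sorted(vault.rglob("*.md")):
        if "status: published" in note.read_text(encoding="utf-8"):
            shutil.copy2(note, content / note.name)
            copied += 1
    return copied
```

Running a copy like this as a build step keeps the vault as the single source of truth: drafts never leave it, and the site only ever sees what the publish gate lets through.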

Contact

Reachable at hello@pixelbrix.com. RSS feed at /rss.xml.