Home.Bud: 105 Files in One Session
A whole-home inventory app — 105 files, 18,025 lines, 37 API routes — built and deployed in a single sitting. Scope crept hard. Here’s what actually happened.
I Don’t Know What I Own
Adam Savage has this bit about workshop organization: “first order retrievability.” Every tool has a place. Every place has a label. You know what you own and where it lives. I watched that, looked around my house — a 38-gallon reef tank with $2k in equipment, power tools in the garage, electronics scattered across three rooms — and realized I genuinely could not tell you what I own without walking room to room.
Home.Bud is part of the .budfamily — small single-purpose apps that scratch one itch. This one tracks everything in the house: items, locations, bins, maintenance schedules, lending, insurance values. The kind of app where after a pipe burst or a theft, someone asks “what did you actually have?” and you can answer it.
Ship It Before the Energy Fades
The initial build was a sprint. Next.js 15 + better-sqlite3 + Drizzle ORM + Capacitor for iOS. SQLite because the data is local-first — I don’t need a server database for a household inventory. Drizzle because it generates migrations and the type safety is worth the setup cost.
By the end of the first session: core item CRUD, location and bin hierarchy, FTS5 full-text search, basic auth, deployed to Sandtrap (my home server). 13 components. A mapper layer converting snake_case DB columns to camelCase for the frontend. A schema covering items, locations, bins, and aliases.
The numbers: 92 source files, roughly 16k lines. It worked. Add an item, assign it to a location and bin, search across name/brand/model/serial/notes, browse by room. Foundation was solid.
Then I kept going.
Then I Kept Going
I won’t walk through every phase — brainstorming sessions with Claude, rapid fixes, testing as I went. All in the flow. But the feature list grew fast. Phase 2 added card catalog view, room sweep mode, location cheat sheets, item aliases (so “TV” finds “Samsung Frame”), and home screen stats. Phase 3 brought insurance tracking with PDF export, a lending/borrower system, maintenance scheduling, receipt uploads, walkthrough photo sessions. Phase 4 was duplicate detection and merge.
By the end: 105 source files. 18,025 lines. 37 API route handlers across 15 resource groups. 13 React components. A dozen database tables plus an FTS5 virtual table. The schema had grown from 3 tables to include lendingHistory, borrowers, maintenanceEvents, roomSweeps, sweepPhotos, and photoQueue.
Each phase felt like “just one more thing.” Insurance tracking? That’s the whole point of knowing what you own. Lending? I lend tools constantly and never remember who has what. Maintenance scheduling? The reef tank equipment alone has a dozen recurring tasks. Did I need all of this? Probably not. But every addition made sense in the moment, and the thing I ended up with is a pretty different app than what I set out to build.
Search That Actually Works
SQLite’s FTS5 is underrated. The virtual table covers name, brand, model, notes, serial number, location name, bin name. Prefix matching with term* syntax. Content-sync triggers (custom SQL outside Drizzle — the ORM doesn’t handle FTS natively) keep the index in sync on every insert, update, and delete. Search is instant: sub-5ms on the full dataset.
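The trigger pattern looks roughly like this. A minimal sketch only — table and column names here are assumptions, and the real index also pulls in location and bin names, which would need joins inside the triggers:

```sql
-- Hypothetical schema names; the app's actual tables will differ.
CREATE VIRTUAL TABLE items_fts USING fts5(name, brand, model, serial_number, notes);

-- Content-sync triggers: mirror every write into the index.
CREATE TRIGGER items_ai AFTER INSERT ON items BEGIN
  INSERT INTO items_fts(rowid, name, brand, model, serial_number, notes)
  VALUES (new.id, new.name, new.brand, new.model, new.serial_number, new.notes);
END;

CREATE TRIGGER items_ad AFTER DELETE ON items BEGIN
  DELETE FROM items_fts WHERE rowid = old.id;
END;

CREATE TRIGGER items_au AFTER UPDATE ON items BEGIN
  DELETE FROM items_fts WHERE rowid = old.id;
  INSERT INTO items_fts(rowid, name, brand, model, serial_number, notes)
  VALUES (new.id, new.name, new.brand, new.model, new.serial_number, new.notes);
END;

-- Prefix matching: 'dewa*' matches "DeWalt".
-- SELECT rowid FROM items_fts WHERE items_fts MATCH 'dewa*';
```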
Dedup uses two string similarity algorithms: Levenshtein distance (counts character edits — catches typos) and Jaro-Winkler (weights matches at the start of strings — catches abbreviations). Configurable thresholds: exact at 1.0, high confidence at 0.85+, medium at 0.7+. When merging, the richer record wins — whichever has more fields populated becomes the primary. UI shows a side-by-side comparison with one-click merge.
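The scoring idea can be sketched in a few lines. This is illustrative, not the app’s actual code — Jaro-Winkler is omitted for brevity, and the function names are mine:

```typescript
// Classic single-row Levenshtein: counts character edits between two strings.
function levenshtein(a: string, b: string): number {
  const m = a.length, n = b.length;
  const prev = Array.from({ length: n + 1 }, (_, j) => j);
  for (let i = 1; i <= m; i++) {
    let diag = prev[0]; // cost at (i-1, j-1)
    prev[0] = i;
    for (let j = 1; j <= n; j++) {
      const tmp = prev[j];
      prev[j] = Math.min(
        prev[j] + 1,     // deletion
        prev[j - 1] + 1, // insertion
        diag + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
      diag = tmp;
    }
  }
  return prev[n];
}

// Normalize to 0..1 so thresholds don't depend on string length.
function similarity(a: string, b: string): number {
  const x = a.toLowerCase(), y = b.toLowerCase();
  if (x === y) return 1;
  return 1 - levenshtein(x, y) / Math.max(x.length, y.length);
}

// Thresholds from the post: exact at 1.0, high at 0.85+, medium at 0.7+.
function confidence(score: number): "exact" | "high" | "medium" | "none" {
  if (score === 1) return "exact";
  if (score >= 0.85) return "high";
  if (score >= 0.7) return "medium";
  return "none";
}
```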
Both features exist because the data entry problem is real. You’re adding items while walking through the house, phone in hand. You type “DeWalt drill” one day and “Dewalt 20v Drill/Driver” the next. Without fuzzy search and dedup, the database fills with ghosts.
Bugs That Bit
Ship fast, ship bugs. These are the ones that actually got me.
bcrypt vs. bcryptjs
Started with bcrypt for password hashing. Compiled fine on my Mac. Broke in the Docker build on Sandtrap — native Node modules and base images don’t always agree. The fix: bcryptjs, a pure JavaScript implementation. Slower in brute-force resistance benchmarks, identical in practice for a household inventory app with one user.
Headers That Silently Vanish
Next.js changed how middleware sets response headers between minor versions. My auth middleware was injecting a user ID header that API routes read for session info. After an update, the header stopped propagating. No error. Routes just saw an unauthenticated request.
This is the kind of thing that makes “just upgrade” dangerous. The middleware API surface looks stable, but the internal behavior of header forwarding changed quietly. Lost an hour before I thought to check the release notes.
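The defensive version of the pattern looks something like this. It’s a framework-agnostic sketch — in actual Next.js middleware the equivalent is cloning the request headers and passing them back via NextResponse.next({ request: { headers } }); the names here are mine:

```typescript
type HeaderMap = Record<string, string>;
const USER_ID_HEADER = "x-user-id";

// Middleware side: copy the incoming headers and inject the user id,
// rather than assuming a mutated request object survives the hop.
function withUserId(incoming: HeaderMap, userId: string): HeaderMap {
  return { ...incoming, [USER_ID_HEADER]: userId };
}

// Route side: fail loudly when the header is missing, instead of silently
// treating the request as unauthenticated.
function requireUserId(headers: HeaderMap): string {
  const id = headers[USER_ID_HEADER];
  if (!id) {
    throw new Error(`${USER_ID_HEADER} missing: is middleware forwarding headers?`);
  }
  return id;
}
```

The throw is the point: if header forwarding quietly changes again, the failure is a stack trace, not an unexplained anonymous request.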
Invisible Newlines in Env Vars
Env vars pasted into deployment dashboards or .env files can pick up trailing newlines. For most variables, harmless. For a database path or an auth secret used in string comparison: silent failure. The value looks right. process.env.SECRET returns the string plus an invisible \n.
// The fix is embarrassingly simple
const secret = process.env.AUTH_SECRET?.trim();

CSP Hashes Go Stale
Content Security Policy with inline scripts requires SHA-256 hashes of each script block. Change one character and the hash is invalid. Browser silently blocks it. No console error in production — just a broken feature. I spent time debugging “why did this stop working” before realizing the CSP hash was stale.
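One way out is generating the hash at build time so it can’t drift from the script. A sketch using Node’s crypto module — the inline script here is a placeholder:

```typescript
import { createHash } from "node:crypto";

// CSP expects base64(SHA-256(exact script text, whitespace included)).
// One changed character invalidates the hash.
function cspHash(inlineScript: string): string {
  const digest = createHash("sha256").update(inlineScript, "utf8").digest("base64");
  return `'sha256-${digest}'`;
}

const script = `window.__APP_READY = true;`;
const header = `script-src 'self' ${cspHash(script)}`;
```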
Every one of these bugs has the same shape: works locally, breaks in production, fails silently. No error message, no crash — just wrong behavior. The lesson isn’t “test more.” It’s that deploy targets (Docker, Capacitor, CI) each introduce failure modes local dev can’t simulate. Treat the first deploy as a debugging session, not a victory lap.
Snake Case In, Camel Case Out
SQLite columns are snake_case. React wants camelCase. Drizzle can map these, but I wanted an explicit transformation layer — one that also handles currency conversion (cents in DB, dollars in UI) and date formatting.
Every API response goes through mappers.ts. New tables need mapper functions or the API returns raw snake_case and the frontend breaks in confusing ways — TypeScript doesn’t catch casing mismatches at the boundary.
Phase 1 decision that paid for itself later. When lending landed in Phase 3, the pattern was already there. New table, new mapper, consistent API shape. Without it I’d have been chasing purchase_date vs. purchaseDate mismatches across 37 routes.
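The shape of a mapper is simple. A hypothetical sketch — the real mappers.ts will differ, but the idea is one explicit function per row type so nothing leaks through as raw snake_case:

```typescript
// DB row: snake_case columns, integer cents.
interface ItemRow {
  id: number;
  purchase_date: string | null;
  purchase_price_cents: number | null;
}

// API shape: camelCase, dollars.
interface Item {
  id: number;
  purchaseDate: string | null;
  purchasePrice: number | null;
}

function mapItem(row: ItemRow): Item {
  return {
    id: row.id,
    purchaseDate: row.purchase_date,
    // Cents -> dollars exactly once, at the boundary.
    purchasePrice: row.purchase_price_cents === null ? null : row.purchase_price_cents / 100,
  };
}
```

The explicitness is the feature: TypeScript won’t catch purchase_date vs. purchaseDate at a JSON boundary, but it will refuse to compile a mapper that forgets a field.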
Wrapping It for iOS
The web app deploys to Sandtrap (Docker on a home server). iOS uses Capacitor to wrap the same codebase for TestFlight. Bundle ID: com.fort.homebud. Same database, same API, native shell.
Capacitor adds surprisingly little friction for a Next.js app. The gotcha is SQLite — better-sqlite3 is a Node native module that doesn’t run in a Capacitor WebView. The iOS build uses Capacitor’s SQLite plugin instead, which means the database layer has a platform switch. Not ideal. Works though.
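The switch is tolerable if both drivers hide behind one interface. A sketch with stand-in drivers so it’s self-contained — the real app would use better-sqlite3 on the server and Capacitor’s SQLite plugin on iOS, and the plugin’s API is async in reality:

```typescript
interface InventoryDb {
  all(sql: string, params?: unknown[]): unknown[];
}

// Stand-ins for the two real drivers.
const serverDb: InventoryDb = { all: () => [] };
const capacitorDb: InventoryDb = { all: () => [] };

// In the real app the flag would come from Capacitor.isNativePlatform().
function openDb(isNative: boolean): InventoryDb {
  return isNative ? capacitorDb : serverDb;
}
```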
Currency fields stored as integers (cents) is a well-known pattern, but it caught me twice. The CurrencyInput component displays dollars. The database stores cents. The mapper converts. When any of those three layers disagrees, you get items listed at $45,000 instead of $450.00. Always multiply/divide in the mapper, never in the component.
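The “only in the mapper” rule also has a floating-point reason. Helper names are illustrative:

```typescript
// Naive dollar arithmetic drifts: 19.99 * 100 is not exactly 1999 in
// IEEE 754 floats. Round at the boundary, store integers everywhere else.
function dollarsToCents(dollars: number): number {
  return Math.round(dollars * 100);
}

function centsToDollars(cents: number): number {
  return cents / 100;
}
```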
Was It Worth the Sprint?
Phase 1 in a single session — worth it. Having a working app with search, CRUD, and deployment before the motivation faded is the difference between a finished project and a TODO.md file. I’ve got a graveyard of those.
Phases 2–4 were scope creep, but the useful kind. Each feature showed up because I hit a real friction point while actually using the app to inventory my house. The lending system exists because I lent my neighbor a circular saw and forgot. The maintenance scheduler exists because my reef tank’s protein skimmer needs cleaning every two weeks and I was tracking it on sticky notes.
105 files now. Bigger than it needs to be. But everything in it gets used, and that’s not nothing.