Why I Built a Home Inventory System (And Why Notion Is the Database)
Every household has a knowledge problem. You own hundreds of things, stored in dozens of places, and the only index is your memory. I built a system to fix that.
The Problem No One Talks About
We moved to Jakarta with three kids, a shipping container of belongings, and the certainty that within six months we'd forget where half of it went. We were right.
"Where are the watercolors?" became a daily refrain. Then it was the sewing kit, the spare phone chargers, the kids' rain boots. Every search meant opening bins, checking shelves, and asking the same unanswerable question: did we even keep that?
The real cost wasn't the searching. It was the re-buying. We replaced things we already owned because nobody could confirm whether we had them or where they were. That's when I decided to build something.
Notion as the Database
I didn't want to spin up Postgres for a home project. I wanted something my wife could open on her phone, something visual, something that wouldn't require me to be the sysadmin forever. Notion checked every box.
The data model is four linked databases: items, locations, categories, and containers. Each is mapped to a typed schema in TypeScript — here's the inventory item definition:
// src/notion/schemas/inventory-item.schema.ts
export const InventoryItemPropertyMap = {
  categories: 'Categories',
  condition: 'Condition',
  doNotRestock: 'Do Not Restock',
  id: 'ID',
  location: 'Location',
  name: 'Name',
  photos: 'Photos',
  stockOut: 'Stock Out',
} as const;

export const InventoryItemTypeMap = {
  categories: 'relation',
  condition: 'select',
  doNotRestock: 'checkbox',
  id: 'unique_id',
  location: 'relation',
  name: 'title',
  photos: 'files',
  stockOut: 'checkbox',
} as const;

Locations are richer — they pull in container details via rollup fields:
// src/notion/schemas/inventory-location.schema.ts
export const InventoryLocationPropertyMap = {
  brand: 'Brand',
  capacity: 'Capacity',
  categories: 'Categories',
  childLocations: 'Child Locations',
  color: 'Color',
  condition: 'Condition',
  container: 'Container',
  description: 'Description',
  id: 'ID',
  items: 'Items',
  material: 'Material',
  name: 'Name',
  parentLocation: 'Parent Location',
  room: 'Room',
  type: 'Type',
} as const;

export const InventoryLocationTypeMap = {
  brand: 'rollup',
  capacity: 'rollup',
  categories: 'rollup',
  childLocations: 'relation',
  color: 'rollup',
  condition: 'select',
  container: 'relation',
  description: 'formula',
  id: 'unique_id',
  items: 'relation',
  material: 'rollup',
  name: 'title',
  parentLocation: 'relation',
  room: 'select',
  type: 'rollup',
} as const;

Relations tie everything together. An item knows its location. A location knows its container type. Rollup fields pull container details into location views automatically. The result is that when you look at any item, you can read a sentence like: "Montessori Wooden Blocks (Item #63) are in the Wardrobe at Location #40 — a Transparent Plastic IKEA SAMLA 5L Box."
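That sentence is assembled from plain property reads. Here's a hypothetical sketch of how the extracted fields might compose into the description; the helper name and summary types are mine, not from the repo:

// Hypothetical helper: composes the readable location sentence from
// already-extracted Notion properties. Types and names are illustrative.
interface ItemSummary {
  id: number; // unique_id, e.g. 63
  name: string; // title, e.g. 'Montessori Wooden Blocks'
}

interface LocationSummary {
  id: number; // unique_id, e.g. 40
  room: string; // select, e.g. 'Wardrobe'
  color?: string; // rollup from the container, e.g. 'Transparent'
  material?: string; // rollup, e.g. 'Plastic'
  brand?: string; // rollup, e.g. 'IKEA'
  capacity?: string; // rollup, e.g. 'SAMLA 5L Box'
}

function describeItemLocation(item: ItemSummary, loc: LocationSummary): string {
  // Rollups are empty when the location has no container relation,
  // so the container clause is optional. (Naive singular verb.)
  const container = [loc.color, loc.material, loc.brand, loc.capacity]
    .filter(Boolean)
    .join(' ');
  const suffix = container ? ` — a ${container}` : '';
  return `${item.name} (Item #${item.id}) is in the ${loc.room} at Location #${loc.id}${suffix}.`;
}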
The AI That Sees Your Stuff
Manually cataloging every item in a house is mind-numbing. So I added image detection. Snap a photo, and the system identifies what it's looking at.
The identification runs in two deliberate steps, orchestrated by a single method:
// src/claude/claude.service.ts
async detectImage(
  imageBuffer: Buffer,
  mimeType: string,
  existingCategories: CategoryEntry[],
): Promise<ClaudeVisionDetection> {
  // Step 1: Identify the item from the image (no category context)
  const identification = await this.identifyItem(imageBuffer, mimeType);

  // Step 2: Categorize using the text description + category list
  return this.categorizeItem(identification, existingCategories);
}

Step 1 sends the image to Claude's vision API with no context about existing categories — no bias, no leading. It simply asks: what is this thing?
// src/claude/claude.service.ts — identifyItem
private async identifyItem(
  imageBuffer: Buffer,
  mimeType: string,
): Promise<ClaudeVisionIdentification> {
  const base64 = imageBuffer.toString('base64');
  const mediaType = mimeType as
    | 'image/jpeg'
    | 'image/png'
    | 'image/gif'
    | 'image/webp';

  const response = await this.client.messages.create({
    model: this.visionModel,
    max_tokens: 500,
    system: `You are a household inventory assistant. Identify the primary item in the photo.
Respond with ONLY valid JSON (no markdown):
{
  "primaryItem": "concise name of the item",
  "description": "2-3 sentence description covering what it is, what it's made of, its typical use, and any notable features"
}`,
    messages: [
      {
        role: 'user',
        content: [
          {
            type: 'image',
            source: { type: 'base64', media_type: mediaType, data: base64 },
          },
          { type: 'text', text: 'What is this item? Describe it in detail.' },
        ],
      },
    ],
  });

  return this.parseJsonResponse<ClaudeVisionIdentification>(response);
}

Step 2 takes the identification and matches it against the existing category tree. Claude sees the full hierarchy — grouped by parent — and returns both matched categories and suggestions for new ones:
// src/claude/claude.service.ts — categorizeItem
private async categorizeItem(
  identification: ClaudeVisionIdentification,
  existingCategories: CategoryEntry[],
): Promise<ClaudeVisionDetection> {
  // Group subcategories under their parent for a compact prompt
  const grouped = new Map<string, string[]>();
  for (const c of existingCategories) {
    const parent = c.category || '(uncategorized)';
    const arr = grouped.get(parent) ?? [];
    arr.push(c.name);
    grouped.set(parent, arr);
  }
  const categoryList = [...grouped.entries()]
    .map(
      ([parent, subs]) =>
        `${parent}:\n${subs.map((s) => ` - ${s}`).join('\n')}`,
    )
    .join('\n');

  const response = await this.client.messages.create({
    model: this.visionModel,
    max_tokens: 500,
    system: `You are a household inventory assistant that categorizes items.
Given an item description and existing categories, match and suggest new ones.
Existing categories (parent > subcategory):
${categoryList}
Respond with ONLY valid JSON:
{
  "matchedCategoryNames": ["exact names from the existing list"],
  "suggestedNewCategories": ["Parent Category: Subcategory"]
}`,
    messages: [
      {
        role: 'user',
        content: `Categorize this item:\n\nName: ${identification.primaryItem}\nDescription: ${identification.description}`,
      },
    ],
  });

  const categorization = this.parseJsonResponse<{
    matchedCategoryNames: string[];
    suggestedNewCategories: string[];
  }>(response);

  return { ...identification, ...categorization };
}

This two-step separation matters. If you show the AI both the image and the categories at once, it anchors on the categories and shoehorns the identification to fit them. By splitting the steps, the visual identification stays honest.
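Both steps funnel through parseJsonResponse, which the excerpts above don't show. A minimal sketch of what such a helper could look like, assuming the first content block is text and tolerating a stray markdown fence despite the prompt:

// Sketch only; the repo's actual helper isn't shown. Pulls the first
// text block out of the Claude response and parses it as JSON,
// stripping a markdown fence if the model added one anyway.
import type Anthropic from '@anthropic-ai/sdk';

function parseJsonResponse<T>(response: Anthropic.Messages.Message): T {
  const block = response.content.find((b) => b.type === 'text');
  if (!block || block.type !== 'text') {
    throw new Error('Expected a text block in the Claude response');
  }
  const raw = block.text.replace(/^```(?:json)?\s*|\s*```$/g, '').trim();
  return JSON.parse(raw) as T;
}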
Images are deduplicated by SHA-256 hash before processing — upload the same photo twice, and the system returns the cached result. The images themselves land in cloud storage (DigitalOcean Spaces) for CDN delivery, and each detection result is tracked in Notion with its status, so you can see what's been processed and what failed.
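The hashing side is just Node's built-in crypto; the cache around it can be any store. A sketch, with the detectionCache lookup left hypothetical:

// Dedup key: identical bytes always hash to the same digest, so a
// re-uploaded photo can hit the cache instead of calling Claude again.
import { createHash } from 'node:crypto';

function imageHash(imageBuffer: Buffer): string {
  return createHash('sha256').update(imageBuffer).digest('hex');
}

// Usage sketch (detectionCache is a hypothetical store):
// const key = imageHash(buffer);
// const cached = await detectionCache.get(key);
// if (cached) return cached;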
Finding and Suggesting
The search works the way you'd expect — full-text on names, filter by room or category, look up by ID.
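A name search, for instance, reduces to a single filter on the items data source. A hypothetical sketch, reusing the same query API and schema maps as the suggestion code below:

// Hypothetical name search (same Notion query API as the suggestion
// snippet below); the hardcoded query text is illustrative.
const found = await context.notionService.dataSources.query({
  data_source_id: dsId,
  page_size: 25,
  filter: {
    property: InventoryItemPropertyMap.name,
    title: { contains: 'rain boots' },
  } as never,
});

But the more interesting feature is location suggestion.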
When you add a new item, the system finds items with overlapping categories, collects their locations, and ranks by frequency:
// src/notion/services/tool-sets/item.tool-set.ts
// For each category, find items that share it and collect their locations
const locationCounts = new Map<string, number>();
for (const category of categories) {
  const response = await context.notionService.dataSources.query({
    data_source_id: dsId,
    page_size: 50,
    filter: {
      property: InventoryItemPropertyMap.categories,
      relation: { contains: category },
    } as never,
  });

  for (const page of response.results) {
    if (page.object !== 'page' || !('properties' in page)) continue;
    const extracted = extractPage<Record<string, unknown>>(
      page.properties,
      InventoryItemPropertyMap,
    );
    const locationIds = (extracted.location as string[]) ?? [];
    for (const locId of locationIds) {
      locationCounts.set(locId, (locationCounts.get(locId) ?? 0) + 1);
    }
  }
}

// Sort by count descending, take top 5
const topLocationIds = [...locationCounts.entries()]
  .sort((a, b) => b[1] - a[1])
  .slice(0, 5)
  .map(([id]) => id);

If most of your art supplies are in the study room closet, it'll suggest that spot for the new paintbrushes. It surfaces the top five candidates along with the similar items already stored there, so you can make an informed choice.
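The extractPage helper doing the property unwrapping above isn't shown in the excerpt. A minimal sketch of the idea: walk the property map and reduce each Notion property object to a plain value, with relations becoming arrays of page IDs:

// Sketch of an extractPage-style helper (assumed; the real one isn't
// in the excerpt). Maps schema keys to plain values: relations become
// page-ID arrays, titles become strings, and so on.
function extractPage<T extends Record<string, unknown>>(
  properties: Record<string, any>,
  map: Record<string, string>,
): Partial<T> {
  const out: Record<string, unknown> = {};
  for (const [key, notionName] of Object.entries(map)) {
    const prop = properties[notionName];
    if (!prop) continue;
    switch (prop.type) {
      case 'relation':
        out[key] = prop.relation.map((r: { id: string }) => r.id);
        break;
      case 'title':
        out[key] = prop.title
          .map((t: { plain_text: string }) => t.plain_text)
          .join('');
        break;
      case 'select':
        out[key] = prop.select?.name ?? null;
        break;
      case 'checkbox':
        out[key] = prop.checkbox;
        break;
      case 'unique_id':
        out[key] = prop.unique_id.number;
        break;
      default:
        out[key] = prop; // leave rollups, formulas, files raw
    }
  }
  return out as Partial<T>;
}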
There's also a text enhancement flow for when you don't have a photo. The same two-step pipeline, but starting from text:
// src/claude/claude.service.ts
async enhanceItemText(
  rawText: string,
  existingCategories: CategoryEntry[],
): Promise<ClaudeVisionDetection> {
  const identification = await this.identifyText(rawText);
  return this.categorizeItem(identification, existingCategories);
}

private async identifyText(
  rawText: string,
): Promise<ClaudeVisionIdentification> {
  const response = await this.client.messages.create({
    model: 'claude-haiku-4-5-20251001',
    max_tokens: 500,
    system: `You are a household inventory assistant. Given a text description,
provide a canonical item name and description.
Respond with ONLY valid JSON:
{
  "primaryItem": "concise canonical name",
  "description": "2-3 sentence description"
}`,
    messages: [{ role: 'user', content: rawText }],
  });

  return this.parseJsonResponse<ClaudeVisionIdentification>(response);
}

Type "kids wooden blocks" and the AI normalizes it, generates a proper name and description, and fuzzy-matches it against categories — catching partial matches on both subcategories and parent categories.
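That partial matching isn't in the excerpt either; conceptually it's case-insensitive substring containment against both levels of the category tree. A hypothetical sketch:

// Hypothetical sketch of the fuzzy category match: a suggestion hits
// when it overlaps a subcategory name or that subcategory's parent.
interface CategoryEntry {
  name: string; // subcategory, e.g. 'Building Blocks'
  category: string; // parent, e.g. 'Toys'
}

function fuzzyMatchCategories(
  suggestions: string[],
  existing: CategoryEntry[],
): CategoryEntry[] {
  const norm = (s: string) => s.toLowerCase().trim();
  return existing.filter((entry) =>
    suggestions.some((s) => {
      const q = norm(s);
      return (
        norm(entry.name).includes(q) ||
        q.includes(norm(entry.name)) ||
        norm(entry.category).includes(q)
      );
    }),
  );
}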
The MCP Layer
The whole inventory is exposed through an MCP (Model Context Protocol) server, which means AI assistants can interact with it conversationally. Ask "where are the rain boots?" and the assistant can search the inventory, find the item, resolve its location, and tell you the room, shelf, and container — all through natural language.
The MCP tools cover the full CRUD lifecycle: creating items, updating locations, suggesting storage spots, and even triggering label prints. It turns the inventory from a passive database into something you can talk to.
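Wiring a tool into an MCP server is mostly boilerplate. Here's a minimal sketch using the official TypeScript SDK; the tool name, schema, and stubbed search are illustrative, not the repo's actual definitions:

// Illustrative MCP tool registration (names and stub are hypothetical).
import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

const server = new McpServer({ name: 'home-inventory', version: '1.0.0' });

// Stand-in for the real Notion-backed search.
async function searchInventory(query: string): Promise<string[]> {
  return [`(stub) results for "${query}"`];
}

server.tool(
  'search_items',
  'Full-text search over inventory items by name',
  { query: z.string().describe('Search text, e.g. "rain boots"') },
  async ({ query }) => {
    const results = await searchInventory(query);
    return { content: [{ type: 'text', text: JSON.stringify(results) }] };
  },
);

From there, any MCP-capable assistant can call search_items mid-conversation and hand the structured results back to you in plain language.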
Was It Worth It?
We've cataloged around 400 items so far. The re-buying has stopped. The "where is it?" conversations now have answers. And because it's Notion, my wife actually uses it — she can browse locations, check items, and add things without touching a terminal.
The AI detection saves maybe 30 seconds per item, but the real value is in categorization. It consistently matches items to categories I wouldn't have thought of, which makes future searches more useful. The system gets smarter as it grows.
If your household has the same "we own it but can't find it" problem, the tooling exists to solve it. You don't need a warehouse management system. You need a Notion workspace, a camera, and a bit of glue code.