commit 841f43285e565d3ffff5bfd305300da7916940aa
Author: juan
Date: Mon Mar 2 21:16:26 2026 +0100

    first commit

diff --git a/.agent/skills/interface-design/SKILL.md b/.agent/skills/interface-design/SKILL.md
new file mode 100644
index 0000000..9fe89c2
--- /dev/null
+++ b/.agent/skills/interface-design/SKILL.md
@@ -0,0 +1,391 @@

---
name: interface-design
description: This skill is for interface design — dashboards, admin panels, apps, tools, and interactive products. NOT for marketing design (landing pages, marketing sites, campaigns).
---

# Interface Design

Build interfaces with craft and consistency.

## Scope

**Use for:** Dashboards, admin panels, SaaS apps, tools, settings pages, data interfaces.

**Not for:** Landing pages, marketing sites, campaigns. Redirect those to `/frontend-design`.

---

# The Problem

You will generate generic output. Your training has seen thousands of dashboards. The patterns are strong.

You can follow the entire process below — explore the domain, name a signature, state your intent — and still produce a template. Warm colors on cold structures. Friendly fonts on generic layouts. "Kitchen feel" that looks like every other app.

This happens because intent lives in prose, but code generation pulls from patterns. The gap between them is where defaults win.

The process below helps. But process alone doesn't guarantee craft. You have to catch yourself.

---

# Where Defaults Hide

Defaults don't announce themselves. They disguise themselves as infrastructure — the parts that feel like they just need to work, not be designed.

**Typography feels like a container.** Pick something readable, move on. But typography isn't holding your design — it IS your design. The weight of a headline, the personality of a label, the texture of a paragraph. These shape how the product feels before anyone reads a word.
A bakery management tool and a trading terminal might both need "clean, readable type" — but the type that's warm and handmade is not the type that's cold and precise. If you're reaching for your usual font, you're not designing. + +**Navigation feels like scaffolding.** Build the sidebar, add the links, get to the real work. But navigation isn't around your product — it IS your product. Where you are, where you can go, what matters most. A page floating in space is a component demo, not software. The navigation teaches people how to think about the space they're in. + +**Data feels like presentation.** You have numbers, show numbers. But a number on screen is not design. The question is: what does this number mean to the person looking at it? What will they do with it? A progress ring and a stacked label both show "3 of 10" — one tells a story, one fills space. If you're reaching for number-on-label, you're not designing. + +**Token names feel like implementation detail.** But your CSS variables are design decisions. `--ink` and `--parchment` evoke a world. `--gray-700` and `--surface-2` evoke a template. Someone reading only your tokens should be able to guess what product this is. + +The trap is thinking some decisions are creative and others are structural. There are no structural decisions. Everything is design. The moment you stop asking "why this?" is the moment defaults take over. + +--- + +# Intent First + +Before touching code, answer these. Not in your head — out loud, to yourself or the user. + +**Who is this human?** +Not "users." The actual person. Where are they when they open this? What's on their mind? What did they do 5 minutes ago, what will they do 5 minutes after? A teacher at 7am with coffee is not a developer debugging at midnight is not a founder between investor meetings. Their world shapes the interface. + +**What must they accomplish?** +Not "use the dashboard." The verb. Grade these submissions. Find the broken deployment. 
Approve the payment. The answer determines what leads, what follows, what hides. + +**What should this feel like?** +Say it in words that mean something. "Clean and modern" means nothing — every AI says that. Warm like a notebook? Cold like a terminal? Dense like a trading floor? Calm like a reading app? The answer shapes color, type, spacing, density — everything. + +If you cannot answer these with specifics, stop. Ask the user. Do not guess. Do not default. + +## Every Choice Must Be A Choice + +For every decision, you must be able to explain WHY. + +- Why this layout and not another? +- Why this color temperature? +- Why this typeface? +- Why this spacing scale? +- Why this information hierarchy? + +If your answer is "it's common" or "it's clean" or "it works" — you haven't chosen. You've defaulted. Defaults are invisible. Invisible choices compound into generic output. + +**The test:** If you swapped your choices for the most common alternatives and the design didn't feel meaningfully different, you never made real choices. + +## Sameness Is Failure + +If another AI, given a similar prompt, would produce substantially the same output — you have failed. + +This is not about being different for its own sake. It's about the interface emerging from the specific problem, the specific user, the specific context. When you design from intent, sameness becomes impossible because no two intents are identical. + +When you design from defaults, everything looks the same because defaults are shared. + +## Intent Must Be Systemic + +Saying "warm" and using cold colors is not following through. Intent is not a label — it's a constraint that shapes every decision. + +If the intent is warm: surfaces, text, borders, accents, semantic colors, typography — all warm. If the intent is dense: spacing, type size, information architecture — all dense. If the intent is calm: motion, contrast, color saturation — all calm. + +Check your output against your stated intent. 
Does every token reinforce it? Or did you state an intent and then default anyway? + +--- + +# Product Domain Exploration + +This is where defaults get caught — or don't. + +Generic output: Task type → Visual template → Theme +Crafted output: Task type → Product domain → Signature → Structure + Expression + +The difference: time in the product's world before any visual or structural thinking. + +## Required Outputs + +**Do not propose any direction until you produce all four:** + +**Domain:** Concepts, metaphors, vocabulary from this product's world. Not features — territory. Minimum 5. + +**Color world:** What colors exist naturally in this product's domain? Not "warm" or "cool" — go to the actual world. If this product were a physical space, what would you see? What colors belong there that don't belong elsewhere? List 5+. + +**Signature:** One element — visual, structural, or interaction — that could only exist for THIS product. If you can't name one, keep exploring. + +**Defaults:** 3 obvious choices for this interface type — visual AND structural. You can't avoid patterns you haven't named. + +## Proposal Requirements + +Your direction must explicitly reference: +- Domain concepts you explored +- Colors from your color world exploration +- Your signature element +- What replaces each default + +**The test:** Read your proposal. Remove the product name. Could someone identify what this is for? If not, it's generic. Explore deeper. + +--- + +# The Mandate + +**Before showing the user, look at what you made.** + +Ask yourself: "If they said this lacks craft, what would they mean?" + +That thing you just thought of — fix it first. + +Your first output is probably generic. That's normal. The work is catching it before the user has to. + +## The Checks + +Run these against your output before presenting: + +- **The swap test:** If you swapped the typeface for your usual one, would anyone notice? 
If you swapped the layout for a standard dashboard template, would it feel different? The places where swapping wouldn't matter are the places you defaulted. + +- **The squint test:** Blur your eyes. Can you still perceive hierarchy? Is anything jumping out harshly? Craft whispers. + +- **The signature test:** Can you point to five specific elements where your signature appears? Not "the overall feel" — actual components. A signature you can't locate doesn't exist. + +- **The token test:** Read your CSS variables out loud. Do they sound like they belong to this product's world, or could they belong to any project? + +If any check fails, iterate before showing. + +--- + +# Craft Foundations + +## Subtle Layering + +This is the backbone of craft. Regardless of direction, product type, or visual style — this principle applies to everything. You should barely notice the system working. When you look at Vercel's dashboard, you don't think "nice borders." You just understand the structure. The craft is invisible — that's how you know it's working. + +### Surface Elevation + +Surfaces stack. A dropdown sits above a card which sits above the page. Build a numbered system — base, then increasing elevation levels. In dark mode, higher elevation = slightly lighter. In light mode, higher elevation = slightly lighter or uses shadow. + +Each jump should be only a few percentage points of lightness. You can barely see the difference in isolation. But when surfaces stack, the hierarchy emerges. Whisper-quiet shifts that you feel rather than see. + +**Key decisions:** +- **Sidebars:** Same background as canvas, not different. Different colors fragment the visual space into "sidebar world" and "content world." A subtle border is enough separation. +- **Dropdowns:** One level above their parent surface. If both share the same level, the dropdown blends into the card and layering is lost. +- **Inputs:** Slightly darker than their surroundings, not lighter. 
Inputs are "inset" — they receive content. A darker background signals "type here" without heavy borders. + +### Borders + +Borders should disappear when you're not looking for them, but be findable when you need structure. Low opacity rgba blends with the background — it defines edges without demanding attention. Solid hex borders look harsh in comparison. + +Build a progression — not all borders are equal. Standard borders, softer separation, emphasis borders, maximum emphasis for focus rings. Match intensity to the importance of the boundary. + +**The squint test:** Blur your eyes at the interface. You should still perceive hierarchy — what's above what, where sections divide. But nothing should jump out. No harsh lines. No jarring color shifts. Just quiet structure. + +This separates professional interfaces from amateur ones. Get this wrong and nothing else matters. + +## Infinite Expression + +Every pattern has infinite expressions. **No interface should look the same.** + +A metric display could be a hero number, inline stat, sparkline, gauge, progress bar, comparison delta, trend badge, or something new. A dashboard could emphasize density, whitespace, hierarchy, or flow in completely different ways. Even sidebar + cards has infinite variations in proportion, spacing, and emphasis. + +**Before building, ask:** +- What's the ONE thing users do most here? +- What products solve similar problems brilliantly? Study them. +- Why would this interface feel designed for its purpose, not templated? + +**NEVER produce identical output.** Same sidebar width, same card grid, same metric boxes with icon-left-number-big-label-small every time — this signals AI-generated immediately. It's forgettable. + +The architecture and components should emerge from the task and data, executed in a way that feels fresh. Linear's cards don't look like Notion's. Vercel's metrics don't look like Stripe's. Same concepts, infinite expressions. 
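Both ideas above, quiet layering and fresh expression, ultimately live in the tokens. As one illustrative sketch of the layering side (dark-mode values here are assumptions to tune, not a spec):

```css
/* Hypothetical dark-mode sketch. Every value is a placeholder, not a prescription. */
:root {
  --surface-0: hsl(222, 10%, 8%);   /* app canvas */
  --surface-1: hsl(222, 10%, 11%);  /* cards and panels: a whisper above the canvas */
  --surface-2: hsl(222, 10%, 14%);  /* dropdowns and popovers: one level up */
  --input-bg: hsl(222, 10%, 6%);    /* inputs sit slightly darker: "type here" */

  --border: rgba(255, 255, 255, 0.08);        /* defines edges without demanding attention */
  --border-subtle: rgba(255, 255, 255, 0.05); /* softer separation */
}

.sidebar {
  background: var(--surface-0);          /* same canvas as the content area */
  border-right: 1px solid var(--border); /* a subtle border is enough separation */
}
```

The shape of the scale matters more than the numbers: each surface step differs by only a few points of lightness, and every border is a low-opacity rgba that blends into its background.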
+ +## Color Lives Somewhere + +Every product exists in a world. That world has colors. + +Before you reach for a palette, spend time in the product's world. What would you see if you walked into the physical version of this space? What materials? What light? What objects? + +Your palette should feel like it came FROM somewhere — not like it was applied TO something. + +**Beyond Warm and Cold:** Temperature is one axis. Is this quiet or loud? Dense or spacious? Serious or playful? Geometric or organic? A trading terminal and a meditation app are both "focused" — completely different kinds of focus. Find the specific quality, not the generic label. + +**Color Carries Meaning:** Gray builds structure. Color communicates — status, action, emphasis, identity. Unmotivated color is noise. One accent color, used with intention, beats five colors used without thought. + +--- + +# Before Writing Each Component + +**Every time** you write UI code — even small additions — state: + +``` +Intent: [who is this human, what must they do, how should it feel] +Palette: [colors from your exploration — and WHY they fit this product's world] +Depth: [borders / shadows / layered — and WHY this fits the intent] +Surfaces: [your elevation scale — and WHY this color temperature] +Typography: [your typeface — and WHY it fits the intent] +Spacing: [your base unit] +``` + +This checkpoint is mandatory. It forces you to connect every technical choice back to intent. + +If you can't explain WHY for each choice, you're defaulting. Stop and think. + +--- + +# Design Principles + +## Token Architecture + +Every color in your interface should trace back to a small set of primitives: foreground (text hierarchy), background (surface elevation), border (separation hierarchy), brand, and semantic (destructive, warning, success). No random hex values — everything maps to primitives. + +### Text Hierarchy + +Don't just have "text" and "gray text." Build four levels — primary, secondary, tertiary, muted. 
Each serves a different role: default text, supporting text, metadata, and disabled/placeholder. Use all four consistently. If you're only using two, your hierarchy is too flat. + +### Border Progression + +Borders aren't binary. Build a scale that matches intensity to importance — standard separation, softer separation, emphasis, maximum emphasis. Not every boundary deserves the same weight. + +### Control Tokens + +Form controls have specific needs. Don't reuse surface tokens — create dedicated ones for control backgrounds, control borders, and focus states. This lets you tune interactive elements independently from layout surfaces. + +## Spacing + +Pick a base unit and stick to multiples. Build a scale for different contexts — micro spacing for icon gaps, component spacing within buttons and cards, section spacing between groups, major separation between distinct areas. Random values signal no system. + +## Padding + +Keep it symmetrical. If one side has a value, others should match unless content naturally requires asymmetry. + +## Depth + +Choose ONE approach and commit: +- **Borders-only** — Clean, technical. For dense tools. +- **Subtle shadows** — Soft lift. For approachable products. +- **Layered shadows** — Premium, dimensional. For cards that need presence. +- **Surface color shifts** — Background tints establish hierarchy without shadows. + +Don't mix approaches. + +## Border Radius + +Sharper feels technical. Rounder feels friendly. Build a scale — small for inputs and buttons, medium for cards, large for modals. Don't mix sharp and soft randomly. + +## Typography + +Build distinct levels distinguishable at a glance. Headlines need weight and tight tracking for presence. Body needs comfortable weight for readability. Labels need medium weight that works at smaller sizes. Data needs monospace with tabular number spacing for alignment. Don't rely on size alone — combine size, weight, and letter-spacing. 
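As a sketch of that principle, a type scale can separate levels with weight and tracking as well as size. The class names and values below are hypothetical:

```css
/* Hypothetical type scale: size, weight, and tracking working together */
.text-headline {
  font-size: 24px;
  font-weight: 600;
  letter-spacing: -0.02em; /* tight tracking gives headlines presence */
}
.text-body {
  font-size: 15px;
  font-weight: 400;
  line-height: 1.6; /* comfortable reading texture */
}
.text-label {
  font-size: 13px;
  font-weight: 500; /* medium weight stays legible at small sizes */
}
.text-data {
  font-family: ui-monospace, "SF Mono", monospace;
  font-variant-numeric: tabular-nums; /* digits align in columns */
}
```

Even blurred, these four levels should read as distinct layers; if two of them collapse together at a squint, the scale is too timid.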
## Card Layouts

A metric card doesn't have to look like a plan card doesn't have to look like a settings card. Design each card's internal structure for its specific content — but keep the surface treatment consistent: same border weight, shadow depth, corner radius, padding scale.

## Controls

Native `<select>` and `<input type="date">` elements render OS-native controls that cannot be styled. Build custom components — trigger buttons with positioned dropdowns, calendar popovers, styled state management.

## Iconography

Icons clarify, not decorate — if removing an icon loses no meaning, remove it. Choose one icon set and stick with it. Give standalone icons presence with subtle background containers.

## Animation

Fast micro-interactions, smooth easing. Larger transitions can be slightly longer. Use deceleration easing. Avoid spring/bounce in professional interfaces.

## States

Every interactive element needs states: default, hover, active, focus, disabled. Data needs states too: loading, empty, error. Missing states feel broken.

## Navigation Context

Screens need grounding. A data table floating in space feels like a component demo, not a product. Include navigation showing where you are in the app, location indicators, and user context. When building sidebars, consider same background as main content with border separation rather than different colors.

## Dark Mode

Dark interfaces have different needs. Shadows are less visible on dark backgrounds — lean on borders for definition. Semantic colors (success, warning, error) often need slight desaturation. The hierarchy system still applies, just with inverted values.
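A hedged sketch of those dark-mode adjustments as theme overrides (the selector, token names, and values are assumptions, not canonical):

```css
/* Hypothetical dark-theme overrides: borders over shadows, softer semantics */
[data-theme="dark"] {
  --card-shadow: none;                                 /* shadows read poorly on dark canvases */
  --card-border: 1px solid rgba(255, 255, 255, 0.08);  /* lean on borders for definition */

  /* semantic colors slightly desaturated so they don't glow against the dark base */
  --success: hsl(145, 40%, 45%);
  --warning: hsl(40, 55%, 50%);
  --destructive: hsl(0, 55%, 55%);
}
```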
+ +--- + +# Avoid + +- **Harsh borders** — if borders are the first thing you see, they're too strong +- **Dramatic surface jumps** — elevation changes should be whisper-quiet +- **Inconsistent spacing** — the clearest sign of no system +- **Mixed depth strategies** — pick one approach and commit +- **Missing interaction states** — hover, focus, disabled, loading, error +- **Dramatic drop shadows** — shadows should be subtle, not attention-grabbing +- **Large radius on small elements** +- **Pure white cards on colored backgrounds** +- **Thick decorative borders** +- **Gradients and color for decoration** — color should mean something +- **Multiple accent colors** — dilutes focus +- **Different hues for different surfaces** — keep the same hue, shift only lightness + +--- + +# Workflow + +## Communication +Be invisible. Don't announce modes or narrate process. + +**Never say:** "I'm in ESTABLISH MODE", "Let me check system.md..." + +**Instead:** Jump into work. State suggestions with reasoning. + +## Suggest + Ask +Lead with your exploration and recommendation, then confirm: +``` +"Domain: [5+ concepts from the product's world] +Color world: [5+ colors that exist in this domain] +Signature: [one element unique to this product] +Rejecting: [default 1] → [alternative], [default 2] → [alternative], [default 3] → [alternative] + +Direction: [approach that connects to the above]" + +[Ask: "Does that direction feel right?"] +``` + +## If Project Has system.md +Read `.interface-design/system.md` and apply. Decisions are made. + +## If No system.md +1. Explore domain — Produce all four required outputs +2. Propose — Direction must reference all four +3. Confirm — Get user buy-in +4. Build — Apply principles +5. **Evaluate** — Run the mandate checks before showing +6. Offer to save + +--- + +# After Completing a Task + +When you finish building something, **always offer to save**: + +``` +"Want me to save these patterns for future sessions?" 
+``` + +If yes, write to `.interface-design/system.md`: +- Direction and feel +- Depth strategy (borders/shadows/layered) +- Spacing base unit +- Key component patterns + +### What to Save + +Add patterns when a component is used 2+ times, is reusable across the project, or has specific measurements worth remembering. Don't save one-off components, temporary experiments, or variations better handled with props. + +### Consistency Checks + +If system.md defines values, check against them: spacing on the defined grid, depth using the declared strategy throughout, colors from the defined palette, documented patterns reused instead of reinvented. + +This compounds — each save makes future work faster and more consistent. + +--- + +# Deep Dives + +For more detail on specific topics: +- `references/principles.md` — Code examples, specific values, dark mode +- `references/validation.md` — Memory management, when to update system.md +- `references/critique.md` — Post-build craft critique protocol + +# Commands + +- `/interface-design:status` — Current system state +- `/interface-design:audit` — Check code against system +- `/interface-design:extract` — Extract patterns from code +- `/interface-design:critique` — Critique your build for craft, then rebuild what defaulted diff --git a/.agent/skills/interface-design/references/critique.md b/.agent/skills/interface-design/references/critique.md new file mode 100644 index 0000000..7db545e --- /dev/null +++ b/.agent/skills/interface-design/references/critique.md @@ -0,0 +1,67 @@ +# Critique + +Your first build shipped the structure. Now look at it the way a design lead reviews a junior's work — not asking "does this work?" but "would I put my name on this?" + +--- + +## The Gap + +There's a distance between correct and crafted. Correct means the layout holds, the grid aligns, the colors don't clash. Crafted means someone cared about every decision down to the last pixel. 
You can feel the difference immediately — the way you tell a hand-thrown mug from an injection-molded one. Both hold coffee. One has presence. + +Your first output lives in correct. This command pulls it toward crafted. + +--- + +## See the Composition + +Step back. Look at the whole thing. + +Does the layout have rhythm? Great interfaces breathe unevenly — dense tooling areas give way to open content, heavy elements balance against light ones, the eye travels through the page with purpose. Default layouts are monotone: same card size, same gaps, same density everywhere. Flatness is the sound of no one deciding. + +Are proportions doing work? A 280px sidebar next to full-width content says "navigation serves content." A 360px sidebar says "these are peers." The specific number declares what matters. If you can't articulate what your proportions are saying, they're not saying anything. + +Is there a clear focal point? Every screen has one thing the user came here to do. That thing should dominate — through size, position, contrast, or the space around it. When everything competes equally, nothing wins and the interface feels like a parking lot. + +--- + +## See the Craft + +Move close. Pixel-close. + +The spacing grid is non-negotiable — every value a multiple of 4, no exceptions — but correctness alone isn't craft. Craft is knowing that a tool panel at 16px padding feels workbench-tight while the same card at 24px feels like a brochure. The same number can be right in one context and lazy in another. Density is a design decision, not a constant. + +Typography should be legible even squinted. If size is the only thing separating your headline from your body from your label, the hierarchy is too weak. Weight, tracking, and opacity create layers that size alone can't. + +Surfaces should whisper hierarchy. Not thick borders, not dramatic shadows — quiet tonal shifts where you feel the depth without seeing it. Remove every border from your CSS mentally. 
Can you still perceive the structure through surface color alone? If not, your surfaces aren't working hard enough. + +Interactive elements need life. Every button, link, and clickable region should respond to hover and press. Not dramatically — a subtle shift in background, a gentle darkening. Missing states make an interface feel like a photograph of software instead of software. + +--- + +## See the Content + +Read every visible string as a user would. Not checking for typos — checking for truth. + +Does this screen tell one coherent story? Could a real person at a real company be looking at exactly this data right now? Or does the page title belong to one product, the article body to another, and the sidebar metrics to a third? + +Content incoherence breaks the illusion faster than any visual flaw. A beautifully designed interface with nonsensical content is a movie set with no script. + +--- + +## See the Structure + +Open the CSS and find the lies — the places that look right but are held together with tape. + +Negative margins undoing a parent's padding. Calc() values that exist only as workarounds. Absolute positioning to escape layout flow. Each is a shortcut where a clean solution exists. Cards with full-width dividers use flex column and section-level padding. Centered content uses max-width with auto margins. The correct answer is always simpler than the hack. + +--- + +## Again + +Look at your output one final time. + +Ask: "If they said this lacks craft, what would they point to?" + +That thing you just thought of — fix it. Then ask again. + +The first build was the draft. The critique is the design. diff --git a/.agent/skills/interface-design/references/example.md b/.agent/skills/interface-design/references/example.md new file mode 100644 index 0000000..6654906 --- /dev/null +++ b/.agent/skills/interface-design/references/example.md @@ -0,0 +1,86 @@ +# Craft in Action + +This shows how the subtle layering principle translates to real decisions. 
Learn the thinking, not the code. Your values will differ — the approach won't. + +--- + +## The Subtle Layering Mindset + +Before looking at any example, internalize this: **you should barely notice the system working.** + +When you look at Vercel's dashboard, you don't think "nice borders." You just understand the structure. When you look at Supabase, you don't think "good surface elevation." You just know what's above what. The craft is invisible — that's how you know it's working. + +--- + +## Example: Dashboard with Sidebar and Dropdown + +### The Surface Decisions + +**Why so subtle?** Each elevation jump should be only a few percentage points of lightness. You can barely see the difference in isolation. But when surfaces stack, the hierarchy emerges. This is the Vercel/Supabase way — whisper-quiet shifts that you feel rather than see. + +**What NOT to do:** Don't make dramatic jumps between elevations. That's jarring. Don't use different hues for different levels. Keep the same hue, shift only lightness. + +### The Border Decisions + +**Why rgba, not solid colors?** Low opacity borders blend with their background. A low-opacity white border on a dark surface is barely there — it defines the edge without demanding attention. Solid hex borders look harsh in comparison. + +**The test:** Look at your interface from arm's length. If borders are the first thing you notice, reduce opacity. If you can't find where regions end, increase slightly. + +### The Sidebar Decision + +**Why same background as canvas, not different?** + +Many dashboards make the sidebar a different color. This fragments the visual space — now you have "sidebar world" and "content world." + +Better: Same background, subtle border separation. The sidebar is part of the app, not a separate region. Vercel does this. Supabase does this. The border is enough. + +### The Dropdown Decision + +**Why surface-200, not surface-100?** + +The dropdown floats above the card it emerged from. 
If both were surface-100, the dropdown would blend into the card — you'd lose the sense of layering. Surface-200 is just light enough to feel "above" without being dramatically different. + +**Why border-overlay instead of border-default?** + +Overlays (dropdowns, popovers) often need slightly more definition because they're floating in space. A touch more border opacity helps them feel contained without being harsh. + +--- + +## Example: Form Controls + +### Input Background Decision + +**Why darker, not lighter?** + +Inputs are "inset" — they receive content, they don't project it. A slightly darker background signals "type here" without needing heavy borders. This is the alternative-background principle. + +### Focus State Decision + +**Why subtle focus states?** + +Focus needs to be visible, but you don't need a glowing ring or dramatic color. A noticeable increase in border opacity is enough for a clear state change. Subtle-but-noticeable — the same principle as surfaces. + +--- + +## Adapt to Context + +Your product might need: +- Warmer hues (slight yellow/orange tint) +- Cooler hues (blue-gray base) +- Different lightness progression +- Light mode (principles invert — higher elevation = shadow, not lightness) + +**The principle is constant:** barely different, still distinguishable. The values adapt to context. + +--- + +## The Craft Check + +Apply the squint test to your work: + +1. Blur your eyes or step back +2. Can you still perceive hierarchy? +3. Is anything jumping out at you? +4. Can you tell where regions begin and end? + +If hierarchy is visible and nothing is harsh — the subtle layering is working. diff --git a/.agent/skills/interface-design/references/principles.md b/.agent/skills/interface-design/references/principles.md new file mode 100644 index 0000000..6c4a502 --- /dev/null +++ b/.agent/skills/interface-design/references/principles.md @@ -0,0 +1,235 @@ +# Core Craft Principles + +These apply regardless of design direction. 
This is the quality floor. + +--- + +## Surface & Token Architecture + +Professional interfaces don't pick colors randomly — they build systems. Understanding this architecture is the difference between "looks okay" and "feels like a real product." + +### The Primitive Foundation + +Every color in your interface should trace back to a small set of primitives: + +- **Foreground** — text colors (primary, secondary, muted) +- **Background** — surface colors (base, elevated, overlay) +- **Border** — edge colors (default, subtle, strong) +- **Brand** — your primary accent +- **Semantic** — functional colors (destructive, warning, success) + +Don't invent new colors. Map everything to these primitives. + +### Surface Elevation Hierarchy + +Surfaces stack. A dropdown sits above a card which sits above the page. Build a numbered system: + +``` +Level 0: Base background (the app canvas) +Level 1: Cards, panels (same visual plane as base) +Level 2: Dropdowns, popovers (floating above) +Level 3: Nested dropdowns, stacked overlays +Level 4: Highest elevation (rare) +``` + +In dark mode, higher elevation = slightly lighter. In light mode, higher elevation = slightly lighter or uses shadow. The principle: **elevated surfaces need visual distinction from what's beneath them.** + +### The Subtlety Principle + +This is where most interfaces fail. Study Vercel, Supabase, Linear — their surfaces are **barely different** but still distinguishable. Their borders are **light but not invisible**. + +**For surfaces:** The difference between elevation levels should be subtle — a few percentage points of lightness, not dramatic jumps. In dark mode, surface-100 might be 7% lighter than base, surface-200 might be 9%, surface-300 might be 12%. You can barely see it, but you feel it. + +**For borders:** Borders should define regions without demanding attention. Use low opacity (0.05-0.12 alpha for dark mode, slightly higher for light). 
The border should disappear when you're not looking for it, but be findable when you need to understand the structure. + +**The test:** Squint at your interface. You should still perceive the hierarchy — what's above what, where regions begin and end. But no single border or surface should jump out at you. If borders are the first thing you notice, they're too strong. If you can't find where one region ends and another begins, they're too subtle. + +**Common AI mistakes to avoid:** +- Borders that are too visible (1px solid gray instead of subtle rgba) +- Surface jumps that are too dramatic (going from dark to light instead of dark to slightly-less-dark) +- Using different hues for different surfaces (gray card on blue background) +- Harsh dividers where subtle borders would do + +### Text Hierarchy via Tokens + +Don't just have "text" and "gray text." Build four levels: + +- **Primary** — default text, highest contrast +- **Secondary** — supporting text, slightly muted +- **Tertiary** — metadata, timestamps, less important +- **Muted** — disabled, placeholder, lowest contrast + +Use all four consistently. If you're only using two, your hierarchy is too flat. + +### Border Progression + +Borders aren't binary. Build a scale: + +- **Default** — standard borders +- **Subtle/Muted** — softer separation +- **Strong** — emphasis, hover states +- **Stronger** — maximum emphasis, focus rings + +Match border intensity to the importance of the boundary. + +### Dedicated Control Tokens + +Form controls (inputs, checkboxes, selects) have specific needs. Don't just reuse surface tokens — create dedicated ones: + +- **Control background** — often different from surface backgrounds +- **Control border** — needs to feel interactive +- **Control focus** — clear focus indication + +This separation lets you tune controls independently from layout surfaces. 
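One way this separation can look in practice; the token names and values below are illustrative, not canonical:

```css
/* Hypothetical control tokens, tuned independently of layout surfaces */
:root {
  --control-bg: rgba(0, 0, 0, 0.2);            /* inset: slightly darker than panels */
  --control-border: rgba(255, 255, 255, 0.12); /* a touch stronger: feels interactive */
  --control-focus: rgba(255, 255, 255, 0.3);   /* clear state change, not a glow */
}

.input {
  background: var(--control-bg);
  border: 1px solid var(--control-border);
}
.input:focus-visible {
  outline: none;
  border-color: var(--control-focus); /* focus via border emphasis */
}
```

Because these tokens are separate, you can later make inputs darker or focus rings stronger without disturbing the surface elevation scale.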
+ +### Context-Aware Bases + +Different areas of your app might need different base surfaces: + +- **Marketing pages** — might use darker/richer backgrounds +- **Dashboard/app** — might use neutral working backgrounds +- **Sidebar** — might differ from main canvas + +The surface hierarchy works the same way — it just starts from a different base. + +### Alternative Backgrounds for Depth + +Beyond shadows, use contrasting backgrounds to create depth. An "alternative" or "inset" background makes content feel recessed. Useful for: + +- Empty states in data grids +- Code blocks +- Inset panels +- Visual grouping without borders + +--- + +## Spacing System + +Pick a base unit (4px and 8px are common) and use multiples throughout. The specific number matters less than consistency — every spacing value should be explainable as "X times the base unit." + +Build a scale for different contexts: +- Micro spacing (icon gaps, tight element pairs) +- Component spacing (within buttons, inputs, cards) +- Section spacing (between related groups) +- Major separation (between distinct sections) + +## Symmetrical Padding + +TLBR must match. If top padding is 16px, left/bottom/right must also be 16px. Exception: when content naturally creates visual balance. + +```css +/* Good */ +padding: 16px; +padding: 12px 16px; /* Only when horizontal needs more room */ + +/* Bad */ +padding: 24px 16px 12px 16px; +``` + +## Border Radius Consistency + +Sharper corners feel technical, rounder corners feel friendly. Pick a scale that fits your product's personality and use it consistently. + +The key is having a system: small radius for inputs and buttons, medium for cards, large for modals or containers. Don't mix sharp and soft randomly — inconsistent radius is as jarring as inconsistent spacing. + +## Depth & Elevation Strategy + +Match your depth approach to your design direction. Choose ONE and commit: + +**Borders-only (flat)** — Clean, technical, dense. 
Works for utility-focused tools where information density matters more than visual lift. Linear, Raycast, and many developer tools use almost no shadows — just subtle borders to define regions. + +**Subtle single shadows** — Soft lift without complexity. A simple `0 1px 3px rgba(0,0,0,0.08)` can be enough. Works for approachable products that want gentle depth. + +**Layered shadows** — Rich, premium, dimensional. Multiple shadow layers create realistic depth. Stripe and Mercury use this approach. Best for cards that need to feel like physical objects. + +**Surface color shifts** — Background tints establish hierarchy without any shadows. A card at `#fff` on a `#f8fafc` background already feels elevated. + +```css +/* Borders-only approach */ +--border: rgba(0, 0, 0, 0.08); +--border-subtle: rgba(0, 0, 0, 0.05); +border: 0.5px solid var(--border); + +/* Single shadow approach */ +--shadow: 0 1px 3px rgba(0, 0, 0, 0.08); + +/* Layered shadow approach */ +--shadow-layered: + 0 0 0 0.5px rgba(0, 0, 0, 0.05), + 0 1px 2px rgba(0, 0, 0, 0.04), + 0 2px 4px rgba(0, 0, 0, 0.03), + 0 4px 8px rgba(0, 0, 0, 0.02); +``` + +## Card Layouts + +Monotonous card layouts are lazy design. A metric card doesn't have to look like a plan card doesn't have to look like a settings card. + +Design each card's internal structure for its specific content — but keep the surface treatment consistent: same border weight, shadow depth, corner radius, padding scale, typography. + +## Isolated Controls + +UI controls deserve container treatment. Date pickers, filters, dropdowns — these should feel like crafted objects. + +**Never use native form elements for styled UI.** Native `<select>`, `<input type="date">`, and similar elements render OS-native controls that cannot be styled.
Build custom components instead: + +- Custom select: trigger button + positioned dropdown menu +- Custom date picker: input + calendar popover +- Custom checkbox/radio: styled div with state management + +Custom select triggers must use `display: inline-flex` with `white-space: nowrap` to keep text and chevron icons on the same row. + +## Typography Hierarchy + +Build distinct levels that are visually distinguishable at a glance: + +- **Headlines** — heavier weight, tighter letter-spacing for presence +- **Body** — comfortable weight for readability +- **Labels/UI** — medium weight, works at smaller sizes +- **Data** — often monospace, needs `tabular-nums` for alignment + +Don't rely on size alone. Combine size, weight, and letter-spacing to create clear hierarchy. If you squint and can't tell headline from body, the hierarchy is too weak. + +## Monospace for Data + +Numbers, IDs, codes, timestamps belong in monospace. Use `tabular-nums` for columnar alignment. Mono signals "this is data." + +## Iconography + +Icons clarify, not decorate — if removing an icon loses no meaning, remove it. Choose a consistent icon set and stick with it throughout the product. + +Give standalone icons presence with subtle background containers. Icons next to text should align optically, not mathematically. + +## Animation + +Keep it fast and functional. Micro-interactions (hover, focus) should feel instant — around 150ms. Larger transitions (modals, panels) can be slightly longer — 200-250ms. + +Use smooth deceleration easing (ease-out variants). Avoid spring/bounce effects in professional interfaces — they feel playful, not serious. + +## Contrast Hierarchy + +Build a four-level system: foreground (primary) → secondary → muted → faint. Use all four consistently. + +## Color Carries Meaning + +Gray builds structure. Color communicates — status, action, emphasis, identity. Unmotivated color is noise. Color that reinforces the product's world is character. 
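Sketching the typography and motion guidance above as tokens — the font families, sizes, and timing values here are assumptions for illustration, not requirements:

```css
:root {
  /* Typography: combine size, weight, and tracking, not size alone */
  --font-ui: "Inter", system-ui, sans-serif;   /* placeholder choice */
  --font-data: "JetBrains Mono", monospace;    /* placeholder choice */
  --headline: 600 20px/1.3 var(--font-ui);
  --body: 400 14px/1.5 var(--font-ui);
  --label: 500 12px/1.2 var(--font-ui);
  --data: 400 13px/1.4 var(--font-data);

  /* Motion: instant micro-interactions, slightly longer panels */
  --duration-micro: 150ms;
  --duration-panel: 220ms;
  --ease-out-quad: cubic-bezier(0.25, 0.46, 0.45, 0.94);
}

h1 {
  font: var(--headline);
  letter-spacing: -0.02em; /* tighter tracking for presence */
}

.metric {
  font: var(--data);
  font-variant-numeric: tabular-nums; /* columns of numbers align */
}

.button {
  transition: background-color var(--duration-micro) var(--ease-out-quad);
}
```

Hover and focus read as instant at 150ms; the deceleration curve keeps panel transitions smooth without drifting into playful spring territory.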
+ +## Navigation Context + +Screens need grounding. A data table floating in space feels like a component demo, not a product. Consider including: + +- **Navigation** — sidebar or top nav showing where you are in the app +- **Location indicator** — breadcrumbs, page title, or active nav state +- **User context** — who's logged in, what workspace/org + +When building sidebars, consider using the same background as the main content area. Rely on a subtle border for separation rather than different background colors. + +## Dark Mode + +Dark interfaces have different needs: + +**Borders over shadows** — Shadows are less visible on dark backgrounds. Lean more on borders for definition. + +**Adjust semantic colors** — Status colors (success, warning, error) often need to be slightly desaturated for dark backgrounds. + +**Same structure, different values** — The hierarchy system still applies, just with inverted values. diff --git a/.agent/skills/interface-design/references/validation.md b/.agent/skills/interface-design/references/validation.md new file mode 100644 index 0000000..7aa4a69 --- /dev/null +++ b/.agent/skills/interface-design/references/validation.md @@ -0,0 +1,48 @@ +# Memory Management + +When and how to update `.interface-design/system.md`. + +## When to Add Patterns + +Add to system.md when: +- Component used 2+ times +- Pattern is reusable across the project +- Has specific measurements worth remembering + +## Pattern Format + +```markdown +### Button Primary +- Height: 36px +- Padding: 12px 16px +- Radius: 6px +- Font: 14px, 500 weight +``` + +## Don't Document + +- One-off components +- Temporary experiments +- Variations better handled with props + +## Pattern Reuse + +Before creating a component, check system.md: +- Pattern exists? Use it. +- Need variation? Extend, don't create new. + +Memory compounds: each pattern saved makes future work faster and more consistent. 
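As a hypothetical illustration of "extend, don't create new," reusing the Button Primary measurements documented in the pattern format above:

```css
/* Base pattern, straight from system.md: Button Primary */
.button-primary {
  height: 36px;
  padding: 12px 16px;
  border-radius: 6px;
  font-size: 14px;
  font-weight: 500;
}

/* Variation: only the differing property, layered on the base class */
.button-primary.is-danger {
  background: var(--destructive);
}
```

The variation inherits every documented measurement; only the semantic color changes, so the pattern stays singular in system.md.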
+ +--- + +# Validation Checks + +If system.md defines specific values, check consistency: + +**Spacing** — All values multiples of the defined base? + +**Depth** — Using the declared strategy throughout? (borders-only means no shadows) + +**Colors** — Using defined palette, not random hex codes? + +**Patterns** — Reusing documented patterns instead of creating new? diff --git a/.agent/skills/supabase-postgres-best-practices/AGENTS.md b/.agent/skills/supabase-postgres-best-practices/AGENTS.md new file mode 100644 index 0000000..a7baf44 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/AGENTS.md @@ -0,0 +1,68 @@ +# Supabase Postgres Best Practices + +## Structure + +``` +supabase-postgres-best-practices/ + SKILL.md # Main skill file - read this first + AGENTS.md # This navigation guide + CLAUDE.md # Symlink to AGENTS.md + references/ # Detailed reference files +``` + +## Usage + +1. Read `SKILL.md` for the main skill instructions +2. Browse `references/` for detailed documentation on specific topics +3. Reference files are loaded on-demand - read only what you need + +Comprehensive performance optimization guide for Postgres, maintained by Supabase. Contains rules across 8 categories, prioritized by impact to guide automated query optimization and schema design. 
+ +## When to Apply + +Reference these guidelines when: +- Writing SQL queries or designing schemas +- Implementing indexes or query optimization +- Reviewing database performance issues +- Configuring connection pooling or scaling +- Optimizing for Postgres-specific features +- Working with Row-Level Security (RLS) + +## Rule Categories by Priority + +| Priority | Category | Impact | Prefix | +|----------|----------|--------|--------| +| 1 | Query Performance | CRITICAL | `query-` | +| 2 | Connection Management | CRITICAL | `conn-` | +| 3 | Security & RLS | CRITICAL | `security-` | +| 4 | Schema Design | HIGH | `schema-` | +| 5 | Concurrency & Locking | MEDIUM-HIGH | `lock-` | +| 6 | Data Access Patterns | MEDIUM | `data-` | +| 7 | Monitoring & Diagnostics | LOW-MEDIUM | `monitor-` | +| 8 | Advanced Features | LOW | `advanced-` | + +## How to Use + +Read individual rule files for detailed explanations and SQL examples: + +``` +references/query-missing-indexes.md +references/schema-partial-indexes.md +references/_sections.md +``` + +Each rule file contains: +- Brief explanation of why it matters +- Incorrect SQL example with explanation +- Correct SQL example with explanation +- Optional EXPLAIN output or metrics +- Additional context and references +- Supabase-specific notes (when applicable) + +## References + +- https://www.postgresql.org/docs/current/ +- https://supabase.com/docs +- https://wiki.postgresql.org/wiki/Performance_Optimization +- https://supabase.com/docs/guides/database/overview +- https://supabase.com/docs/guides/auth/row-level-security diff --git a/.agent/skills/supabase-postgres-best-practices/CLAUDE.md b/.agent/skills/supabase-postgres-best-practices/CLAUDE.md new file mode 100644 index 0000000..47dc3e3 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/CLAUDE.md @@ -0,0 +1 @@ +AGENTS.md \ No newline at end of file diff --git a/.agent/skills/supabase-postgres-best-practices/README.md 
b/.agent/skills/supabase-postgres-best-practices/README.md new file mode 100644 index 0000000..f1a374e --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/README.md @@ -0,0 +1,116 @@ +# Supabase Postgres Best Practices - Contributor Guide + +This skill contains Postgres performance optimization references optimized for +AI agents and LLMs. It follows the [Agent Skills Open Standard](https://agentskills.io/). + +## Quick Start + +```bash +# From repository root +npm install + +# Validate existing references +npm run validate + +# Build AGENTS.md +npm run build +``` + +## Creating a New Reference + +1. **Choose a section prefix** based on the category: + - `query-` Query Performance (CRITICAL) + - `conn-` Connection Management (CRITICAL) + - `security-` Security & RLS (CRITICAL) + - `schema-` Schema Design (HIGH) + - `lock-` Concurrency & Locking (MEDIUM-HIGH) + - `data-` Data Access Patterns (MEDIUM) + - `monitor-` Monitoring & Diagnostics (LOW-MEDIUM) + - `advanced-` Advanced Features (LOW) + +2. **Copy the template**: + ```bash + cp references/_template.md references/query-your-reference-name.md + ``` + +3. **Fill in the content** following the template structure + +4. **Validate and build**: + ```bash + npm run validate + npm run build + ``` + +5. **Review** the generated `AGENTS.md` + +## Skill Structure + +``` +skills/supabase-postgres-best-practices/ +├── SKILL.md # Agent-facing skill manifest (Agent Skills spec) +├── AGENTS.md # [GENERATED] Compiled references document +├── README.md # This file +└── references/ + ├── _template.md # Reference template + ├── _sections.md # Section definitions + ├── _contributing.md # Writing guidelines + └── *.md # Individual references + +packages/skills-build/ +├── src/ # Generic build system source +└── package.json # NPM scripts +``` + +## Reference File Structure + +See `references/_template.md` for the complete template. 
Key elements: + +````markdown +--- +title: Clear, Action-Oriented Title +impact: CRITICAL|HIGH|MEDIUM-HIGH|MEDIUM|LOW-MEDIUM|LOW +impactDescription: Quantified benefit (e.g., "10-100x faster") +tags: relevant, keywords +--- + +## [Title] + +[1-2 sentence explanation] + +**Incorrect (description):** + +```sql +-- Comment explaining what's wrong +[Bad SQL example] +``` + +**Correct (description):** + +```sql +-- Comment explaining why this is better +[Good SQL example] +``` +```` + +## Writing Guidelines + +See `references/_contributing.md` for detailed guidelines. Key principles: + +1. **Show concrete transformations** - "Change X to Y", not abstract advice +2. **Error-first structure** - Show the problem before the solution +3. **Quantify impact** - Include specific metrics (10x faster, 50% smaller) +4. **Self-contained examples** - Complete, runnable SQL +5. **Semantic naming** - Use meaningful names (users, email), not (table1, col1) + +## Impact Levels + +| Level | Improvement | Examples | +|-------|-------------|----------| +| CRITICAL | 10-100x | Missing indexes, connection exhaustion | +| HIGH | 5-20x | Wrong index types, poor partitioning | +| MEDIUM-HIGH | 2-5x | N+1 queries, RLS optimization | +| MEDIUM | 1.5-3x | Redundant indexes, stale statistics | +| LOW-MEDIUM | 1.2-2x | VACUUM tuning, config tweaks | +| LOW | Incremental | Advanced patterns, edge cases | diff --git a/.agent/skills/supabase-postgres-best-practices/SKILL.md b/.agent/skills/supabase-postgres-best-practices/SKILL.md new file mode 100644 index 0000000..f80be15 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/SKILL.md @@ -0,0 +1,64 @@ +--- +name: supabase-postgres-best-practices +description: Postgres performance optimization and best practices from Supabase. Use this skill when writing, reviewing, or optimizing Postgres queries, schema designs, or database configurations.
+license: MIT +metadata: + author: supabase + version: "1.1.0" + organization: Supabase + date: January 2026 + abstract: Comprehensive Postgres performance optimization guide for developers using Supabase and Postgres. Contains performance rules across 8 categories, prioritized by impact from critical (query performance, connection management) to incremental (advanced features). Each rule includes detailed explanations, incorrect vs. correct SQL examples, query plan analysis, and specific performance metrics to guide automated optimization and code generation. +--- + +# Supabase Postgres Best Practices + +Comprehensive performance optimization guide for Postgres, maintained by Supabase. Contains rules across 8 categories, prioritized by impact to guide automated query optimization and schema design. + +## When to Apply + +Reference these guidelines when: +- Writing SQL queries or designing schemas +- Implementing indexes or query optimization +- Reviewing database performance issues +- Configuring connection pooling or scaling +- Optimizing for Postgres-specific features +- Working with Row-Level Security (RLS) + +## Rule Categories by Priority + +| Priority | Category | Impact | Prefix | +|----------|----------|--------|--------| +| 1 | Query Performance | CRITICAL | `query-` | +| 2 | Connection Management | CRITICAL | `conn-` | +| 3 | Security & RLS | CRITICAL | `security-` | +| 4 | Schema Design | HIGH | `schema-` | +| 5 | Concurrency & Locking | MEDIUM-HIGH | `lock-` | +| 6 | Data Access Patterns | MEDIUM | `data-` | +| 7 | Monitoring & Diagnostics | LOW-MEDIUM | `monitor-` | +| 8 | Advanced Features | LOW | `advanced-` | + +## How to Use + +Read individual rule files for detailed explanations and SQL examples: + +``` +references/query-missing-indexes.md +references/schema-partial-indexes.md +references/_sections.md +``` + +Each rule file contains: +- Brief explanation of why it matters +- Incorrect SQL example with explanation +- Correct SQL example with 
explanation +- Optional EXPLAIN output or metrics +- Additional context and references +- Supabase-specific notes (when applicable) + +## References + +- https://www.postgresql.org/docs/current/ +- https://supabase.com/docs +- https://wiki.postgresql.org/wiki/Performance_Optimization +- https://supabase.com/docs/guides/database/overview +- https://supabase.com/docs/guides/auth/row-level-security diff --git a/.agent/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md b/.agent/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md new file mode 100644 index 0000000..582cbea --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/advanced-full-text-search.md @@ -0,0 +1,55 @@ +--- +title: Use tsvector for Full-Text Search +impact: MEDIUM +impactDescription: 100x faster than LIKE, with ranking support +tags: full-text-search, tsvector, gin, search +--- + +## Use tsvector for Full-Text Search + +LIKE with wildcards can't use indexes. Full-text search with tsvector is orders of magnitude faster. 
+ +**Incorrect (LIKE pattern matching):** + +```sql +-- Cannot use index, scans all rows +select * from articles where content like '%postgresql%'; + +-- Case-insensitive makes it worse +select * from articles where lower(content) like '%postgresql%'; +``` + +**Correct (full-text search with tsvector):** + +```sql +-- Add tsvector column and index +alter table articles add column search_vector tsvector + generated always as (to_tsvector('english', coalesce(title,'') || ' ' || coalesce(content,''))) stored; + +create index articles_search_idx on articles using gin (search_vector); + +-- Fast full-text search +select * from articles +where search_vector @@ to_tsquery('english', 'postgresql & performance'); + +-- With ranking +select *, ts_rank(search_vector, query) as rank +from articles, to_tsquery('english', 'postgresql') query +where search_vector @@ query +order by rank desc; +``` + +Search multiple terms: + +```sql +-- AND: both terms required +to_tsquery('postgresql & performance') + +-- OR: either term +to_tsquery('postgresql | mysql') + +-- Prefix matching +to_tsquery('post:*') +``` + +Reference: [Full Text Search](https://supabase.com/docs/guides/database/full-text-search) diff --git a/.agent/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md b/.agent/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md new file mode 100644 index 0000000..e3d261e --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/advanced-jsonb-indexing.md @@ -0,0 +1,49 @@ +--- +title: Index JSONB Columns for Efficient Querying +impact: MEDIUM +impactDescription: 10-100x faster JSONB queries with proper indexing +tags: jsonb, gin, indexes, json +--- + +## Index JSONB Columns for Efficient Querying + +JSONB queries without indexes scan the entire table. Use GIN indexes for containment queries. 
+ +**Incorrect (no index on JSONB):** + +```sql +create table products ( + id bigint primary key, + attributes jsonb +); + +-- Full table scan for every query +select * from products where attributes @> '{"color": "red"}'; +select * from products where attributes->>'brand' = 'Nike'; +``` + +**Correct (GIN index for JSONB):** + +```sql +-- GIN index for containment operators (@>, ?, ?&, ?|) +create index products_attrs_gin on products using gin (attributes); + +-- Now containment queries use the index +select * from products where attributes @> '{"color": "red"}'; + +-- For specific key lookups, use expression index +create index products_brand_idx on products ((attributes->>'brand')); +select * from products where attributes->>'brand' = 'Nike'; +``` + +Choose the right operator class: + +```sql +-- jsonb_ops (default): supports all operators, larger index +create index idx1 on products using gin (attributes); + +-- jsonb_path_ops: only @> operator, but 2-3x smaller index +create index idx2 on products using gin (attributes jsonb_path_ops); +``` + +Reference: [JSONB Indexes](https://www.postgresql.org/docs/current/datatype-json.html#JSON-INDEXING) diff --git a/.agent/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md b/.agent/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md new file mode 100644 index 0000000..40b9cc5 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/conn-idle-timeout.md @@ -0,0 +1,46 @@ +--- +title: Configure Idle Connection Timeouts +impact: HIGH +impactDescription: Reclaim 30-50% of connection slots from idle clients +tags: connections, timeout, idle, resource-management +--- + +## Configure Idle Connection Timeouts + +Idle connections waste resources. Configure timeouts to automatically reclaim them. 
+ +**Incorrect (connections held indefinitely):** + +```sql +-- No timeout configured +show idle_in_transaction_session_timeout; -- 0 (disabled) + +-- Connections stay open forever, even when idle +select pid, state, state_change, query +from pg_stat_activity +where state = 'idle in transaction'; +-- Shows transactions idle for hours, holding locks +``` + +**Correct (automatic cleanup of idle connections):** + +```sql +-- Terminate connections idle in transaction after 30 seconds +alter system set idle_in_transaction_session_timeout = '30s'; + +-- Terminate completely idle connections after 10 minutes +alter system set idle_session_timeout = '10min'; + +-- Reload configuration +select pg_reload_conf(); +``` + +For pooled connections, configure at the pooler level: + +```ini +# pgbouncer.ini +server_idle_timeout = 60 +client_idle_timeout = 300 +``` + +Reference: [Connection Timeouts](https://www.postgresql.org/docs/current/runtime-config-client.html#GUC-IDLE-IN-TRANSACTION-SESSION-TIMEOUT) diff --git a/.agent/skills/supabase-postgres-best-practices/references/conn-limits.md b/.agent/skills/supabase-postgres-best-practices/references/conn-limits.md new file mode 100644 index 0000000..cb3e400 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/conn-limits.md @@ -0,0 +1,44 @@ +--- +title: Set Appropriate Connection Limits +impact: CRITICAL +impactDescription: Prevent database crashes and memory exhaustion +tags: connections, max-connections, limits, stability +--- + +## Set Appropriate Connection Limits + +Too many connections exhaust memory and degrade performance. Set limits based on available resources. + +**Incorrect (unlimited or excessive connections):** + +```sql +-- Default max_connections = 100, but often increased blindly +show max_connections; -- 500 (way too high for 4GB RAM) + +-- Each connection uses 1-3MB RAM +-- 500 connections * 2MB = 1GB just for connections! 
+-- Out of memory errors under load +``` + +**Correct (calculate based on resources):** + +```sql +-- Formula: max_connections = (RAM in MB / 5MB per connection) - reserved +-- For 4GB RAM: (4096 / 5) - 10 = ~800 theoretical max +-- But practically, 100-200 is better for query performance + +-- Recommended settings for 4GB RAM +alter system set max_connections = 100; + +-- Also set work_mem appropriately +-- work_mem * max_connections should not exceed 25% of RAM +alter system set work_mem = '8MB'; -- 8MB * 100 = 800MB max +``` + +Monitor connection usage: + +```sql +select count(*), state from pg_stat_activity group by state; +``` + +Reference: [Database Connections](https://supabase.com/docs/guides/platform/performance#connection-management) diff --git a/.agent/skills/supabase-postgres-best-practices/references/conn-pooling.md b/.agent/skills/supabase-postgres-best-practices/references/conn-pooling.md new file mode 100644 index 0000000..e2ebd58 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/conn-pooling.md @@ -0,0 +1,41 @@ +--- +title: Use Connection Pooling for All Applications +impact: CRITICAL +impactDescription: Handle 10-100x more concurrent users +tags: connection-pooling, pgbouncer, performance, scalability +--- + +## Use Connection Pooling for All Applications + +Postgres connections are expensive (1-3MB RAM each). Without pooling, applications exhaust connections under load. + +**Incorrect (new connection per request):** + +```sql +-- Each request creates a new connection +-- Application code: db.connect() per request +-- Result: 500 concurrent users = 500 connections = crashed database + +-- Check current connections +select count(*) from pg_stat_activity; -- 487 connections! 
+``` + +**Correct (connection pooling):** + +```sql +-- Use a pooler like PgBouncer between app and database +-- Application connects to pooler, pooler reuses a small pool to Postgres + +-- Configure pool_size based on: (CPU cores * 2) + spindle_count +-- Example for 4 cores: pool_size = 10 + +-- Result: 500 concurrent users share 10 actual connections +select count(*) from pg_stat_activity; -- 10 connections +``` + +Pool modes: + +- **Transaction mode**: connection returned after each transaction (best for most apps) +- **Session mode**: connection held for entire session (needed for prepared statements, temp tables) + +Reference: [Connection Pooling](https://supabase.com/docs/guides/database/connecting-to-postgres#connection-pooler) diff --git a/.agent/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md b/.agent/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md new file mode 100644 index 0000000..555547d --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/conn-prepared-statements.md @@ -0,0 +1,46 @@ +--- +title: Use Prepared Statements Correctly with Pooling +impact: HIGH +impactDescription: Avoid prepared statement conflicts in pooled environments +tags: prepared-statements, connection-pooling, transaction-mode +--- + +## Use Prepared Statements Correctly with Pooling + +Prepared statements are tied to individual database connections. In transaction-mode pooling, connections are shared, causing conflicts. 
+ +**Incorrect (named prepared statements with transaction pooling):** + +```sql +-- Named prepared statement +prepare get_user as select * from users where id = $1; + +-- In transaction mode pooling, next request may get different connection +execute get_user(123); +-- ERROR: prepared statement "get_user" does not exist +``` + +**Correct (use unnamed statements or session mode):** + +```sql +-- Option 1: Use unnamed prepared statements (most ORMs do this automatically) +-- The query is prepared and executed in a single protocol message + +-- Option 2: Deallocate after use in transaction mode +prepare get_user as select * from users where id = $1; +execute get_user(123); +deallocate get_user; + +-- Option 3: Use session mode pooling (port 5432 vs 6543) +-- Connection is held for entire session, prepared statements persist +``` + +Check your driver settings: + +```sql +-- Many drivers use prepared statements by default +-- Node.js pg: { prepare: false } to disable +-- JDBC: prepareThreshold=0 to disable +``` + +Reference: [Prepared Statements with Pooling](https://supabase.com/docs/guides/database/connecting-to-postgres#connection-pool-modes) diff --git a/.agent/skills/supabase-postgres-best-practices/references/data-batch-inserts.md b/.agent/skills/supabase-postgres-best-practices/references/data-batch-inserts.md new file mode 100644 index 0000000..997947c --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/data-batch-inserts.md @@ -0,0 +1,54 @@ +--- +title: Batch INSERT Statements for Bulk Data +impact: MEDIUM +impactDescription: 10-50x faster bulk inserts +tags: batch, insert, bulk, performance, copy +--- + +## Batch INSERT Statements for Bulk Data + +Individual INSERT statements have high overhead. Batch multiple rows in single statements or use COPY. 
+ +**Incorrect (individual inserts):** + +```sql +-- Each insert is a separate transaction and round trip +insert into events (user_id, action) values (1, 'click'); +insert into events (user_id, action) values (1, 'view'); +insert into events (user_id, action) values (2, 'click'); +-- ... 1000 more individual inserts + +-- 1000 inserts = 1000 round trips = slow +``` + +**Correct (batch insert):** + +```sql +-- Multiple rows in single statement +insert into events (user_id, action) values + (1, 'click'), + (1, 'view'), + (2, 'click'), + -- ... up to ~1000 rows per batch + (999, 'view'); + +-- One round trip for 1000 rows +``` + +For large imports, use COPY: + +```sql +-- COPY is fastest for bulk loading +copy events (user_id, action, created_at) +from '/path/to/data.csv' +with (format csv, header true); + +-- Or from stdin in application +copy events (user_id, action) from stdin with (format csv); +1,click +1,view +2,click +\. +``` + +Reference: [COPY](https://www.postgresql.org/docs/current/sql-copy.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/data-n-plus-one.md b/.agent/skills/supabase-postgres-best-practices/references/data-n-plus-one.md new file mode 100644 index 0000000..2109186 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/data-n-plus-one.md @@ -0,0 +1,53 @@ +--- +title: Eliminate N+1 Queries with Batch Loading +impact: MEDIUM-HIGH +impactDescription: 10-100x fewer database round trips +tags: n-plus-one, batch, performance, queries +--- + +## Eliminate N+1 Queries with Batch Loading + +N+1 queries execute one query per item in a loop. Batch them into a single query using arrays or JOINs. + +**Incorrect (N+1 queries):** + +```sql +-- First query: get all users +select id from users where active = true; -- Returns 100 IDs + +-- Then N queries, one per user +select * from orders where user_id = 1; +select * from orders where user_id = 2; +select * from orders where user_id = 3; +-- ... 
97 more queries! + +-- Total: 101 round trips to database +``` + +**Correct (single batch query):** + +```sql +-- Collect IDs and query once with ANY +select * from orders where user_id = any(array[1, 2, 3, ...]); + +-- Or use JOIN instead of loop +select u.id, u.name, o.* +from users u +left join orders o on o.user_id = u.id +where u.active = true; + +-- Total: 1 round trip +``` + +Application pattern: + +```sql +-- Instead of looping in application code: +-- for user in users: db.query("SELECT * FROM orders WHERE user_id = $1", user.id) + +-- Pass array parameter: +select * from orders where user_id = any($1::bigint[]); +-- Application passes: [1, 2, 3, 4, 5, ...] +``` + +Reference: [N+1 Query Problem](https://supabase.com/docs/guides/database/query-optimization) diff --git a/.agent/skills/supabase-postgres-best-practices/references/data-pagination.md b/.agent/skills/supabase-postgres-best-practices/references/data-pagination.md new file mode 100644 index 0000000..633d839 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/data-pagination.md @@ -0,0 +1,50 @@ +--- +title: Use Cursor-Based Pagination Instead of OFFSET +impact: MEDIUM-HIGH +impactDescription: Consistent O(1) performance regardless of page depth +tags: pagination, cursor, keyset, offset, performance +--- + +## Use Cursor-Based Pagination Instead of OFFSET + +OFFSET-based pagination scans all skipped rows, getting slower on deeper pages. Cursor pagination is O(1). + +**Incorrect (OFFSET pagination):** + +```sql +-- Page 1: scans 20 rows +select * from products order by id limit 20 offset 0; + +-- Page 100: scans 2000 rows to skip 1980 +select * from products order by id limit 20 offset 1980; + +-- Page 10000: scans 200,000 rows! 
+select * from products order by id limit 20 offset 199980; +``` + +**Correct (cursor/keyset pagination):** + +```sql +-- Page 1: get first 20 +select * from products order by id limit 20; +-- Application stores last_id = 20 + +-- Page 2: start after last ID +select * from products where id > 20 order by id limit 20; +-- Uses index, always fast regardless of page depth + +-- Page 10000: same speed as page 1 +select * from products where id > 199980 order by id limit 20; +``` + +For multi-column sorting: + +```sql +-- Cursor must include all sort columns +select * from products +where (created_at, id) > ('2024-01-15 10:00:00', 12345) +order by created_at, id +limit 20; +``` + +Reference: [Pagination](https://supabase.com/docs/guides/database/pagination) diff --git a/.agent/skills/supabase-postgres-best-practices/references/data-upsert.md b/.agent/skills/supabase-postgres-best-practices/references/data-upsert.md new file mode 100644 index 0000000..bc95e23 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/data-upsert.md @@ -0,0 +1,50 @@ +--- +title: Use UPSERT for Insert-or-Update Operations +impact: MEDIUM +impactDescription: Atomic operation, eliminates race conditions +tags: upsert, on-conflict, insert, update +--- + +## Use UPSERT for Insert-or-Update Operations + +Using separate SELECT-then-INSERT/UPDATE creates race conditions. Use INSERT ... ON CONFLICT for atomic upserts. + +**Incorrect (check-then-insert race condition):** + +```sql +-- Race condition: two requests check simultaneously +select * from settings where user_id = 123 and key = 'theme'; +-- Both find nothing + +-- Both try to insert +insert into settings (user_id, key, value) values (123, 'theme', 'dark'); +-- One succeeds, one fails with duplicate key error! 
+``` + +**Correct (atomic UPSERT):** + +```sql +-- Single atomic operation +insert into settings (user_id, key, value) +values (123, 'theme', 'dark') +on conflict (user_id, key) +do update set value = excluded.value, updated_at = now(); + +-- Returns the inserted/updated row +insert into settings (user_id, key, value) +values (123, 'theme', 'dark') +on conflict (user_id, key) +do update set value = excluded.value +returning *; +``` + +Insert-or-ignore pattern: + +```sql +-- Insert only if not exists (no update) +insert into page_views (page_id, user_id) +values (1, 123) +on conflict (page_id, user_id) do nothing; +``` + +Reference: [INSERT ON CONFLICT](https://www.postgresql.org/docs/current/sql-insert.html#SQL-ON-CONFLICT) diff --git a/.agent/skills/supabase-postgres-best-practices/references/lock-advisory.md b/.agent/skills/supabase-postgres-best-practices/references/lock-advisory.md new file mode 100644 index 0000000..572eaf0 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/lock-advisory.md @@ -0,0 +1,56 @@ +--- +title: Use Advisory Locks for Application-Level Locking +impact: MEDIUM +impactDescription: Efficient coordination without row-level lock overhead +tags: advisory-locks, coordination, application-locks +--- + +## Use Advisory Locks for Application-Level Locking + +Advisory locks provide application-level coordination without requiring database rows to lock. + +**Incorrect (creating rows just for locking):** + +```sql +-- Creating dummy rows to lock on +create table resource_locks ( + resource_name text primary key +); + +insert into resource_locks values ('report_generator'); + +-- Lock by selecting the row +select * from resource_locks where resource_name = 'report_generator' for update; +``` + +**Correct (advisory locks):** + +```sql +-- Session-level advisory lock (released on disconnect or unlock) +select pg_advisory_lock(hashtext('report_generator')); +-- ... do exclusive work ... 
select pg_advisory_unlock(hashtext('report_generator'));

-- Transaction-level lock (released on commit/rollback)
begin;
select pg_advisory_xact_lock(hashtext('daily_report'));
-- ... do work ...
commit; -- Lock automatically released
```

Try-lock for non-blocking operations:

```sql
-- Returns immediately with true/false instead of waiting
select pg_try_advisory_lock(hashtext('resource_name'));

-- Application pseudocode around the call:
-- if the try-lock returned true:
--     do the work, then release with
--     select pg_advisory_unlock(hashtext('resource_name'));
-- else:
--     skip this run or retry later
```

Reference: [Advisory Locks](https://www.postgresql.org/docs/current/explicit-locking.html#ADVISORY-LOCKS)

diff --git a/.agent/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md b/.agent/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md new file mode 100644 index 0000000..974da5e --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/lock-deadlock-prevention.md @@ -0,0 +1,68 @@

---
title: Prevent Deadlocks with Consistent Lock Ordering
impact: MEDIUM-HIGH
impactDescription: Eliminate deadlock errors, improve reliability
tags: deadlocks, locking, transactions, ordering
---

## Prevent Deadlocks with Consistent Lock Ordering

Deadlocks occur when transactions lock resources in different orders. Always acquire locks in a consistent order.

**Incorrect (inconsistent lock ordering):**

```sql
-- Transaction A                      -- Transaction B
begin;                                begin;
update accounts                       update accounts
set balance = balance - 100           set balance = balance - 50
where id = 1;                         where id = 2; -- B locks row 2

update accounts                       update accounts
set balance = balance + 100           set balance = balance + 50
where id = 2; -- A waits for B        where id = 1; -- B waits for A

-- DEADLOCK! Both waiting for each other
```

**Correct (lock rows in consistent order first):**

```sql
-- Explicitly acquire locks in ID order before updating
begin;
select * from accounts where id in (1, 2) order by id for update;

-- Now perform updates in any order - locks already held
update accounts set balance = balance - 100 where id = 1;
update accounts set balance = balance + 100 where id = 2;
commit;
```

Alternative: use a single statement to update atomically:

```sql
-- Single statement acquires all locks atomically
begin;
update accounts
set balance = balance + case id
  when 1 then -100
  when 2 then 100
end
where id in (1, 2);
commit;
```

Detect deadlocks in logs:

```sql
-- Check for recent deadlocks
select * from pg_stat_database where deadlocks > 0;

-- Enable deadlock logging
set log_lock_waits = on;
set deadlock_timeout = '1s';
```

Reference: [Deadlocks](https://www.postgresql.org/docs/current/explicit-locking.html#LOCKING-DEADLOCKS)

diff --git a/.agent/skills/supabase-postgres-best-practices/references/lock-short-transactions.md b/.agent/skills/supabase-postgres-best-practices/references/lock-short-transactions.md new file mode 100644 index 0000000..e6b8ef2 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/lock-short-transactions.md @@ -0,0 +1,50 @@

---
title: Keep Transactions Short to Reduce Lock Contention
impact: MEDIUM-HIGH
impactDescription: 3-5x throughput improvement, fewer deadlocks
tags: transactions, locking, contention, performance
---

## Keep Transactions Short to Reduce Lock Contention

Long-running transactions hold locks that block other queries. Keep transactions as short as possible.

**Incorrect (long transaction with external calls):**

```sql
begin;
select * from orders where id = 1 for update; -- Lock acquired

-- Application makes HTTP call to payment API (2-5 seconds)
-- Other queries on this row are blocked!
update orders set status = 'paid' where id = 1;
commit; -- Lock held for entire duration
```

**Correct (minimal transaction scope):**

```sql
-- Validate data and call APIs outside transaction
-- Application: response = await paymentAPI.charge(...)

-- Only hold lock for the actual update
begin;
update orders
set status = 'paid', payment_id = $1
where id = $2 and status = 'pending'
returning *;
commit; -- Lock held for milliseconds
```

Use `statement_timeout` to prevent runaway transactions:

```sql
-- Abort queries running longer than 30 seconds (session-wide)
set statement_timeout = '30s';

-- Or limit just the current transaction with SET LOCAL
begin;
set local statement_timeout = '5s';
-- ... statements here are subject to the 5s limit ...
commit;
```

Reference: [Transaction Management](https://www.postgresql.org/docs/current/tutorial-transactions.html)

diff --git a/.agent/skills/supabase-postgres-best-practices/references/lock-skip-locked.md b/.agent/skills/supabase-postgres-best-practices/references/lock-skip-locked.md new file mode 100644 index 0000000..77bdbb9 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/lock-skip-locked.md @@ -0,0 +1,54 @@

---
title: Use SKIP LOCKED for Non-Blocking Queue Processing
impact: MEDIUM-HIGH
impactDescription: 10x throughput for worker queues
tags: skip-locked, queue, workers, concurrency
---

## Use SKIP LOCKED for Non-Blocking Queue Processing

When multiple workers process a queue, SKIP LOCKED allows workers to process different rows without waiting.

**Incorrect (workers block each other):**

```sql
-- Worker 1 and Worker 2 both try to get next job
begin;
select * from jobs where status = 'pending' order by created_at limit 1 for update;
-- Worker 2 waits for Worker 1's lock to release!
+``` + +**Correct (SKIP LOCKED for parallel processing):** + +```sql +-- Each worker skips locked rows and gets the next available +begin; +select * from jobs +where status = 'pending' +order by created_at +limit 1 +for update skip locked; + +-- Worker 1 gets job 1, Worker 2 gets job 2 (no waiting) + +update jobs set status = 'processing' where id = $1; +commit; +``` + +Complete queue pattern: + +```sql +-- Atomic claim-and-update in one statement +update jobs +set status = 'processing', worker_id = $1, started_at = now() +where id = ( + select id from jobs + where status = 'pending' + order by created_at + limit 1 + for update skip locked +) +returning *; +``` + +Reference: [SELECT FOR UPDATE SKIP LOCKED](https://www.postgresql.org/docs/current/sql-select.html#SQL-FOR-UPDATE-SHARE) diff --git a/.agent/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md b/.agent/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md new file mode 100644 index 0000000..542978c --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/monitor-explain-analyze.md @@ -0,0 +1,45 @@ +--- +title: Use EXPLAIN ANALYZE to Diagnose Slow Queries +impact: LOW-MEDIUM +impactDescription: Identify exact bottlenecks in query execution +tags: explain, analyze, diagnostics, query-plan +--- + +## Use EXPLAIN ANALYZE to Diagnose Slow Queries + +EXPLAIN ANALYZE executes the query and shows actual timings, revealing the true performance bottlenecks. + +**Incorrect (guessing at performance issues):** + +```sql +-- Query is slow, but why? +select * from orders where customer_id = 123 and status = 'pending'; +-- "It must be missing an index" - but which one? 
+``` + +**Correct (use EXPLAIN ANALYZE):** + +```sql +explain (analyze, buffers, format text) +select * from orders where customer_id = 123 and status = 'pending'; + +-- Output reveals the issue: +-- Seq Scan on orders (cost=0.00..25000.00 rows=50 width=100) (actual time=0.015..450.123 rows=50 loops=1) +-- Filter: ((customer_id = 123) AND (status = 'pending'::text)) +-- Rows Removed by Filter: 999950 +-- Buffers: shared hit=5000 read=15000 +-- Planning Time: 0.150 ms +-- Execution Time: 450.500 ms +``` + +Key things to look for: + +```sql +-- Seq Scan on large tables = missing index +-- Rows Removed by Filter = poor selectivity or missing index +-- Buffers: read >> hit = data not cached, needs more memory +-- Nested Loop with high loops = consider different join strategy +-- Sort Method: external merge = work_mem too low +``` + +Reference: [EXPLAIN](https://supabase.com/docs/guides/database/inspect) diff --git a/.agent/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md b/.agent/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md new file mode 100644 index 0000000..d7e82f1 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/monitor-pg-stat-statements.md @@ -0,0 +1,55 @@ +--- +title: Enable pg_stat_statements for Query Analysis +impact: LOW-MEDIUM +impactDescription: Identify top resource-consuming queries +tags: pg-stat-statements, monitoring, statistics, performance +--- + +## Enable pg_stat_statements for Query Analysis + +pg_stat_statements tracks execution statistics for all queries, helping identify slow and frequent queries. + +**Incorrect (no visibility into query patterns):** + +```sql +-- Database is slow, but which queries are the problem? 
+-- No way to know without pg_stat_statements +``` + +**Correct (enable and query pg_stat_statements):** + +```sql +-- Enable the extension +create extension if not exists pg_stat_statements; + +-- Find slowest queries by total time +select + calls, + round(total_exec_time::numeric, 2) as total_time_ms, + round(mean_exec_time::numeric, 2) as mean_time_ms, + query +from pg_stat_statements +order by total_exec_time desc +limit 10; + +-- Find most frequent queries +select calls, query +from pg_stat_statements +order by calls desc +limit 10; + +-- Reset statistics after optimization +select pg_stat_statements_reset(); +``` + +Key metrics to monitor: + +```sql +-- Queries with high mean time (candidates for optimization) +select query, mean_exec_time, calls +from pg_stat_statements +where mean_exec_time > 100 -- > 100ms average +order by mean_exec_time desc; +``` + +Reference: [pg_stat_statements](https://supabase.com/docs/guides/database/extensions/pg_stat_statements) diff --git a/.agent/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md b/.agent/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md new file mode 100644 index 0000000..e0e8ea0 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/monitor-vacuum-analyze.md @@ -0,0 +1,55 @@ +--- +title: Maintain Table Statistics with VACUUM and ANALYZE +impact: MEDIUM +impactDescription: 2-10x better query plans with accurate statistics +tags: vacuum, analyze, statistics, maintenance, autovacuum +--- + +## Maintain Table Statistics with VACUUM and ANALYZE + +Outdated statistics cause the query planner to make poor decisions. VACUUM reclaims space, ANALYZE updates statistics. 
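To make the stakes concrete, here is a toy, language-neutral sketch of how a cost-based planner leans on row estimates. The constants and the cost formula are illustrative inventions, not Postgres's actual planner math:

```python
# Toy cost model: stale row estimates flip the planner's choice of scan.
SEQ_PAGE_COST = 1.0      # cost to read one table page sequentially
INDEX_ROW_COST = 4.0     # cost to fetch one row via the index (illustrative)

def choose_plan(estimated_rows: int, table_pages: int) -> str:
    """Pick the cheaper plan using the *estimated* row count,
    the way a cost-based planner would."""
    seq_cost = table_pages * SEQ_PAGE_COST
    index_cost = estimated_rows * INDEX_ROW_COST
    return "index scan" if index_cost < seq_cost else "seq scan"

# Fresh stats: 50 matching rows in a 10,000-page table -> index scan wins
print(choose_plan(estimated_rows=50, table_pages=10_000))       # index scan
# Stale stats wildly overestimate the matches -> planner picks a seq scan
print(choose_plan(estimated_rows=500_000, table_pages=10_000))  # seq scan
```

Running `analyze` is what refreshes the estimate and flips the planner back to the cheaper plan.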
+ +**Incorrect (stale statistics):** + +```sql +-- Table has 1M rows but stats say 1000 +-- Query planner chooses wrong strategy +explain select * from orders where status = 'pending'; +-- Shows: Seq Scan (because stats show small table) +-- Actually: Index Scan would be much faster +``` + +**Correct (maintain fresh statistics):** + +```sql +-- Manually analyze after large data changes +analyze orders; + +-- Analyze specific columns used in WHERE clauses +analyze orders (status, created_at); + +-- Check when tables were last analyzed +select + relname, + last_vacuum, + last_autovacuum, + last_analyze, + last_autoanalyze +from pg_stat_user_tables +order by last_analyze nulls first; +``` + +Autovacuum tuning for busy tables: + +```sql +-- Increase frequency for high-churn tables +alter table orders set ( + autovacuum_vacuum_scale_factor = 0.05, -- Vacuum at 5% dead tuples (default 20%) + autovacuum_analyze_scale_factor = 0.02 -- Analyze at 2% changes (default 10%) +); + +-- Check autovacuum status +select * from pg_stat_progress_vacuum; +``` + +Reference: [VACUUM](https://supabase.com/docs/guides/database/database-size#vacuum-operations) diff --git a/.agent/skills/supabase-postgres-best-practices/references/query-composite-indexes.md b/.agent/skills/supabase-postgres-best-practices/references/query-composite-indexes.md new file mode 100644 index 0000000..fea6452 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/query-composite-indexes.md @@ -0,0 +1,44 @@ +--- +title: Create Composite Indexes for Multi-Column Queries +impact: HIGH +impactDescription: 5-10x faster multi-column queries +tags: indexes, composite-index, multi-column, query-optimization +--- + +## Create Composite Indexes for Multi-Column Queries + +When queries filter on multiple columns, a composite index is more efficient than separate single-column indexes. 
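The leftmost-prefix rule that governs composite indexes can be sketched as a small matching check. This is a deliberate simplification: it ignores operator classes and index-only effects:

```python
def index_serves(index_cols, eq_cols, range_col=None):
    """Return True if a composite index (ordered tuple of columns) can serve
    a query with equality predicates on eq_cols plus an optional range
    predicate. Equality columns must occupy a leftmost prefix of the index
    (in any order); a range column must come immediately after them."""
    prefix = list(index_cols[:len(eq_cols)])
    if sorted(prefix) != sorted(eq_cols):
        return False
    if range_col is None:
        return True
    return len(index_cols) > len(eq_cols) and index_cols[len(eq_cols)] == range_col

idx = ("status", "created_at")
print(index_serves(idx, ["status"]))                # True
print(index_serves(idx, ["status"], "created_at"))  # True
print(index_serves(idx, [], "created_at"))          # False: violates leftmost prefix
```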
+ +**Incorrect (separate indexes require bitmap scan):** + +```sql +-- Two separate indexes +create index orders_status_idx on orders (status); +create index orders_created_idx on orders (created_at); + +-- Query must combine both indexes (slower) +select * from orders where status = 'pending' and created_at > '2024-01-01'; +``` + +**Correct (composite index):** + +```sql +-- Single composite index (leftmost column first for equality checks) +create index orders_status_created_idx on orders (status, created_at); + +-- Query uses one efficient index scan +select * from orders where status = 'pending' and created_at > '2024-01-01'; +``` + +**Column order matters** - place equality columns first, range columns last: + +```sql +-- Good: status (=) before created_at (>) +create index idx on orders (status, created_at); + +-- Works for: WHERE status = 'pending' +-- Works for: WHERE status = 'pending' AND created_at > '2024-01-01' +-- Does NOT work for: WHERE created_at > '2024-01-01' (leftmost prefix rule) +``` + +Reference: [Multicolumn Indexes](https://www.postgresql.org/docs/current/indexes-multicolumn.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/query-covering-indexes.md b/.agent/skills/supabase-postgres-best-practices/references/query-covering-indexes.md new file mode 100644 index 0000000..9d2a494 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/query-covering-indexes.md @@ -0,0 +1,40 @@ +--- +title: Use Covering Indexes to Avoid Table Lookups +impact: MEDIUM-HIGH +impactDescription: 2-5x faster queries by eliminating heap fetches +tags: indexes, covering-index, include, index-only-scan +--- + +## Use Covering Indexes to Avoid Table Lookups + +Covering indexes include all columns needed by a query, enabling index-only scans that skip the table entirely. 
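To illustrate why index-only scans are cheaper, here is a toy model of an index carrying INCLUDE-d payload columns. The class and its fetch counting are invented for this sketch:

```python
class CoveringIndex:
    """Toy covering index: stores the key plus INCLUDE-d payload columns,
    so lookups needing only those columns never touch the table ('heap')."""
    def __init__(self, rows, key, include):
        self.key, self.include = key, set(include)
        self.entries = {r[key]: {c: r[c] for c in include} for r in rows}

    def lookup(self, value, wanted):
        """Return (row, heap_fetches): 0 fetches if the index covers the query."""
        if set(wanted) <= self.include | {self.key}:
            return {self.key: value, **self.entries[value]}, 0
        return None, 1  # must visit the heap for the missing columns

rows = [{"email": "a@x.com", "name": "Ada", "created_at": "2024-01-01", "bio": "..."}]
idx = CoveringIndex(rows, key="email", include=["name", "created_at"])
print(idx.lookup("a@x.com", ["email", "name"])[1])  # covered: 0 heap fetches
print(idx.lookup("a@x.com", ["email", "bio"])[1])   # bio not included: 1 heap fetch
```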
+ +**Incorrect (index scan + heap fetch):** + +```sql +create index users_email_idx on users (email); + +-- Must fetch name and created_at from table heap +select email, name, created_at from users where email = 'user@example.com'; +``` + +**Correct (index-only scan with INCLUDE):** + +```sql +-- Include non-searchable columns in the index +create index users_email_idx on users (email) include (name, created_at); + +-- All columns served from index, no table access needed +select email, name, created_at from users where email = 'user@example.com'; +``` + +Use INCLUDE for columns you SELECT but don't filter on: + +```sql +-- Searching by status, but also need customer_id and total +create index orders_status_idx on orders (status) include (customer_id, total); + +select status, customer_id, total from orders where status = 'shipped'; +``` + +Reference: [Index-Only Scans](https://www.postgresql.org/docs/current/indexes-index-only-scans.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/query-index-types.md b/.agent/skills/supabase-postgres-best-practices/references/query-index-types.md new file mode 100644 index 0000000..93b3259 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/query-index-types.md @@ -0,0 +1,48 @@ +--- +title: Choose the Right Index Type for Your Data +impact: HIGH +impactDescription: 10-100x improvement with correct index type +tags: indexes, btree, gin, gist, brin, hash, index-types +--- + +## Choose the Right Index Type for Your Data + +Different index types excel at different query patterns. The default B-tree isn't always optimal. 
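The decision boils down to "which operators must the index serve?". A condensed, illustrative operator-to-index-type map (real support also depends on operator classes and data types):

```python
# Illustrative lookup table, condensed from the index type guide in this file.
INDEX_FOR_OPERATOR = {
    "=": ["btree", "hash"],
    "<": ["btree"],
    ">": ["btree"],
    "BETWEEN": ["btree"],
    "@>": ["gin"],    # JSONB/array containment
    "?": ["gin"],     # JSONB key existence
    "<->": ["gist"],  # nearest-neighbor distance
}

def candidate_indexes(operator: str) -> list:
    """Return index types (illustratively) able to accelerate the operator."""
    return INDEX_FOR_OPERATOR.get(operator, [])

print(candidate_indexes("@>"))  # ['gin']
print(candidate_indexes("="))   # ['btree', 'hash']
```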
+ +**Incorrect (B-tree for JSONB containment):** + +```sql +-- B-tree cannot optimize containment operators +create index products_attrs_idx on products (attributes); +select * from products where attributes @> '{"color": "red"}'; +-- Full table scan - B-tree doesn't support @> operator +``` + +**Correct (GIN for JSONB):** + +```sql +-- GIN supports @>, ?, ?&, ?| operators +create index products_attrs_idx on products using gin (attributes); +select * from products where attributes @> '{"color": "red"}'; +``` + +Index type guide: + +```sql +-- B-tree (default): =, <, >, BETWEEN, IN, IS NULL +create index users_created_idx on users (created_at); + +-- GIN: arrays, JSONB, full-text search +create index posts_tags_idx on posts using gin (tags); + +-- GiST: geometric data, range types, nearest-neighbor (KNN) queries +create index locations_idx on places using gist (location); + +-- BRIN: large time-series tables (10-100x smaller) +create index events_time_idx on events using brin (created_at); + +-- Hash: equality-only (slightly faster than B-tree for =) +create index sessions_token_idx on sessions using hash (token); +``` + +Reference: [Index Types](https://www.postgresql.org/docs/current/indexes-types.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/query-missing-indexes.md b/.agent/skills/supabase-postgres-best-practices/references/query-missing-indexes.md new file mode 100644 index 0000000..e6daace --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/query-missing-indexes.md @@ -0,0 +1,43 @@ +--- +title: Add Indexes on WHERE and JOIN Columns +impact: CRITICAL +impactDescription: 100-1000x faster queries on large tables +tags: indexes, performance, sequential-scan, query-optimization +--- + +## Add Indexes on WHERE and JOIN Columns + +Queries filtering or joining on unindexed columns cause full table scans, which become exponentially slower as tables grow. 
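The cost difference is easy to model outside the database: a sequential scan examines every row, while an index lookup touches only the matches. A toy sketch, with a dict standing in for a B-tree:

```python
def seq_scan(rows, customer_id):
    """Examine every row, the way a sequential scan must."""
    touched, hits = 0, []
    for row in rows:
        touched += 1
        if row["customer_id"] == customer_id:
            hits.append(row)
    return hits, touched

def build_index(rows):
    """Toy index: customer_id -> matching rows (a dict stands in for a B-tree)."""
    index = {}
    for row in rows:
        index.setdefault(row["customer_id"], []).append(row)
    return index

def index_lookup(index, customer_id):
    hits = index.get(customer_id, [])
    return hits, len(hits)  # only matching rows are touched

rows = [{"customer_id": i % 100, "total": i} for i in range(10_000)]
print(seq_scan(rows, 42)[1])                   # 10000 rows touched
print(index_lookup(build_index(rows), 42)[1])  # 100 rows touched
```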
+ +**Incorrect (sequential scan on large table):** + +```sql +-- No index on customer_id causes full table scan +select * from orders where customer_id = 123; + +-- EXPLAIN shows: Seq Scan on orders (cost=0.00..25000.00 rows=100 width=85) +``` + +**Correct (index scan):** + +```sql +-- Create index on frequently filtered column +create index orders_customer_id_idx on orders (customer_id); + +select * from orders where customer_id = 123; + +-- EXPLAIN shows: Index Scan using orders_customer_id_idx (cost=0.42..8.44 rows=100 width=85) +``` + +For JOIN columns, always index the foreign key side: + +```sql +-- Index the referencing column +create index orders_customer_id_idx on orders (customer_id); + +select c.name, o.total +from customers c +join orders o on o.customer_id = c.id; +``` + +Reference: [Query Optimization](https://supabase.com/docs/guides/database/query-optimization) diff --git a/.agent/skills/supabase-postgres-best-practices/references/query-partial-indexes.md b/.agent/skills/supabase-postgres-best-practices/references/query-partial-indexes.md new file mode 100644 index 0000000..3e61a34 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/query-partial-indexes.md @@ -0,0 +1,45 @@ +--- +title: Use Partial Indexes for Filtered Queries +impact: HIGH +impactDescription: 5-20x smaller indexes, faster writes and queries +tags: indexes, partial-index, query-optimization, storage +--- + +## Use Partial Indexes for Filtered Queries + +Partial indexes only include rows matching a WHERE condition, making them smaller and faster when queries consistently filter on the same condition. 
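The size win is mechanical: a partial index simply never stores the filtered-out rows. A toy sketch of a full vs partial index over soft-deleted users (names are illustrative):

```python
def build_full_index(rows, key):
    """Toy full index: one entry per row."""
    return {r[key]: r for r in rows}

def build_partial_index(rows, key, predicate):
    """Toy partial index: only rows matching the predicate are entered,
    so the structure stays small and cheaper to maintain on writes."""
    return {r[key]: r for r in rows if predicate(r)}

# 1000 users, 90% soft-deleted
rows = [{"email": f"u{i}@x.com", "deleted_at": None if i % 10 == 0 else "2024-01-01"}
        for i in range(1_000)]
full = build_full_index(rows, "email")
active_only = build_partial_index(rows, "email", lambda r: r["deleted_at"] is None)
print(len(full))         # 1000 entries
print(len(active_only))  # 100 entries - 10x smaller
```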
+ +**Incorrect (full index includes irrelevant rows):** + +```sql +-- Index includes all rows, even soft-deleted ones +create index users_email_idx on users (email); + +-- Query always filters active users +select * from users where email = 'user@example.com' and deleted_at is null; +``` + +**Correct (partial index matches query filter):** + +```sql +-- Index only includes active users +create index users_active_email_idx on users (email) +where deleted_at is null; + +-- Query uses the smaller, faster index +select * from users where email = 'user@example.com' and deleted_at is null; +``` + +Common use cases for partial indexes: + +```sql +-- Only pending orders (status rarely changes once completed) +create index orders_pending_idx on orders (created_at) +where status = 'pending'; + +-- Only non-null values +create index products_sku_idx on products (sku) +where sku is not null; +``` + +Reference: [Partial Indexes](https://www.postgresql.org/docs/current/indexes-partial.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-constraints.md b/.agent/skills/supabase-postgres-best-practices/references/schema-constraints.md new file mode 100644 index 0000000..1d2ef8f --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/schema-constraints.md @@ -0,0 +1,80 @@ +--- +title: Add Constraints Safely in Migrations +impact: HIGH +impactDescription: Prevents migration failures and enables idempotent schema changes +tags: constraints, migrations, schema, alter-table +--- + +## Add Constraints Safely in Migrations + +PostgreSQL does not support `ADD CONSTRAINT IF NOT EXISTS`. Migrations using this syntax will fail. 
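If migrations are generated from application code, the DO-block pattern can be templated. A sketch of a hypothetical helper — the function name is an assumption, and the naive string interpolation is only safe for trusted, hard-coded identifiers:

```python
def idempotent_add_constraint(table: str, name: str, definition: str) -> str:
    """Hypothetical migration helper: emit a DO block that adds a constraint
    only when no constraint with that name exists on the table, since
    PostgreSQL has no ADD CONSTRAINT IF NOT EXISTS. Inputs must be trusted,
    hard-coded identifiers (no user input)."""
    return f"""do $$
begin
  if not exists (
    select 1 from pg_constraint
    where conname = '{name}'
      and conrelid = '{table}'::regclass
  ) then
    alter table {table} add constraint {name} {definition};
  end if;
end $$;"""

sql = idempotent_add_constraint(
    "public.profiles",
    "profiles_birthchart_id_unique",
    "unique (birthchart_id)",
)
print("if not exists" in sql)  # True
```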
+ +**Incorrect (causes syntax error):** + +```sql +-- ERROR: syntax error at or near "not" (SQLSTATE 42601) +alter table public.profiles +add constraint if not exists profiles_birthchart_id_unique unique (birthchart_id); +``` + +**Correct (idempotent constraint creation):** + +```sql +-- Use DO block to check before adding +do $$ +begin + if not exists ( + select 1 from pg_constraint + where conname = 'profiles_birthchart_id_unique' + and conrelid = 'public.profiles'::regclass + ) then + alter table public.profiles + add constraint profiles_birthchart_id_unique unique (birthchart_id); + end if; +end $$; +``` + +For all constraint types: + +```sql +-- Check constraints +do $$ +begin + if not exists ( + select 1 from pg_constraint + where conname = 'check_age_positive' + ) then + alter table users add constraint check_age_positive check (age > 0); + end if; +end $$; + +-- Foreign keys +do $$ +begin + if not exists ( + select 1 from pg_constraint + where conname = 'profiles_birthchart_id_fkey' + ) then + alter table profiles + add constraint profiles_birthchart_id_fkey + foreign key (birthchart_id) references birthcharts(id); + end if; +end $$; +``` + +Check if constraint exists: + +```sql +-- Query to check constraint existence +select conname, contype, pg_get_constraintdef(oid) +from pg_constraint +where conrelid = 'public.profiles'::regclass; + +-- contype values: +-- 'p' = PRIMARY KEY +-- 'f' = FOREIGN KEY +-- 'u' = UNIQUE +-- 'c' = CHECK +``` + +Reference: [Constraints](https://www.postgresql.org/docs/current/ddl-constraints.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-data-types.md b/.agent/skills/supabase-postgres-best-practices/references/schema-data-types.md new file mode 100644 index 0000000..f253a58 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/schema-data-types.md @@ -0,0 +1,46 @@ +--- +title: Choose Appropriate Data Types +impact: HIGH +impactDescription: 50% storage reduction, faster 
comparisons +tags: data-types, schema, storage, performance +--- + +## Choose Appropriate Data Types + +Using the right data types reduces storage, improves query performance, and prevents bugs. + +**Incorrect (wrong data types):** + +```sql +create table users ( + id int, -- Will overflow at 2.1 billion + email varchar(255), -- Unnecessary length limit + created_at timestamp, -- Missing timezone info + is_active varchar(5), -- String for boolean + price varchar(20) -- String for numeric +); +``` + +**Correct (appropriate data types):** + +```sql +create table users ( + id bigint generated always as identity primary key, -- 9 quintillion max + email text, -- No artificial limit, same performance as varchar + created_at timestamptz, -- Always store timezone-aware timestamps + is_active boolean default true, -- 1 byte vs variable string length + price numeric(10,2) -- Exact decimal arithmetic +); +``` + +Key guidelines: + +```sql +-- IDs: use bigint, not int (future-proofing) +-- Strings: use text, not varchar(n) unless constraint needed +-- Time: use timestamptz, not timestamp +-- Money: use numeric, not float (precision matters) +-- Enums: use text with check constraint or create enum type +``` + +Reference: [Data Types](https://www.postgresql.org/docs/current/datatype.html) diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md b/.agent/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md new file mode 100644 index 0000000..6c3d6ff --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/schema-foreign-key-indexes.md @@ -0,0 +1,59 @@ +--- +title: Index Foreign Key Columns +impact: HIGH +impactDescription: 10-100x faster JOINs and CASCADE operations +tags: foreign-key, indexes, joins, schema +--- + +## Index Foreign Key Columns + +Postgres does not automatically index foreign key columns. Missing indexes cause slow JOINs and CASCADE operations. 
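The detection logic is a simple set difference, sketched here over plain dicts (a simplification of the catalog query: it assumes the application has already harvested FK and index columns from pg_constraint / pg_index):

```python
def missing_fk_indexes(foreign_keys, indexed_columns):
    """Report foreign key columns that no index covers.
    foreign_keys: {table: [fk_column, ...]}
    indexed_columns: {table: [indexed_column, ...]}"""
    missing = []
    for table, fk_cols in foreign_keys.items():
        covered = set(indexed_columns.get(table, []))
        missing.extend((table, col) for col in fk_cols if col not in covered)
    return missing

fks = {"orders": ["customer_id"], "order_items": ["order_id", "product_id"]}
idx = {"orders": ["customer_id"], "order_items": ["order_id"]}
print(missing_fk_indexes(fks, idx))  # [('order_items', 'product_id')]
```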
+ +**Incorrect (unindexed foreign key):** + +```sql +create table orders ( + id bigint generated always as identity primary key, + customer_id bigint references customers(id) on delete cascade, + total numeric(10,2) +); + +-- No index on customer_id! +-- JOINs and ON DELETE CASCADE both require full table scan +select * from orders where customer_id = 123; -- Seq Scan +delete from customers where id = 123; -- Locks table, scans all orders +``` + +**Correct (indexed foreign key):** + +```sql +create table orders ( + id bigint generated always as identity primary key, + customer_id bigint references customers(id) on delete cascade, + total numeric(10,2) +); + +-- Always index the FK column +create index orders_customer_id_idx on orders (customer_id); + +-- Now JOINs and cascades are fast +select * from orders where customer_id = 123; -- Index Scan +delete from customers where id = 123; -- Uses index, fast cascade +``` + +Find missing FK indexes: + +```sql +select + conrelid::regclass as table_name, + a.attname as fk_column +from pg_constraint c +join pg_attribute a on a.attrelid = c.conrelid and a.attnum = any(c.conkey) +where c.contype = 'f' + and not exists ( + select 1 from pg_index i + where i.indrelid = c.conrelid and a.attnum = any(i.indkey) + ); +``` + +Reference: [Foreign Keys](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-FK) diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md b/.agent/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md new file mode 100644 index 0000000..f007294 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/schema-lowercase-identifiers.md @@ -0,0 +1,55 @@ +--- +title: Use Lowercase Identifiers for Compatibility +impact: MEDIUM +impactDescription: Avoid case-sensitivity bugs with tools, ORMs, and AI assistants +tags: naming, identifiers, case-sensitivity, schema, conventions +--- + +## Use Lowercase 
Identifiers for Compatibility + +PostgreSQL folds unquoted identifiers to lowercase. Quoted mixed-case identifiers require quotes forever and cause issues with tools, ORMs, and AI assistants that may not recognize them. + +**Incorrect (mixed-case identifiers):** + +```sql +-- Quoted identifiers preserve case but require quotes everywhere +CREATE TABLE "Users" ( + "userId" bigint PRIMARY KEY, + "firstName" text, + "lastName" text +); + +-- Must always quote or queries fail +SELECT "firstName" FROM "Users" WHERE "userId" = 1; + +-- This fails - Users becomes users without quotes +SELECT firstName FROM Users; +-- ERROR: relation "users" does not exist +``` + +**Correct (lowercase snake_case):** + +```sql +-- Unquoted lowercase identifiers are portable and tool-friendly +CREATE TABLE users ( + user_id bigint PRIMARY KEY, + first_name text, + last_name text +); + +-- Works without quotes, recognized by all tools +SELECT first_name FROM users WHERE user_id = 1; +``` + +Common sources of mixed-case identifiers: + +```sql +-- ORMs often generate quoted camelCase - configure them to use snake_case +-- Migrations from other databases may preserve original casing +-- Some GUI tools quote identifiers by default - disable this + +-- If stuck with mixed-case, create views as a compatibility layer +CREATE VIEW users AS SELECT "userId" AS user_id, "firstName" AS first_name FROM "Users"; +``` + +Reference: [Identifiers and Key Words](https://www.postgresql.org/docs/current/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS) diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-partitioning.md b/.agent/skills/supabase-postgres-best-practices/references/schema-partitioning.md new file mode 100644 index 0000000..13137a0 --- /dev/null +++ b/.agent/skills/supabase-postgres-best-practices/references/schema-partitioning.md @@ -0,0 +1,55 @@ +--- +title: Partition Large Tables for Better Performance +impact: MEDIUM-HIGH +impactDescription: 5-20x faster queries and 
+tags: partitioning, large-tables, time-series, performance
+---
+
+## Partition Large Tables for Better Performance
+
+Partitioning splits a large table into smaller pieces, improving query performance and maintenance operations.
+
+**Incorrect (single large table):**
+
+```sql
+create table events (
+  id bigint generated always as identity,
+  created_at timestamptz,
+  data jsonb
+);
+
+-- 500M rows, queries scan everything
+select * from events where created_at > '2024-01-01'; -- Slow
+vacuum events; -- Takes hours on a table this size
+```
+
+**Correct (partitioned by time range):**
+
+```sql
+create table events (
+  id bigint generated always as identity,
+  created_at timestamptz not null,
+  data jsonb
+) partition by range (created_at);
+
+-- Create partitions for each month
+create table events_2024_01 partition of events
+  for values from ('2024-01-01') to ('2024-02-01');
+
+create table events_2024_02 partition of events
+  for values from ('2024-02-01') to ('2024-03-01');
+
+-- Queries only scan relevant partitions
+select * from events where created_at > '2024-01-15'; -- Only scans events_2024_01 and later
+
+-- Drop old data instantly
+drop table events_2023_01; -- Instant vs DELETE taking hours
+```
+
+When to partition:
+
+- Tables > 100M rows
+- Time-series data with date-based queries
+- Need to efficiently drop old data
+
+Reference: [Table Partitioning](https://www.postgresql.org/docs/current/ddl-partitioning.html)
diff --git a/.agent/skills/supabase-postgres-best-practices/references/schema-primary-keys.md b/.agent/skills/supabase-postgres-best-practices/references/schema-primary-keys.md
new file mode 100644
index 0000000..fb0fbb1
--- /dev/null
+++ b/.agent/skills/supabase-postgres-best-practices/references/schema-primary-keys.md
@@ -0,0 +1,61 @@
+---
+title: Select Optimal Primary Key Strategy
+impact: HIGH
+impactDescription: Better index locality, reduced fragmentation
+tags: primary-key, identity, uuid, serial, schema
+---
+
+## Select Optimal Primary Key Strategy
+
+Primary key choice affects insert performance, index size, and replication efficiency.
+
+**Incorrect (problematic PK choices):**
+
+```sql
+-- serial predates the SQL-standard IDENTITY syntax
+create table users (
+  id serial primary key -- Works, but IDENTITY is recommended
+);
+
+-- Random UUIDs (v4) cause index fragmentation
+create table orders (
+  id uuid default gen_random_uuid() primary key -- UUIDv4 = random = scattered inserts
+);
+```
+
+**Correct (optimal PK strategies):**
+
+```sql
+-- Use IDENTITY for sequential IDs (SQL-standard, best for most cases)
+create table users (
+  id bigint generated always as identity primary key
+);
+
+-- For distributed systems needing UUIDs, use UUIDv7 (time-ordered)
+-- Requires pg_uuidv7 extension: create extension pg_uuidv7;
+create table orders (
+  id uuid default uuid_generate_v7() primary key -- Time-ordered, no fragmentation
+);
+
+-- Alternative: time-prefixed IDs for sortable, distributed IDs (no extension needed)
+create table events (
+  id text default concat(
+    to_char(now() at time zone 'utc', 'YYYYMMDDHH24MISSMS'),
+    gen_random_uuid()::text
+  ) primary key
+);
+```
+
+Guidelines:
+
+- Single database: `bigint identity` (sequential, 8 bytes, SQL-standard)
+- Distributed/exposed IDs: UUIDv7 (requires pg_uuidv7) or ULID (time-ordered, no fragmentation)
+- `serial` works but `identity` is SQL-standard and preferred for new applications
+- Avoid random UUIDs (v4) as primary keys on large tables (causes index fragmentation)
+
+Reference:
+[Identity Columns](https://www.postgresql.org/docs/current/sql-createtable.html#SQL-CREATETABLE-PARMS-GENERATED-IDENTITY)
diff --git a/.agent/skills/supabase-postgres-best-practices/references/security-privileges.md b/.agent/skills/supabase-postgres-best-practices/references/security-privileges.md
new file mode 100644
index 0000000..448ec34
--- /dev/null
+++ b/.agent/skills/supabase-postgres-best-practices/references/security-privileges.md
@@ -0,0 +1,54 @@
+---
+title: Apply Principle of Least Privilege
+impact: MEDIUM
+impactDescription: Reduced attack surface, better audit trail
+tags: privileges, security, roles, permissions
+---
+
+## Apply Principle of Least Privilege
+
+Grant only the minimum permissions required. Never use superuser for application queries.
+
+**Incorrect (overly broad permissions):**
+
+```sql
+-- Application uses superuser connection
+-- Or grants ALL to application role
+grant all privileges on all tables in schema public to app_user;
+grant all privileges on all sequences in schema public to app_user;
+
+-- Any SQL injection becomes catastrophic
+-- drop table users; cascades to everything
+```
+
+**Correct (minimal, specific grants):**
+
+```sql
+-- Create role with no default privileges
+create role app_readonly nologin;
+
+-- Grant only SELECT on specific tables
+grant usage on schema public to app_readonly;
+grant select on public.products, public.categories to app_readonly;
+
+-- Create role for writes with limited scope
+create role app_writer nologin;
+grant usage on schema public to app_writer;
+grant select, insert, update on public.orders to app_writer;
+grant usage on sequence orders_id_seq to app_writer;
+-- No DELETE permission
+
+-- Login role inherits from these
+create role app_user login password 'xxx';
+grant app_writer to app_user;
+```
+
+Revoke public defaults:
+
+```sql
+-- Revoke default public access
+revoke all on schema public from public;
+revoke all on all tables in schema public from public;
+```
+
+Reference: [Roles and Privileges](https://supabase.com/blog/postgres-roles-and-privileges)
diff --git a/.agent/skills/supabase-postgres-best-practices/references/security-rls-basics.md b/.agent/skills/supabase-postgres-best-practices/references/security-rls-basics.md
new file mode 100644
index 0000000..c61e1a8
--- /dev/null
+++ b/.agent/skills/supabase-postgres-best-practices/references/security-rls-basics.md
@@ -0,0 +1,50 @@
+---
+title: Enable Row Level Security for Multi-Tenant Data
+impact: CRITICAL
+impactDescription: Database-enforced tenant isolation, prevent data leaks
+tags: rls, row-level-security, multi-tenant, security
+---
+
+## Enable Row Level Security for Multi-Tenant Data
+
+Row Level Security (RLS) enforces data access at the database level, ensuring users only see their own data.
+
+**Incorrect (application-level filtering only):**
+
+```sql
+-- Relying only on application to filter
+select * from orders where user_id = $current_user_id;
+
+-- Bug or bypass means all data is exposed!
+select * from orders; -- Returns ALL orders
+```
+
+**Correct (database-enforced RLS):**
+
+```sql
+-- Enable RLS on the table
+alter table orders enable row level security;
+
+-- Create policy for users to see only their orders
+create policy orders_user_policy on orders
+  for all
+  using (user_id = current_setting('app.current_user_id')::bigint);
+
+-- Force RLS even for table owners
+alter table orders force row level security;
+
+-- Set user context and query
+set app.current_user_id = '123';
+select * from orders; -- Only returns orders for user 123
+```
+
+Policy for the Supabase `authenticated` role (here `user_id` is a `uuid` column matching `auth.users.id`):
+
+```sql
+create policy orders_user_policy on orders
+  for all
+  to authenticated
+  using (user_id = auth.uid());
+```
+
+Reference: [Row Level Security](https://supabase.com/docs/guides/database/postgres/row-level-security)
diff --git a/.agent/skills/supabase-postgres-best-practices/references/security-rls-performance.md b/.agent/skills/supabase-postgres-best-practices/references/security-rls-performance.md
new file mode 100644
index 0000000..b32d92f
--- /dev/null
+++ b/.agent/skills/supabase-postgres-best-practices/references/security-rls-performance.md
@@ -0,0 +1,57 @@
+---
+title: Optimize RLS Policies for Performance
+impact: HIGH
+impactDescription: 5-10x faster RLS queries with proper patterns
+tags: rls, performance, security, optimization
+---
+
+## Optimize RLS Policies for Performance
+
+Poorly written RLS policies can cause severe performance issues. Use subqueries and indexes strategically.
+
+**Incorrect (function called for every row):**
+
+```sql
+create policy orders_policy on orders
+  using (auth.uid() = user_id); -- auth.uid() called per row!
+
+-- With 1M rows, auth.uid() is called 1M times
+```
+
+**Correct (wrap functions in SELECT):**
+
+```sql
+create policy orders_policy on orders
+  using ((select auth.uid()) = user_id); -- Called once, cached
+
+-- 100x+ faster on large tables
+```
+
+Use security definer functions for complex checks:
+
+```sql
+-- Create helper function (runs as definer, bypasses RLS)
+create or replace function is_team_member(team_id bigint)
+returns boolean
+language sql
+security definer
+set search_path = ''
+as $$
+  select exists (
+    select 1 from public.team_members
+    -- qualify team_id: it is both a column and the parameter name
+    where team_members.team_id = $1 and user_id = (select auth.uid())
+  );
+$$;
+
+-- Use in policy (indexed lookup, not per-row check)
+create policy team_orders_policy on orders
+  using ((select is_team_member(team_id)));
+```
+
+Always add indexes on columns used in RLS policies:
+
+```sql
+create index orders_user_id_idx on orders (user_id);
+```
+
+Reference: [RLS Performance](https://supabase.com/docs/guides/database/postgres/row-level-security#rls-performance-recommendations)
diff --git a/.agent/skills/tanstack-router-best-practices/SKILL.md b/.agent/skills/tanstack-router-best-practices/SKILL.md
new file mode 100644
index 0000000..fd76142
--- /dev/null
+++ b/.agent/skills/tanstack-router-best-practices/SKILL.md
@@ -0,0 +1,113 @@
+---
+name: tanstack-router-best-practices
+description: TanStack Router best practices for type-safe routing, data loading, search params, and navigation. Activate when building React applications with complex routing needs.
+---
+
+# TanStack Router Best Practices
+
+Comprehensive guidelines for implementing TanStack Router patterns in React applications. These rules optimize type safety, data loading, navigation, and code organization.
+
+## When to Apply
+
+- Setting up application routing
+- Creating new routes and layouts
+- Implementing search parameter handling
+- Configuring data loaders
+- Setting up code splitting
+- Integrating with TanStack Query
+- Refactoring navigation patterns
+
+## Rule Categories by Priority
+
+| Priority | Category | Rules | Impact |
+|----------|----------|-------|--------|
+| CRITICAL | Type Safety | 4 rules | Prevents runtime errors and enables refactoring |
+| CRITICAL | Route Organization | 5 rules | Ensures maintainable route structure |
+| HIGH | Router Config | 1 rule | Global router defaults |
+| HIGH | Data Loading | 6 rules | Optimizes data fetching and caching |
+| HIGH | Search Params | 5 rules | Enables type-safe URL state |
+| HIGH | Error Handling | 1 rule | Handles 404 and errors gracefully |
+| MEDIUM | Navigation | 5 rules | Improves UX and accessibility |
+| MEDIUM | Code Splitting | 3 rules | Reduces bundle size |
+| MEDIUM | Preloading | 3 rules | Improves perceived performance |
+| LOW | Route Context | 3 rules | Enables dependency injection |
+
+## Quick Reference
+
+### Type Safety (Prefix: `ts-`)
+
+- `ts-register-router` — Register router type for global inference
+- `ts-use-from-param` — Use `from` parameter for type narrowing
+- `ts-route-context-typing` — Type route context with createRootRouteWithContext
+- `ts-query-options-loader` — Use queryOptions in loaders for type inference
+
+### Router Config (Prefix: `router-`)
+
+- `router-default-options` — Configure router defaults (scrollRestoration, defaultErrorComponent, etc.)
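As an illustrative sketch (not taken from the rule files themselves), the first CRITICAL rule, `ts-register-router`, amounts to registering your router instance so route types flow through the whole app; the file names here are assumptions:

```ts
// src/router.ts - create the router and register its type globally
import { createRouter } from '@tanstack/react-router'
import { routeTree } from './routeTree.gen' // generated by file-based routing

export const router = createRouter({ routeTree })

// Module augmentation: after this, Link, useNavigate, useParams, etc.
// are type-checked against the actual route tree across the whole app.
declare module '@tanstack/react-router' {
  interface Register {
    router: typeof router
  }
}
```

With this in place, a typo such as `<Link to="/psots">` becomes a compile-time error instead of a runtime 404.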
+
+### Route Organization (Prefix: `org-`)
+
+- `org-file-based-routing` — Prefer file-based routing for conventions
+- `org-route-tree-structure` — Follow hierarchical route tree patterns
+- `org-pathless-layouts` — Use pathless routes for shared layouts
+- `org-index-routes` — Understand index vs layout routes
+- `org-virtual-routes` — Understand virtual file routes
+
+### Data Loading (Prefix: `load-`)
+
+- `load-use-loaders` — Use route loaders for data fetching
+- `load-loader-deps` — Define loaderDeps for cache control
+- `load-ensure-query-data` — Use ensureQueryData with TanStack Query
+- `load-deferred-data` — Split critical and non-critical data
+- `load-error-handling` — Handle loader errors appropriately
+- `load-parallel` — Leverage parallel route loading
+
+### Search Params (Prefix: `search-`)
+
+- `search-validation` — Always validate search params
+- `search-type-inheritance` — Leverage parent search param types
+- `search-middleware` — Use search param middleware
+- `search-defaults` — Provide sensible defaults
+- `search-custom-serializer` — Configure custom search param serializers
+
+### Error Handling (Prefix: `err-`)
+
+- `err-not-found` — Handle not-found routes properly
+
+### Navigation (Prefix: `nav-`)
+
+- `nav-link-component` — Prefer Link component for navigation
+- `nav-active-states` — Configure active link states
+- `nav-use-navigate` — Use useNavigate for programmatic navigation
+- `nav-relative-paths` — Understand relative path navigation
+- `nav-route-masks` — Use route masks for modal URLs
+
+### Code Splitting (Prefix: `split-`)
+
+- `split-lazy-routes` — Use .lazy.tsx for code splitting
+- `split-critical-path` — Keep critical config in main route file
+- `split-auto-splitting` — Enable autoCodeSplitting when possible
+
+### Preloading (Prefix: `preload-`)
+
+- `preload-intent` — Enable intent-based preloading
+- `preload-stale-time` — Configure preload stale time
+- `preload-manual` — Use manual preloading strategically
+
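The preloading rules compose at router creation time. A minimal sketch of `preload-intent` and `preload-stale-time` together (the option values are illustrative, not prescriptions):

```ts
import { createRouter } from '@tanstack/react-router'
import { routeTree } from './routeTree.gen' // generated by file-based routing

export const router = createRouter({
  routeTree,
  // Start loading a route's code and data on hover/focus ("intent")
  defaultPreload: 'intent',
  // Reuse a preloaded result for up to 30s before preloading again
  defaultPreloadStaleTime: 30_000,
})
```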
+### Route Context (Prefix: `ctx-`)
+
+- `ctx-root-context` — Define context at root route
+- `ctx-before-load` — Extend context in beforeLoad
+- `ctx-dependency-injection` — Use context for dependency injection
+
+## How to Use
+
+Each rule file in the `rules/` directory contains:
+
+1. **Explanation** — Why this pattern matters
+2. **Bad Example** — Anti-pattern to avoid
+3. **Good Example** — Recommended implementation
+4. **Context** — When to apply or skip this rule
+
+## Full Reference
+
+See the individual rule files in the `rules/` directory for detailed guidance and code examples.
diff --git a/.agent/skills/tanstack-router-best-practices/rules/ctx-root-context.md b/.agent/skills/tanstack-router-best-practices/rules/ctx-root-context.md
new file mode 100644
index 0000000..ef7648c
--- /dev/null
+++ b/.agent/skills/tanstack-router-best-practices/rules/ctx-root-context.md
@@ -0,0 +1,172 @@
+# ctx-root-context: Define Context at Root Route
+
+## Priority: LOW
+
+## Explanation
+
+Use `createRootRouteWithContext` to define typed context that flows through your entire route tree. This enables dependency injection for things like query clients, auth state, and services.
+
+## Bad Example
+
+```tsx
+// No context - importing globals directly
+// routes/__root.tsx
+import { createRootRoute } from '@tanstack/react-router'
+import { queryClient } from '@/lib/query-client' // Global import
+
+export const Route = createRootRoute({
+  component: RootComponent,
+})
+
+// routes/posts.tsx
+import { queryClient } from '@/lib/query-client' // Import again
+
+export const Route = createFileRoute('/posts')({
+  loader: async () => {
+    // Using global - harder to test, couples to implementation
+    return queryClient.ensureQueryData(postQueries.list())
+  },
+})
+```
+
+## Good Example
+
+```tsx
+// routes/__root.tsx
+import { createRootRouteWithContext, Outlet } from '@tanstack/react-router'
+import { QueryClient } from '@tanstack/react-query'
+
+// Define the context interface
+interface RouterContext {
+  queryClient: QueryClient
+  auth: {
+    user: User | null
+    isAuthenticated: boolean
+  }
+}
+
+export const Route = createRootRouteWithContext<RouterContext>()({
+  component: RootComponent,
+})
+
+function RootComponent() {
+  return (
+    <>
+      <Outlet />
+    </>
+  )
+}
+```
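The Good Example continues beyond this excerpt. To show where the pattern pays off, here is a hedged sketch of a child route consuming the typed context; it assumes the `postQueries` helper from the Bad Example lives in a module such as `@/queries/posts`, and a router created with `context: { queryClient, auth }`:

```ts
// routes/posts.ts - context is injected and fully typed, no global imports
import { createFileRoute } from '@tanstack/react-router'
import { postQueries } from '@/queries/posts' // assumed helper module

export const Route = createFileRoute('/posts')({
  loader: ({ context }) => {
    // context.queryClient comes from the RouterContext interface,
    // so swapping in a test QueryClient requires no mocking of globals
    return context.queryClient.ensureQueryData(postQueries.list())
  },
})
```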