# How to Reuse Code Snippets From AI Conversations

Last week you asked Claude to write a debounce hook in React. It gave you a clean implementation with TypeScript generics and proper cleanup. You used it, moved on, and closed the tab.
Today you need the same hook in a different project. You know it exists somewhere. You also know you're going to spend more time looking for it than it would take to re-ask. So you open a new chat, re-describe the requirements, and get a slightly different implementation that you'll need to re-test.
This cycle (generate, use, lose, re-generate) is the default workflow for AI-generated code. It's wasteful, and it produces inconsistent results because each re-generation is a roll of the dice.
## Why AI-generated code is harder to save than you'd think
Regular code snippets are easy to save. You write them in your editor, they live in your codebase, version control handles the rest. Stack Overflow answers get bookmarked. Documentation gets linked.
AI-generated code doesn't land in any of these places naturally. It exists in a chat interface, a format designed for conversation, not storage. The code is mixed in with your prompt, the AI's explanation, your follow-up questions, and iterations. Extracting just the final, working version requires manual effort that breaks your flow.
Copy-pasting into a file works once. It doesn't work the 50th time, when you need to decide where to put it, how to name it, and how to find it again among the other 49 snippets you've saved.
Screenshots are worse. You can't copy code from an image. You can't search text inside a PNG. A folder of code screenshots is a graveyard.
## What's worth saving (and what isn't)
Not every code snippet deserves a permanent home. The ones that do share a trait: you'd write them approximately the same way next time.
Save these:
Utility functions and hooks: debounce, throttle, custom React hooks, data transformation helpers. These are small, self-contained, and reusable across projects. A good useDebounce hook works the same whether you're building a search bar or an auto-save feature.
Configuration patterns: Docker compose files, ESLint configs, TypeScript compiler options, CI/CD pipeline definitions. You tweak these per project, but the base pattern is the same. Having a "starter Dockerfile for Node + Postgres" saves 10 minutes every time.
Complex queries: SQL joins, aggregation pipelines, Prisma/Drizzle patterns with specific relationship structures. These are the snippets where getting the syntax right takes multiple iterations. Save the final version, not the journey.
Regex patterns: because no one remembers regex. The email validator, the URL parser, the semver matcher. Save them with a description of what they match, along with test cases.
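For instance, a saved regex entry can bundle the pattern with a tiny parser and its test cases. The sketch below uses a deliberately simplified semver matcher (major.minor.patch with an optional pre-release tag), not the full spec-compliant pattern from semver.org:

```typescript
// Sketch: a saved regex snippet with its test cases attached.
// Simplified semver matcher -- not the full spec-compliant pattern.
const SEMVER = /^(\d+)\.(\d+)\.(\d+)(?:-([0-9A-Za-z-.]+))?$/;

function parseSemver(version: string) {
  const m = SEMVER.exec(version);
  if (!m) return null;
  return {
    major: Number(m[1]),
    minor: Number(m[2]),
    patch: Number(m[3]),
    prerelease: m[4] ?? null,
  };
}

// The test cases double as documentation of what the pattern accepts.
console.log(parseSemver('1.2.3'));         // → { major: 1, minor: 2, patch: 3, prerelease: null }
console.log(parseSemver('2.0.0-beta.1'));  // → prerelease: 'beta.1'
console.log(parseSemver('not-a-version')); // → null
```

Saving the tests next to the pattern means future-you can verify it still does what the title claims before pasting it anywhere.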
Error handling patterns: retry logic with exponential backoff, error boundary implementations, graceful degradation patterns. These are easy to get subtly wrong and worth saving once they're right.
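As an illustration of the retry category, a minimal backoff helper might look like this. The function name, options, and defaults are illustrative, not a canonical implementation:

```typescript
// Sketch: retry an async operation with exponential backoff.
// API shape and defaults are illustrative; adapt per project.
async function withRetry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseDelayMs = 100 } = {}
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Delay doubles each attempt: 100ms, 200ms, 400ms, ...
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```

The subtle parts (rethrowing the last error, not sleeping after the final attempt) are exactly what's easy to get wrong on a rewrite, which is why this category earns a permanent home.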
Don't save these:
One-off fixes for specific bugs (too contextual), basic syntax examples you could Google faster than searching your library, and anything the AI got wrong that you had to heavily modify (save your final version, not the AI's attempt).
## A snippet format that actually works
The failure mode for most snippet collections is that they become unsearchable. You save 100 snippets and can't find any of them because they're titled "useEffect thing" and "SQL query 2."
Every saved snippet needs three things: a descriptive title, tags for filtering, and the context of when to use it.
## useDebounce: React Hook with TypeScript Generics
**Tags:** react, hooks, typescript, debounce
**Source:** Claude
**Works with:** React 18+, TypeScript 5+
**When to use:** Any input that triggers expensive operations (search, API calls, calculations)
```typescript
import { useState, useEffect, useRef } from 'react';

function useDebounce<T>(value: T, delay: number): T {
  const [debounced, setDebounced] = useState(value);
  const timerRef = useRef<ReturnType<typeof setTimeout>>();

  useEffect(() => {
    // Restart the timer whenever the value (or delay) changes.
    timerRef.current = setTimeout(() => setDebounced(value), delay);
    return () => clearTimeout(timerRef.current);
  }, [value, delay]);

  return debounced;
}

export default useDebounce;
```
**Usage example:**
```typescript
const searchTerm = useDebounce(inputValue, 300);

useEffect(() => {
  if (searchTerm) fetchResults(searchTerm);
}, [searchTerm]);
```
**Notes:** Cleanup prevents stale updates on unmount.
For callbacks instead of values, use useCallback + setTimeout directly.
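The callback variant mentioned in the notes doesn't have to be a hook at all. A framework-agnostic sketch (names are illustrative) that a React component would then wrap in useCallback or store in a ref:

```typescript
// Sketch: debounce a callback rather than a value.
// Framework-agnostic; in React, keep the returned function's identity
// stable (useCallback or a ref) so renders don't reset the timer logic.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  delayMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    // Each call cancels the pending one; only the last call fires.
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

Both versions belong in the same library card, since future-you will find one and want the other.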
The "When to use" field is the most overlooked and most valuable. Future-you won't search for "useDebounce." Future-you will search for "delay search input" or "prevent too many API calls." The context bridges the gap between how you remember the problem and how you stored the solution.
## Organizing by pattern, not by language
If your snippet library is organized by language ("JavaScript," "Python," "SQL"), you'll outgrow it fast. A better approach: organize by what the code does.
- Data Fetching: API clients, retry logic, caching patterns, pagination
- State Management: hooks, stores, reducers, derived state patterns
- Authentication: OAuth flows, JWT handling, session management
- Database: queries, migrations, schema patterns, indexing strategies
- Testing: test utilities, mock factories, fixture generators
- DevOps: Docker configs, CI/CD pipelines, deployment scripts
- UI Patterns: form validation, infinite scroll, keyboard navigation, modals
A debounce hook goes under "State Management" or "UI Patterns," not "TypeScript." This matches how you'll search for it: by the problem you're solving, not the language you're solving it in.
## The version problem
AI models change. A snippet that worked perfectly with GPT-4 might need adjustments for GPT-5. React 18 patterns differ from React 19 patterns. The TypeScript version matters.
For each snippet, note:
- The AI model and date it was generated
- The framework/runtime version it targets
- Whether you've actually tested it in production
This sounds like overhead. It saves you hours the first time you pull a snippet into a new project and wonder why it doesn't work. "Oh, this was written for React 18 before the new use hook existed" is useful context.
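Concretely, this just means a few extra fields in the snippet format shown earlier. The values below are placeholders, not recommendations:

```markdown
**Generated by:** Claude, 2025-06
**Targets:** React 18, TypeScript 5.x
**Production-tested:** yes (search input, two projects)
```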
Review your library quarterly. Delete snippets for deprecated APIs. Update patterns that have better modern alternatives. A smaller, current library beats a large, stale one.
## Capture the iteration, not just the final output
When you iterate on a code snippet with an AI ("make it handle errors," "add TypeScript types," "optimize for large arrays"), the prompts are as valuable as the output. They're the instructions for reproducing and modifying the snippet.
Save the final code and the key prompt that produced it. Something like:
```
Prompt: "Write a React hook that debounces a value.
TypeScript generics for the value type.
Cleanup on unmount. No external dependencies."
```
This gives you a starting point for variations. Need a debounce hook with a different API? Modify the prompt, don't rewrite the code. The prompt is the recipe; the code is the dish.
Helium links prompts to their output cards specifically for this reason. The prompt-output pair is more valuable than either piece alone.
## Start with your last 5 AI coding sessions
Open your recent conversations in ChatGPT, Claude, or whatever you've been using. Scan the last 5 coding sessions. Pull out the final, working code snippets, the ones you actually used. Save them with a title, tags, and a "when to use" note.
That's your starter library. It'll have maybe 10-15 snippets. More importantly, it'll change how you think about the next AI coding conversation. Instead of "generate and forget," you'll start thinking "generate, validate, and keep."