7 min read · Helium Team

How to Give AI Context About Yourself (Without Repeating It Every Time)

Open ChatGPT. Start typing. Within three messages, you've written some version of this:

"I'm building a React Native app with Expo, TypeScript, and Supabase. The app uses offline-first SQLite with cloud sync. I prefer named exports, Zod for validation, and concise responses with code-first explanations."

You've typed this paragraph, or something like it, dozens of times. Maybe hundreds. Every new conversation, every new chat, every time you switch platforms. It's the tax you pay for working with an AI that has no memory of who you are.

The cold-start problem costs more than you think

When an AI starts with zero context about you, the first 3-5 messages are wasted on calibration. You explain your stack. The AI gives a generic answer. You correct it. The AI adjusts. By message 5, it finally understands your situation, and you've burned 5,000+ tokens getting there.

Multiply this across 5-10 new conversations per day, and you're spending 25-50 messages daily just on context setup. That's 30-60 minutes of typing that produces zero useful output.

The quality hit is worse than the time hit. Without context, the AI defaults to the most common interpretation of your question. Ask "how should I set up auth?" and you'll get a Firebase tutorial, even if you told it in previous conversations, which it can no longer access, that you use Supabase. The response isn't wrong. It's just not for you.

ChatGPT's Custom Instructions (partial solution)

ChatGPT lets you set "Custom Instructions": two text blocks that persist across every conversation. One for "What would you like ChatGPT to know about you?" and one for "How would you like ChatGPT to respond?"

This helps. It's also limited:

The space is small: 1,500 characters per block, which works out to roughly 250 words each. Enough for a brief bio and some preferences, not enough for your full tech stack, project architecture, and coding conventions.

It's ChatGPT-only. If you switch to Claude for a long-context task, your Custom Instructions don't follow. You're back to the cold start.

It's static. You work on multiple projects with different tech stacks. Custom Instructions apply globally. You can't switch them per conversation without manually editing them each time.

Claude has a similar "system prompt" capability if you're using the API, but the consumer web interface doesn't have an equivalent custom instructions feature. Gemini relies on Google's broader personalization, which is even less controllable.

Build a context document

The approach that actually scales is maintaining a personal context document: a structured text file that describes who you are, what you're building, and how you want the AI to respond. You copy it into the start of any conversation on any platform.

Here's the structure that works:

## Who I Am
Senior frontend developer, 6 years experience.
Solo founder building a B2C mobile app.

## Tech Stack
- React Native (Expo SDK 52)
- TypeScript (strict mode, no `any`)
- Supabase (PostgreSQL + Auth + Storage)
- SQLite (offline-first, cloud sync)
- Zustand for state management
- Zod for schema validation
- NativeWind (Tailwind for RN)

## Current Project
Helium: an LLM companion app that captures AI outputs,
stores conversations, manages prompts, and resurfaces
knowledge. Core loop: capture, organize, copy back
into next LLM conversation.

## Code Preferences
- Named exports everywhere
- Function declarations for components, arrows for callbacks
- Zod schemas as source of truth, types derived via z.infer<>
- Early returns over nested conditionals
- No business logic in components, extract to /core

## Communication Style
- Code first, explanation second
- Concise, skip the preamble
- If I ask for a solution, give me the code, not a description of code
- Flag trade-offs and limitations, don't just give the happy path

This is about 200 words, or roughly 300 tokens. Pasting it into a conversation takes 2 seconds and saves 5 minutes of calibration. The AI immediately knows your stack, your project, your preferences, and how you want responses formatted.

Multiple contexts for multiple roles

Most people don't have one context. They have several:

Your day job context has your company's tech stack, internal conventions, and team preferences.

Your side project context has a completely different stack and set of constraints.

Your learning context might describe the topics you're studying and your current skill level, so the AI calibrates its explanations appropriately.

Maintain separate context documents for each. Copy the relevant one at the start of each conversation. This sounds manual, and it is, but it's dramatically less effort than re-typing your stack every time.
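
The copy step itself can be a one-liner script. Here's a minimal TypeScript sketch, assuming one markdown file per role; the role names and file names (`work.md`, `side-project.md`, `learning.md`) are placeholders for whatever your own setup uses:

```typescript
import * as fs from "fs";
import * as path from "path";

// Map each role to its context file. These file names are assumptions;
// substitute whatever names fit your own setup.
const CONTEXT_FILES: Record<string, string> = {
  work: "work.md",
  side: "side-project.md",
  learning: "learning.md",
};

// Read the context document for the given role from a contexts directory.
function loadContext(role: string, dir = "./contexts"): string {
  const file = CONTEXT_FILES[role];
  if (!file) {
    throw new Error(
      `Unknown role "${role}". Known roles: ${Object.keys(CONTEXT_FILES).join(", ")}`
    );
  }
  return fs.readFileSync(path.join(dir, file), "utf8");
}
```

Wire this up to `process.argv` and pipe the output through `pbcopy` (macOS) or `xclip` (Linux), and picking the right context becomes a single shell command.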

ChatGPT Projects partially address this by maintaining separate instruction sets per project. But the instructions are limited in length, the format isn't portable, and you still can't use them in Claude or Gemini.

Auto-suggestions: let your usage patterns build your context

Here's something most people miss: your AI usage patterns are your context. If you've asked 30 questions about React hooks and 20 about Supabase RLS policies, that tells you exactly what belongs in your context document.

Some tools analyze your saved AI outputs and suggest context additions: "You've saved 12 TypeScript cards. Add TypeScript to your tech stack?" This is a better approach than trying to write your context document from scratch, because it's grounded in what you actually use, not what you think you should include.

Even without automated suggestions, you can do this manually. Every few weeks, scan your recent AI conversations. What topics keep coming up? What context do you keep re-explaining? Those are the items that belong in your document.
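
If your conversation titles live in an export or a notes file, even the manual scan can be partially automated. A rough TypeScript sketch, where the keyword list is illustrative and should be seeded with your own stack:

```typescript
// Count how often candidate topics appear in saved conversation titles,
// returning the ones that recur at least minCount times, most frequent first.
function topTopics(titles: string[], keywords: string[], minCount = 3): string[] {
  const counts = new Map<string, number>();
  for (const title of titles) {
    const lower = title.toLowerCase();
    for (const kw of keywords) {
      if (lower.includes(kw.toLowerCase())) {
        counts.set(kw, (counts.get(kw) ?? 0) + 1);
      }
    }
  }
  return [...counts.entries()]
    .filter(([, n]) => n >= minCount)
    .sort((a, b) => b[1] - a[1])
    .map(([kw]) => kw);
}
```

Anything that clears the threshold is a strong candidate for your context document's tech stack or current-project section.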

The combined context pattern

The most powerful version combines your personal context with project-specific context:

## About Me
[Your personal context: role, preferences, communication style]

## Project: Helium
[Project-specific context: tech stack, architecture, current goals]

The personal section stays constant. The project section swaps depending on what you're working on. When you paste both into a conversation, the AI knows who you are and what you're working on. First response is immediately useful. No warm-up needed.

This combined context typically runs 300-500 words, or roughly 400-700 tokens. That's well under 1% of a 128K context window. A tiny investment for a massive quality improvement.
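
The assembly step is simple enough to script. A sketch in TypeScript, where the section contents are examples, not prescriptions:

```typescript
// Assemble the combined document: a constant personal section plus a
// swappable project section, using the same heading structure as above.
function buildContext(personal: string, projectName: string, project: string): string {
  return [
    "## About Me",
    personal.trim(),
    "",
    `## Project: ${projectName}`,
    project.trim(),
  ].join("\n");
}

const combined = buildContext(
  "Senior frontend developer. Code first, concise answers.",
  "Helium",
  "React Native + Expo, TypeScript, Supabase, offline-first SQLite."
);
```

Keep one personal string and one project string per project, and swapping contexts is a matter of changing a single argument.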

Token budgeting

Context isn't free. Every token in your context document is a token that can't be used for conversation. Here's how to think about the budget:

At 1,500 tokens for a comprehensive context document, you're using about 1.2% of a 128K context window. That leaves room for 120+ exchanges before hitting the limit. The trade-off is overwhelmingly worth it.

Where it gets tricky is context packs: when you paste your personal context, project context, and a bunch of reference material into the conversation. Now you might be at 10,000-15,000 tokens before your first message. Still manageable, but you should be intentional about what you include.

The rule of thumb: include context that will save more tokens than it costs. Your tech stack (50 tokens) saves you from re-explaining it in every response (500+ tokens across the conversation). Your coding preferences (100 tokens) prevent the AI from generating code in a style you'll have to rewrite. Both are positive ROI.

Reference docs you might need? Include them only if there's a >50% chance you'll ask about them. Otherwise, paste them in when the conversation actually goes there.
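
You can sanity-check a context pack before pasting it with the common rough heuristic of ~4 characters per token for English text. A real tokenizer gives exact counts; this TypeScript sketch is only a ballpark:

```typescript
// Rough token estimate using the ~4 characters/token heuristic for English.
const CHARS_PER_TOKEN = 4;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

// Fraction of a context window the pasted context would consume.
function budgetShare(text: string, windowTokens = 128_000): number {
  return estimateTokens(text) / windowTokens;
}
```

If `budgetShare` comes back above a few percent, that's the signal to trim reference material and paste it later, when the conversation actually needs it.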

Start with 15 minutes

You don't need a perfect context document. Open a text file and write down:

  1. Your role and experience level (one sentence)
  2. Your current tech stack (a list)
  3. What you're building right now (two sentences)
  4. Three things you always tell AI about how you want responses

That's your v1. Paste it at the start of your next three AI conversations and notice the difference. The AI's first response will be specific to your situation instead of generic. You'll skip the calibration dance. And you'll wonder why you didn't do this months ago.

Helium calls this "My Context": a structured editor with multiple profiles, auto-suggestions from your usage patterns, and one-tap copy into any LLM. But a plain text file does the job. The system matters more than the tool.

ai · productivity · context · workflow