Claude Code · Lesson 7 of 25

Getting Better Results

Claude Code is powerful enough to build almost anything. When results fall short, the cause is almost always the input. A few consistent habits change the quality of every response you get — and they take less than a minute to apply.

The core principle: quality in, quality out

Claude reasons from what you give it. A vague task produces a generic answer. A specific task with context produces precise, correct output. The model does not change — your input does.

Better input = better output. Every time.

Four habits that work

Apply all four and the improvement is dramatic. Even adopting one or two makes a noticeable difference from day one.

01🧩

Think in Features

Not products — individual features

  • Break every task into one specific, atomic feature
  • Each feature becomes a single clear request to Claude
  • Smaller scope = fewer assumptions = better code
One feature per prompt
02🎯

Be Specific

Describe exactly what you want

  • Name the files, libraries, and patterns to use
  • Describe the expected behavior, not just the goal
  • Include edge cases and error states up front
Clear input = clear output
03📎

Provide Context

Help Claude see what you see

  • Use @filename to pin the exact file to work on
  • Use CLAUDE.md to share conventions once for all sessions
  • Mention packages, patterns, or constraints already in place
Context eliminates guesswork
04🧪

Test As You Go

Catch problems before they compound

  • Verify each feature works before requesting the next one
  • Ask Claude to write a quick test alongside each feature
  • Never stack three unverified changes on top of each other
Build → Verify → Continue

See the Difference

Same task, two prompts. One leaves Claude to guess, the other gives it everything it needs.

Vague prompt
>add user authentication

Claude does not know which library to use, which files to touch, what the success state looks like, or how errors should appear to the user. It will make assumptions — and they may not match your project.

Specific prompt
>Add email/password login to @app/api/auth/route.ts. Use NextAuth.js (already in @package.json). On success redirect to /dashboard. Show a red error message under the form if credentials are wrong.

Claude knows the exact file, the library already in the project, the success path, and how errors should look. Nothing left to guess.
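To see what those pinned-down requirements buy, the success and error behavior from the specific prompt can be sketched as one small decision function. This is a hypothetical simplification for illustration only: SignInResult and nextStep are illustrative names, not real NextAuth.js APIs.

```typescript
// Hypothetical, simplified shape of a client-side sign-in result.
type SignInResult = { ok: boolean };

// What the form does next, exactly as the specific prompt dictates:
// success redirects to /dashboard, failure shows an error message.
function nextStep(result: SignInResult): { redirect: string } | { error: string } {
  return result.ok
    ? { redirect: "/dashboard" }
    : { error: "Incorrect email or password" };
}
```

Because the prompt named both outcomes, neither branch is left for Claude to invent.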

Rule of thumb: If you can imagine multiple reasonable ways to implement something, your prompt is not specific enough. Add one more sentence of detail.

Providing Context Efficiently

The fastest way to improve results is to reduce what Claude has to assume. Two tools do this automatically.

@filename: Pin the exact file

Typing @ in the Claude panel opens a file picker. Claude reads the pinned file automatically — no copy-paste needed. Use Alt+K (Windows) or Option+K (Mac) with selected text to add the exact line range.

Terminal
$ claude
> Refactor the fetchUser function in @src/lib/api.ts to return null instead of throwing on 404.
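If that refactor lands as requested, src/lib/api.ts might end up looking roughly like this. A hypothetical sketch: the User type and the injectable fetchFn parameter are illustrative additions so the function can be exercised without a network, not part of the prompt.

```typescript
// Hypothetical post-refactor src/lib/api.ts.
export type User = { id: string; name: string };

export async function fetchUser(
  id: string,
  fetchFn: typeof fetch = fetch, // injectable for testing; defaults to global fetch
): Promise<User | null> {
  const res = await fetchFn(`/api/users/${id}`);
  if (res.status === 404) return null; // "not found" is now an expected state, not a thrown error
  if (!res.ok) throw new Error(`fetchUser failed with status ${res.status}`);
  return (await res.json()) as User;
}
```

Callers now branch on null instead of wrapping every call in try/catch.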
CLAUDE.md: Project conventions, once

A CLAUDE.md file at the root of your project is read automatically by Claude every session. Put your stack, naming conventions, folder structure, and common patterns there. You stop repeating context that Claude should always know.

# CLAUDE.md
Stack: Next.js 15 App Router, TypeScript, Tailwind
API routes: src/app/api/
Components: src/components/, always use 'use client' where needed
Tests: Vitest, colocated with source files
Avoid: any, console.log in production code
🧪

Test As You Go

Bugs compound quickly when you add new features on top of untested ones. A single verification step after each feature keeps the foundation solid.

1️⃣

Build the feature

Ask Claude to implement one specific thing. Accept its changes.

2️⃣

Ask Claude to test it

Prompt: "Write a test for this that covers the happy path and the error case." Run it.

3️⃣

Fix before continuing

If anything fails, resolve it here. Do not move to the next feature until this one works.

With auto-accept mode on: use checkpoints as your safety net. Let Claude run freely, then rewind to a working checkpoint if something breaks.
🔁

When Results Disappoint

Poor output is feedback about your prompt, not the model's capability. Here is how to course-correct quickly.

Symptom → What to do

  • Output is generic or off-topic → Add specificity: name the file, library, and expected behavior
  • Claude used the wrong library → Reference @package.json, or note in CLAUDE.md which packages are in use
  • Changes touched files you didn't want → Pin the exact file with @filename and say which files to leave alone
  • The feature works but breaks something else → Ask Claude to write a regression test, or rewind to the last checkpoint
  • Claude seems to have forgotten earlier context → Start a new conversation and re-attach the relevant file with @filename

Quick Reference

  • One feature per prompt → smaller scope = better output
  • Name the file with @filename → no guessing which file to edit
  • Include existing packages → prevents invented dependencies
  • Describe the error state → edge cases get handled correctly
  • Write CLAUDE.md once → conventions carry across sessions
  • Verify before continuing → bugs caught early, not late

Think in features + Be specific + Provide context + Test as you go

= Consistently better results

The model is capable. If results are poor, it's almost always the input.

What's Next

You're getting better results on every run. The next lesson connects Claude Code to your external tools via MCP — GitHub, databases, APIs, and more.