🏗️Claude Skills · Lesson 5

Build Skills from Scratch

Go beyond a single SKILL.md. A production-quality skill has a structured folder with supporting agents, reference docs, and executable scripts. This lesson walks through building one — file by file — using a code review skill as the example.

In this lesson: the full folder structure · all 6 subfolders (agents/, scripts/, references/) · a complete working example · the /skill-creator plugin

The full skill folder structure

This is the structure Anthropic uses for its own published skills (like skill-creator). Only SKILL.md is required — every other folder is optional and added only as your skill grows.

Mockup based on the real Anthropic skill-creator directory structure.

~/.claude/skills/skill-creator/

📦 skill-creator/
├── 📄 SKILL.md       (required)   # main instructions — always present
├── 📁 agents/        (optional)   # sub-agent skill files
├── 📁 assets/        (optional)   # images, data, media files
├── 📁 eval-viewer/   (optional)   # skill evaluation tooling
├── 📁 references/    (optional)   # extra .md docs and schemas
├── 📁 scripts/       (optional)   # executable Python/shell helpers
└── 📄 LICENSE.txt    (optional)   # required if publishing publicly

What each subfolder is for

You don't need all of these for a simple skill. Add them one at a time as you hit a real need — don't pre-create empty folders.

🤖 agents/

Place additional SKILL.md files here for sub-agents. Useful when your skill orchestrates multiple specialised steps — for example, one agent that extracts data and another that formats a report.

e.g. agents/extractor/SKILL.md, agents/formatter/SKILL.md
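To make the sub-agent idea concrete, here is a hypothetical frontmatter for agents/extractor/SKILL.md — the name and description below are illustrative, not taken from a published skill:

```markdown
---
name: extractor
description: >
  Extracts function signatures, imports, and TODO comments from
  source files so the parent skill can review them. Invoked only
  when the parent skill delegates extraction.
---
```

Each sub-agent file follows the same frontmatter format as the main SKILL.md.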
🖼️ assets/

Store images, icons, CSV data files, or any binary resource your skill references. Claude can read these via bash and include them in generated artifacts.

e.g. assets/logo.png, assets/countries.csv
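The read-via-bash pattern can be sketched with a throwaway asset. The /tmp path and CSV contents below are invented for the demo; in a real skill the file would live under assets/:

```shell
# Create a stand-in asset, then read it the way Claude would via bash
mkdir -p /tmp/demo-skill/assets
printf 'code,name\nFR,France\nJP,Japan\n' > /tmp/demo-skill/assets/countries.csv
head -n 2 /tmp/demo-skill/assets/countries.csv
# prints:
# code,name
# FR,France
```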
📊 eval-viewer/

Evaluation tooling — scripts and config that let you run automated tests against your skill and view results. Particularly useful when iterating on skill quality.

e.g. eval-viewer/run_evals.py, eval-viewer/cases.json
📚 references/

Extra Markdown files with deep documentation, API schemas, style guides, or large reference tables. Claude reads these only when SKILL.md tells it to — zero context cost otherwise.

e.g. references/api-schema.md, references/style-guide.md
⚙️ scripts/

Executable Python, shell, or Node scripts that Claude runs via bash. Script code never enters the context window — only the output does. Great for deterministic operations like data validation or file conversion.

e.g. scripts/validate_input.py, scripts/convert_csv.sh
📜 LICENSE.txt

Include a licence file if you plan to share or publish your skill. Anthropic uses MIT for its open-source skills. Not required for private or internal skills.

e.g. MIT License, Copyright 2026

Worked example: code-review-assistant

We'll build this skill folder step by step. Start small — SKILL.md only — then add subfolders when the need arises.

Step 1: Create the folder and write SKILL.md

The skill is just this one file to begin with. The description field is the most important part — write the exact phrases a user would type when asking for a code review.

mkdir -p ~/.claude/skills/code-review-assistant
code-review-assistant/SKILL.md
---
name: code-review-assistant
description: >
  Reviews Python or JavaScript code for bugs, security issues,
  and style problems. Use when the user shares code and asks for
  a review, feedback, or asks to check the code.
---

# Code Review Assistant

## Goal
Identify real problems in the code — not cosmetic preferences.
Produce a structured review a developer can act on immediately.

## Output structure
1. **Summary** — one sentence on overall code quality.
2. **Issues found** — table: | # | Severity | Location | Issue | Fix |
3. **Positives** — 2-3 things done well (keep reviews balanced).
4. **Recommended next step** — single highest-priority action.

## Severity levels
- Critical — security flaw or data loss risk
- High     — likely to cause bugs in production
- Medium   — incorrect but non-critical logic
- Low      — style, naming, minor improvements

## Guidelines
- Reference the exact line or function name, not vague locations.
- Suggest a concrete fix, not just "this is wrong".
- If using a reference file, read references/patterns.md first.
- Keep the tone constructive, not critical of the author.
Step 2: Add a references/ file for deep guidance

Move bulky reference material out of SKILL.md and into a separate file. Claude reads it only when SKILL.md says “read references/patterns.md first” — zero context cost otherwise.

code-review-assistant/references/patterns.md
# Secure Coding Patterns

## Input validation
Always validate and sanitise external input before use.
- Reject unexpected types early (type checks at boundaries).
- Use allow-lists, not block-lists, for string validation.

## SQL queries
Never interpolate user input directly into queries.
Prefer parameterised queries or an ORM.

## Secrets
Never log, print, or include secrets in error messages.
Use environment variables; never hardcode credentials.
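The SQL guidance above can be demonstrated in a few lines of Python. This is a minimal sketch using the stdlib sqlite3 module with an in-memory database; the table and data are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "alice'; DROP TABLE users; --"  # hostile input

# Unsafe pattern the reference tells reviewers to flag:
#   conn.execute(f"SELECT * FROM users WHERE name = '{user_input}'")

# Safe: a parameterised query binds the value as data, never as SQL
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] (the hostile string matches no row)
```

The same placeholder style works with most Python database drivers, which is why the reference recommends it over string interpolation.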
Step 3: Add a scripts/ helper for deterministic checks

Scripts let Claude run reliable, repeatable operations without consuming context for the code itself. Claude executes the script via bash and gets back only the output. Update SKILL.md to tell Claude when to call it.

code-review-assistant/scripts/validate_syntax.py
#!/usr/bin/env python3
"""
validate_syntax.py
Checks Python files for syntax errors before review.
Claude runs this via bash; only the output enters context.
"""
import ast, sys

def validate(path: str) -> None:
    with open(path) as f:
        source = f.read()
    try:
        ast.parse(source)
        print(f"OK: {path} — no syntax errors")
    except SyntaxError as e:
        print(f"ERROR: {path} — line {e.lineno}: {e.msg}")
        sys.exit(1)

if __name__ == "__main__":
    for p in sys.argv[1:]:
        validate(p)
ℹ️ Info: Add one line to SKILL.md's Guidelines: “Before reviewing, run python scripts/validate_syntax.py <file> and report any syntax errors first.”
Step 4: Final folder layout

After completing the three steps above, the skill folder looks like this. Notice it stays lean — we only added the subfolders we actually needed.

code-review-assistant/
├── SKILL.md              (required)
├── references/
│   └── patterns.md
└── scripts/
    └── validate_syntax.py

When to add each subfolder

| Subfolder | Add it when… |
|---|---|
| agents/ | Your skill needs to hand off to a specialised sub-agent (e.g. one agent extracts, another formats) |
| assets/ | Your SKILL.md references an image, logo, CSV, or other binary file |
| eval-viewer/ | You want to run automated evals to measure and improve skill quality |
| references/ | Your SKILL.md is getting too long, or it points to deep docs Claude should load on demand |
| scripts/ | You need a deterministic operation (validation, conversion, calculation) that shouldn't consume context |
| LICENSE.txt | You plan to share or open-source the skill |
💡 Tip: Start every skill as a single SKILL.md. Add subfolders only when you hit a concrete problem — instructions getting too long (add references/), needing reliable calculations (add scripts/), or orchestrating multiple agents (add agents/). A lean skill is easier to debug and maintain.

Three ways to build a skill

You don't have to write every line yourself. Beyond authoring SKILL.md by hand, you can ask Claude in a regular chat to draft it for you, or use the /skill-creator plugin for a full guided workflow — testing and iterating included.

Option 2: ask Claude in chat

Open any Claude chat and ask it to write the SKILL.md content for you. Paste the output into ~/.claude/skills/your-skill/SKILL.md and you're done.

claude.ai
Write a SKILL.md for a Claude skill called "daily-standup". It should help teams run a 5-minute standup — asking each person what they did yesterday, what they're doing today, and any blockers.
Claude:

Here's your SKILL.md:

---
name: daily-standup
description: >
  Guides a team through a structured 5-minute standup.
  For each person, ask: (1) what did you do yesterday,
  (2) what are you doing today, (3) any blockers?
  Keep answers concise. End with a summary table.
---

Save this to ~/.claude/skills/daily-standup/SKILL.md and the skill is ready to use.
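If you prefer to script the save step, here is a minimal shell sketch. The SKILLS_DIR override is an assumption added for flexibility; the default matches the standard skills folder:

```shell
# Save the generated SKILL.md into the skills folder
SKILLS_DIR="${SKILLS_DIR:-$HOME/.claude/skills}"
mkdir -p "$SKILLS_DIR/daily-standup"
cat > "$SKILLS_DIR/daily-standup/SKILL.md" <<'EOF'
---
name: daily-standup
description: >
  Guides a team through a structured 5-minute standup.
  For each person, ask: (1) what did you do yesterday,
  (2) what are you doing today, (3) any blockers?
  Keep answers concise. End with a summary table.
---
EOF
```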

Option 3: use the /skill-creator plugin

The /skill-creator plugin is Anthropic-verified and available in Claude Code. It adds four dedicated modes — Create, Eval, Improve, and Benchmark — so you can write, test, and refine a skill without leaving your terminal.

🛠️
/skill-creator
Anthropic verified · 176,097 installs

Skill Creator is a full development lifecycle toolkit. Once installed, invoke it with /skill-creator and choose a mode. Four composable agents work behind the scenes to write, run, score, and improve your skill automatically.

✏️ Create: Gathers requirements and writes SKILL.md for you through a short Q&A.
🧪 Eval: Runs your skill against test prompts and grades the output automatically.
📈 Improve: Analyses eval results and suggests targeted edits to improve performance.
📊 Benchmark: Runs the skill across multiple iterations and measures variance statistically.

Mockup of a /skill-creator Create session in Claude Code.

Claude Code · /skill-creator active
/skill-creator Create a new skill that reviews PRs for security issues
Claude:

Starting Create mode — let me ask a few quick questions.

1. What languages or frameworks should this skill cover? (e.g. Python, TypeScript, all)

2. Should it flag only critical issues, or also medium/low severity?

3. Do you want a summary table, inline annotations, or both?

Python and TypeScript, all severities, summary table please.
Claude:

Generating skill files…

✔ Created ~/.claude/skills/pr-security-review/
✔ Wrote SKILL.md (name, description, output structure)
✔ Wrote references/owasp-top10.md
✔ Wrote scripts/detect_secrets.py

Skill installed. Try it: /pr-security-review — or run /skill-creator Eval to test it.

Option 1: ✍️ Write SKILL.md by hand
  • Full control over every line
  • Best for simple, focused skills
  • No extra plugin or chat needed
  • Easy to version-control and diff

Option 2: 💬 Ask Claude in chat
  • Claude drafts SKILL.md from your description
  • Works in any Claude chat — no plugin required
  • Fast for simple skills
  • You still paste and save the file manually

Option 3: 🛠️ Use /skill-creator
  • Guided Q&A → generates all skill files
  • Built-in Eval, Improve, Benchmark modes
  • Creates reference and script files too
  • Best for complex, multi-file skills
💡 Tip: Not sure which approach to pick? Start with a regular Claude chat — describe what you want the skill to do and paste the generated SKILL.md into your skills folder. Once you need testing, iteration, or supporting files, switch to the /skill-creator plugin.

What's Next

You've built a skill. Now let's make sure you know all the ways to invoke and use it effectively in your workflow.