Testing Your Skills
A skill that works once isn't reliable — it needs to produce the same quality of output across different inputs. This lesson covers how to test slash-command invocation, auto-detection, and output consistency, and how to keep a simple test log so regressions are visible when you edit a skill.
- Verify the skill triggers correctly — both by slash command and auto-detection.
- Check that the format, length, and content match what you specified.
- Record inputs, expected outputs, and results so regressions are visible.
Method 1 — “Try in chat” from the skills page
The fastest way to test a skill after creating or editing it is to use the Try in chat button in the claude.ai skills manager. It opens a fresh conversation with the skill pre-loaded so you can immediately type a test prompt.
Don't just type /standup; also try a natural-language prompt like "I need my standup for today" to test auto-detection at the same time.

Method 2 — test in Claude Code
For skills in ~/.claude/skills/, start Claude Code in your project and run the slash command directly. Test three things in sequence:

1. Slash command: the skill must activate immediately.
2. Auto-detection: the skill should activate when you describe the task without the /.
3. Format: the output must match your format spec.
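Before running these tests, it can help to confirm the skill file is where Claude Code expects it and carries the frontmatter that drives activation. A minimal pre-flight sketch, using the code-review path from this lesson (adjust for your own skill):

```shell
# Pre-flight check: confirm the skill file exists and has the
# frontmatter fields (name, description) that drive activation.
SKILL="$HOME/.claude/skills/code-review/SKILL.md"

if [ -f "$SKILL" ]; then
  for field in name description; do
    grep -q "^$field:" "$SKILL" \
      && echo "ok: $field present" \
      || echo "warn: $field missing, activation may not trigger"
  done
else
  echo "note: $SKILL not found on this machine"
fi
```

The description field matters most here: it is what auto-detection keys on, so a missing or vague description usually explains a skill that works with the slash command but never triggers on its own.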
If anything fails, run cat ~/.claude/skills/code-review/SKILL.md to verify what the skill file actually contains.

What to check on every test run
Use this checklist on each test run. If any check fails, the fix usually lives in the corresponding section of SKILL.md.
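The format check in particular lends itself to automation: capture the skill's output to a file and grep for the sections your format spec requires. A minimal sketch, using hypothetical section names from a standup-style spec (substitute your own):

```shell
# check_format: fail if any required section is missing from an output file.
# The section names below are placeholders for your own format spec.
check_format() {
  for section in "Yesterday" "Today" "Blockers"; do
    grep -q "$section" "$1" || { echo "FAIL: missing '$section'"; return 1; }
  done
  echo "PASS: all required sections present"
}

# Demo against a sample output file.
printf 'Yesterday: shipped the parser\nToday: testing\nBlockers: none\n' \
  > /tmp/skill-out.txt
check_format /tmp/skill-out.txt
```

Because check_format returns a nonzero exit code on failure, the same function can gate a script or CI step instead of relying on eyeballing the output.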
Common failures and how to fix them
Keep a simple test log
Every time you edit a skill, re-run your previous test cases. A plain Markdown file stored alongside the skill is enough — no framework needed. Include the input, expected output, and result (✅ PASS / ⚠️ FAIL).
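A log entry can be as small as one table row per test case. A hypothetical sketch for a standup skill (dates and results are illustrative):

```markdown
## Test log — standup skill

| Date       | Input                         | Expected                  | Result  |
|------------|-------------------------------|---------------------------|---------|
| 2025-01-10 | /standup                      | 3 sections, bullet list   | ✅ PASS |
| 2025-01-10 | "I need my standup for today" | auto-detects, same format | ✅ PASS |
| 2025-01-12 | /standup (after edit)         | 3 sections, bullet list   | ⚠️ FAIL |
```

A FAIL row after an edit is exactly the regression signal this lesson is after: the input and expected columns tell you precisely which behavior to restore.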
What's Next
Testing ensures your skills behave correctly. Next: learn how to pass arguments so a single skill can handle many different inputs.