You ask Claude Code, "Did you read CLAUDE.md?" It cheerfully replies, "Yes, I read it." A few turns later, none of the rules in that file are actually being followed — your project-specific commands, commit message conventions, deploy steps. All of it has quietly evaporated.
This is not a Claude Code-only problem. The same drift shows up in Cursor's .cursor/rules, GitHub Copilot's .github/copilot-instructions.md, and Codex CLI's AGENTS.md after you've used them long enough.
This article lays out the five root causes behind AI agents ignoring your .md rule files, then walks through quick rewrite techniques and longer-term systemization with Hooks and sub-agents, with concrete examples throughout.
Why your rules get ignored — and how to systemize the fix
1. Why AI ignores your rules — 5 root causes
1. Structural limits of the context window
An LLM does not pay equal attention to every token in its input. Instructions buried in the middle of a long document get lost — the well-documented "lost in the middle" effect. Once your CLAUDE.md crosses about 200 lines, anything in the middle effectively disappears.
2. Auto-compact in long sessions
Claude Code has a /compact feature that compresses conversation history. When it triggers automatically, system-prompt-level instructions survive but the operational details from CLAUDE.md get summarized away or dropped entirely. That is why rule violations cluster in the back half of long sessions.
3. Direct user instructions take priority
For an AI, "what the user just said" outweighs "a rule it read 300 turns ago." The moment a user says "just do it" or "go ahead and commit," your "always confirm before commit" rule in CLAUDE.md gets steamrolled.
4. Vague or contradictory rules
Subjective directives like "write things politely" or "handle this appropriately" leave the AI to interpret on its own, which rarely matches what you wanted. Rewrite them as verifiable constraints: "keep responses under 3 lines," "use chat.postMessage for the Slack API," and so on.
5. Bloated, scattered rule files
CLAUDE.md plus SPEC.md plus README.md plus a pile of other .md files — once they're spread out and individually long, the AI does not read them with equal weight. When the same rule appears in multiple files with subtle differences, the AI tends to pick the safest-looking version.
2. How to diagnose whether your rules are actually being followed
Start by checking the current state. Throw the following questions at the AI and watch the response:
| Question | What to look for |
|---|---|
| "List every rule in CLAUDE.md as bullet points." | Anything missing is something the AI is not aware of |
| "Before you write the next chunk of code, declare which CLAUDE.md rules you'll follow." | Rules it doesn't declare won't be applied |
| "In the last 5 turns, list anything you may have done that violated CLAUDE.md." | Whether the AI is even aware of its own violations changes your strategy |
The AI saying "I read it" or "I understand" tells you nothing. Whether it actually applies the rules is a separate question — judge by post-execution behavior, not by what it claims.
3. Quick fixes you can ship in 5 minutes
1. Compress your rule file to under 150 lines
In practice, Claude Code reliably follows a CLAUDE.md of roughly 100 to 150 lines. Beyond that, split it up:
- Non-negotiable rules (10 to 20 lines) — top of CLAUDE.md
- Service-specific specs — break out into SPEC-xxx.md
- History and background — move to a docs/ directory
Keep CLAUDE.md to "things that are catastrophic if forgotten." Detail can live in linked files.
2. Add explicit priority markers
Use loud priority keywords to force the AI's attention:
- CRITICAL — violation causes a production incident
- MUST — always follow
- SHOULD — follow by default
- NICE TO HAVE — only if there's room
A line like "CRITICAL: destructive queries against the production DB require explicit approval" is much harder to overlook, even buried in a long file.
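As a sketch, the top of a prioritized CLAUDE.md might look like this (the rules themselves are illustrative placeholders; substitute your own):

```markdown
# CLAUDE.md

## CRITICAL (violation causes a production incident)
- CRITICAL: Destructive queries against the production DB require explicit approval.
- CRITICAL: Never commit secrets; keys live in .env only.

## MUST (always follow)
- MUST: Run the test suite before every commit.
- MUST: Commit messages follow the project's commit conventions.

## SHOULD (follow by default)
- SHOULD: Keep responses under 3 lines unless asked for detail.
```

Grouping rules under priority headings also makes it easy to audit whether CRITICAL stays scarce.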
3. Re-emphasize in the chat itself
At the start of a session, drop in a one-liner like "Before you start, declare the three most important rules from CLAUDE.md." That pulls those rules into the recent context and makes them stick for the rest of the session.
4. Log dangerous operations into TodoWrite
Use your agent's TodoWrite tool (or the equivalent task-tracking feature in your tool) to include "rule check" as an explicit step. Making the step visible makes "I forgot" much rarer.
4. Long-term systemization — Hooks, sub-agents, slash commands
You can only push prose-based rules so far. Shifting from "ask the AI to follow rules" to "make the environment enforce them" is far more reliable.
1. Mechanically enforce with Claude Code Hooks
The Hooks feature in Claude Code lets you run arbitrary scripts before or after specific tool calls. You can build a setup where the system stops the AI even when the AI itself forgets the rule.
For example, with a PreToolUse hook:
- Detect dangerous commands (rm -rf, git push --force) before Bash runs and block them
- Check file permissions or lock state before Edit touches a target file
- Run project-specific tests before commit and abort on failure
Asking nicely in prose is no contest against mechanical enforcement via Hooks.
2. Separate concerns with sub-agents
Use the sub-agent capabilities in the Claude Agent SDK or Cursor to build a dedicated "rule-audit agent." A two-stage flow where the main agent writes code and the audit agent reviews it catches rule violations even when the main agent forgets.
The sub-agent only needs a short audit-focused prompt, so its rule-recognition rate stays high. That's the key.
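In Claude Code, for instance, such an auditor can be defined as a project sub-agent. The sketch below assumes Claude Code's sub-agent file convention (a markdown file under .claude/agents/ with YAML frontmatter); check the current documentation for exact field names:

```markdown
---
name: rule-auditor
description: Reviews changed code against CLAUDE.md. Use after any code edit.
tools: Read, Grep
---
You are a rule auditor. Read CLAUDE.md, then check the changed files
against every CRITICAL and MUST rule. Report each violation with the
file, the line, and the rule it breaks. Do not modify code yourself.
```

Because this prompt is only a few lines, the auditor's context stays small and its rule-recognition rate stays high.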
3. Codify routines as custom slash commands
Repeated workflows like "run these 3 checks before every commit" are a perfect fit for a slash command (/precommit, etc.). When the user types /precommit, the AI runs a fixed sequence. No need to re-read the rule file every time.
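As a sketch, a Claude Code project command lives under .claude/commands/ as a markdown prompt file; a file named precommit.md there would define /precommit (the check commands below are placeholders for your project):

```markdown
Run the pre-commit checklist in order and stop at the first failure:

1. Run the linter: npm run lint
2. Run the test suite: npm test
3. Scan the staged diff for stray console.log calls and hard-coded keys

Report the result of each step, then ask for approval before committing.
```

The workflow now lives in one versioned file instead of depending on the AI remembering a rule mid-session.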
4. Detect violations with automated scripts
Run grep-based detection for forbidden patterns in CI or as a pre-commit hook. Examples:
- Stray console.log in production code
- Hard-coded API keys
- Missing copyright headers
Don't rely on the AI — let machines find it. This is your last line of defense.
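As a last-line-of-defense sketch, a small scanner like the following can run in CI or a pre-commit hook (the patterns and thresholds are illustrative; tune them to your project's actual rules):

```python
#!/usr/bin/env python3
"""Scan files for forbidden patterns and fail the build if any are found."""
import re
import sys
from pathlib import Path

# Forbidden patterns (illustrative; extend to match your own rules).
RULES = {
    "stray console.log": re.compile(r"\bconsole\.log\("),
    "hard-coded API key": re.compile(
        r"(?:api[_-]?key|secret)\s*=\s*['\"][A-Za-z0-9_\-]{16,}['\"]", re.IGNORECASE
    ),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every rule the given text violates."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

def scan_files(paths: list[Path]) -> int:
    """Print violations per file and return the number of offending files."""
    offending = 0
    for path in paths:
        hits = scan_text(path.read_text(errors="ignore"))
        if hits:
            offending += 1
            print(f"{path}: {', '.join(hits)}")
    return offending

if __name__ == "__main__":
    targets = [Path(p) for p in sys.argv[1:] if Path(p).is_file()]
    if scan_files(targets):
        sys.exit(1)  # non-zero exit fails the CI step or pre-commit hook
```

Invoke it with the files to check as arguments; any non-zero exit fails the commit or pipeline regardless of what the AI believed it was doing.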
5. Best practices by tool
Rule-design tips per major AI agent
The universal rule is "short, specific, prioritized." Filenames and locations vary by tool, but the writing principles are the same.
6. Three rule-design anti-patterns
1. "Please follow best practices."
Whose best practices? The AI defaults to the average of its training data, which means you get the lowest common denominator. Project-specific quirks never make it in. Write concrete do's and don'ts instead.
2. The same rule duplicated across multiple files
If your "commit conventions" live in CLAUDE.md and SPEC.md and README.md, the three copies will subtly drift apart and confuse the AI. Pick one source of truth and link to it from everywhere else.
3. Stamping "ABSOLUTELY MUST" on everything
If everything is "absolute," the AI loses the ability to prioritize. Reserve "CRITICAL" for the genuinely catastrophic; let the rest sit in normal prose. Emphasis suffers from inflation — keep it scarce.
Summary
When AI agents ignore your .md rules, it is almost always due to structural constraints and rule-design problems, not laziness on the AI's part. Most cases are solved by quick fixes — compress to under 150 lines, add priority markers, re-emphasize in the chat. For the truly critical rules left over, move to mechanical enforcement: Hooks, sub-agents, slash commands, and automated detection scripts.
"If the rules aren't being followed, build a system that forces them" — the new common sense in operating AI agents.
FAQ
Q1. What's the ideal length for CLAUDE.md?
In practice, 100 to 150 lines. Past 200, split into SPEC-xxx.md or similar. Putting a "top 5 critical rules" summary at the very top is good insurance against forgetting.
Q2. Should I use Cursor's .cursorrules or .cursor/rules/*.mdc?
For new setups, .cursor/rules/*.mdc. One rule per file, with glob patterns to scope where each rule applies. The legacy single-file .cursorrules tends to bloat over time.
Q3. Do longer rules make the AI more strict?
The opposite. The longer the file, the more the middle gets diluted. Short, essential rules placed prominently (at the top) get followed at higher rates.
Q4. What if I use multiple AI tools (Claude Code + Cursor) on the same project?
The pragmatic answer is to put the same summary in each tool's config file and centralize the details in a shared SPEC.md. There is no fully unified solution yet (as of May 2026).
Q5. When the AI says "I read it," is it really not reading it?
The claim "I read it" and the actual runtime behavior are different things. Use the diagnostic methods in section 2 to verify execution-level recognition.