I Built an Open-Source Move Auditor Skill for Claude Code
3,000+ lines of vulnerability patterns for Sui & Aptos, plugged directly into your AI workflow.
Two weeks ago I open-sourced my generic audit prompts and workflow. Those were chain-agnostic: they worked for Solidity, Rust, Move, whatever. The feedback was great, but one thing kept nagging me: generic prompts produce generic results.
When I started doing more Move audits, both Sui and Aptos, I realized the attack surface is fundamentally different from EVM. Object ownership models, capability patterns, hot potatoes, witness abuse, PTB reentrancy, acquires annotations, resource account escalation. None of this exists in Solidity. My generic prompts weren't catching these.
So I built something specific.
1. What is move-auditor?
It's a Claude Code skill, not a standalone tool, not a CLI binary. You install it once, and it turns Claude Code into a Move-specialized security auditor. It auto-activates when you open .move files.
That's it. No config files, no API keys, no setup beyond a cp.
2. Why a Skill Instead of Just Better Prompts?
My previous repo was a collection of prompts you copy-paste into your workflow. It worked. But there were problems:
- Context fragmentation: you'd paste one prompt, get results, paste another, and lose context from the first pass
- No structured workflow: the order you run checks matters (access control before DeFi vectors, codebase mapping before vulnerability scanning)
- Generic coverage: a prompt that says "check for reentrancy" doesn't know that Sui reentrancy looks nothing like EVM reentrancy
A Claude Code skill solves all three. It's a structured, multi-phase audit workflow that loads chain-specific patterns on demand and produces a consistent report format every time.
3. The 5-Phase Audit Workflow
The skill doesn't just dump a checklist. It runs a structured five-phase process.
Phase 2 is the one I think matters most. Asking the AI to think from three perspectives (attacker, designer, integrator) catches things a single-pass review misses. An attacker thinks about what they can call. A designer thinks about what invariants should hold. An integrator thinks about what happens when this module composes with others.
4. What It Actually Checks
The skill ships with 3,000+ lines of security checks across four reference files. Here's the breakdown:
Common Move Checks (chain-agnostic)
These apply to all Move code, whether Sui, Aptos, or any future Move chain:
- Access control: missing capability gates, copy-ability on caps, hardcoded admin addresses
- Arithmetic: overflow DoS, division-before-multiplication, cast truncation
- Resource safety: leaks, unauthorized extraction, double-spend via phantom resources
- Logic invariants: state machine violations, timestamp manipulation
- Cross-module safety: reentrancy, unvalidated return values, stale state after external calls
- Upgradeability: single-key authority, reinitialization, missing emergency pause
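To make the arithmetic class concrete, here's a schematic Move sketch of the division-before-multiplication bug (module and function names are hypothetical, not taken from the repo):

```move
module example::rewards {
    // VULNERABLE: dividing first truncates toward zero.
    // With amount = 999, total = 1000, rate = 500:
    // 999 / 1000 = 0, then 0 * 500 = 0 -- the user earns nothing.
    public fun reward_bad(amount: u64, total: u64, rate: u64): u64 {
        (amount / total) * rate
    }

    // SAFE: multiply first, widening to u128 so the intermediate
    // product cannot overflow, and divide last:
    // 999 * 500 / 1000 = 499.
    public fun reward_good(amount: u64, total: u64, rate: u64): u64 {
        (((amount as u128) * (rate as u128) / (total as u128)) as u64)
    }
}
```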
Sui-Specific Patterns (SUI-01 to SUI-22)
Sui's object-centric model creates an attack surface that doesn't exist anywhere else:
Object ownership confusion (SUI-01): a shared object treated as owned, or vice versa. Breaks access control silently.
Witness pattern abuse (SUI-03): a one-time witness with the copy ability. Supposed to be used once, now usable forever.
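A schematic sketch of what that looks like (hypothetical module, simplified from Sui's actual one-time-witness rules):

```move
module example::otw {
    // VULNERABLE: a "one-time" witness with the copy ability can be
    // duplicated, so anything gated on consuming it (e.g. creating
    // a currency) can be repeated at will.
    struct OTW has copy, drop {}

    // SAFE: drop only. The runtime delivers exactly one instance to
    // the module's init function, and no code path can mint another.
    // struct OTW has drop {}
}
```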
PTB state inconsistency (SUI-02): Programmable Transaction Blocks let you compose calls in a single transaction. Shared objects can be mutated between calls within the same PTB.
Dynamic field injection (SUI-06): an attacker adds unexpected fields to objects they interact with.
Hot potato misuse (SUI-09): the hot potato pattern enforces "must be consumed in the same transaction." If the consumption check is weak, the enforced flow is bypassable.
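The hot potato idea, as a hedged sketch (hypothetical module; the ability constraints are the point, the business logic is illustrative):

```move
module example::flash {
    // Hot potato: a struct with no abilities at all. The holder cannot
    // store, drop, or copy it, so the only way to finish the transaction
    // is to hand it back to repay() -- that's the enforced flow.
    struct Receipt { amount: u64 }

    public fun borrow(amount: u64): Receipt {
        // ... send the borrowed funds to the caller ...
        Receipt { amount }
    }

    public fun repay(receipt: Receipt, repaid: u64) {
        let Receipt { amount } = receipt; // unpacking = consuming
        // This assert IS the consumption check. If it's missing or
        // compares the wrong values, the potato still gets consumed,
        // but the loan is effectively free.
        assert!(repaid >= amount, 0);
    }
}
```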
Aptos-Specific Patterns (APT-01 to APT-21)
Aptos's global storage model has its own class of bugs:
Missing acquires (APT-01): forgetting the annotation means the function can't actually access the resource. Sounds harmless until it silently skips a critical check.
Resource account escalation (APT-02): the signer capability for a resource account gets leaked or stored with copy.
Coin type confusion (APT-03): generic type parameters let attackers pass a worthless coin type where a valuable one is expected.
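A schematic sketch of the coin-type-confusion shape (hypothetical module; USDC here is a stand-in type, and the pricing logic is illustrative only):

```move
module example::vault {
    use aptos_std::type_info;

    struct USDC {} // stand-in for the real accepted asset type

    const E_WRONG_COIN: u64 = 1;
    const PRICE: u64 = 100;

    // VULNERABLE: values whatever CoinType the caller supplies at the
    // USDC price -- deposit a worthless coin, get full credit.
    public fun value_bad<CoinType>(amount: u64): u64 {
        amount * PRICE
    }

    // SAFE: pin the generic to the expected asset before pricing it.
    public fun value_good<CoinType>(amount: u64): u64 {
        assert!(
            type_info::type_of<CoinType>() == type_info::type_of<USDC>(),
            E_WRONG_COIN
        );
        amount * PRICE
    }
}
```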
ConstructorRef leaks (APT-17): if a ConstructorRef is stored or returned, anyone can mint unlimited tokens.
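Sketched out (hypothetical module; the object calls follow the aptos_framework::object API, but treat the details as illustrative):

```move
module example::collection {
    use aptos_framework::object::{Self, ConstructorRef};

    // VULNERABLE: returning the ConstructorRef from a public function
    // hands every caller the root capability for the new object --
    // they can derive its signer, transfer refs, mint refs, etc.
    public fun create_bad(creator: &signer): ConstructorRef {
        object::create_named_object(creator, b"collection")
    }

    // SAFE: derive whatever refs are needed internally and let the
    // ConstructorRef go out of scope, after which no code can ever
    // reconstruct it.
    public fun create_good(creator: &signer) {
        let ctor = object::create_named_object(creator, b"collection");
        // ... configure the object using ctor ...
        let _ = ctor; // dropped here, never escapes the module
    }
}
```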
Move 2.2 reentrancy (APT-21): new dispatch patterns in Move 2.2 reintroduce reentrancy vectors that didn't exist before.
DeFi Attack Vectors (DEFI-01 to DEFI-10)
For any Move protocol touching tokens, swaps, lending, or oracles:
- Oracle manipulation (spot price, TWAP staleness, circular dependencies)
- Flash loan attack surface and composability risks
- First-depositor attacks on liquidity pools
- Liquidation mechanism abuse and reward calculation errors
- Governance/timelock bypass patterns
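The first-depositor item deserves a concrete shape. A hedged sketch of the vulnerable share math (hypothetical module and names; real pools carry more state):

```move
module example::pool {
    // VULNERABLE share math: shares = deposit * total_shares / total_assets.
    // Attack: the first depositor mints 1 share for 1 unit, then transfers
    // a large amount directly into the pool (a "donation"). total_assets
    // is now huge while total_shares is 1, so the next depositor's share
    // count rounds down to 0 and their deposit accrues to the attacker.
    public fun shares_for_deposit(
        deposit: u64, total_shares: u64, total_assets: u64
    ): u64 {
        if (total_shares == 0) { deposit }
        else { deposit * total_shares / total_assets }
    }

    // Common mitigation: mint and permanently lock a minimum share amount
    // at pool creation, so no depositor can ever own ~100% of a tiny
    // share supply and make the donation attack profitable.
    const MINIMUM_LIQUIDITY: u64 = 1000;
}
```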
5. Every Check Has Vulnerable + Safe Code
This is a rule I set for the repo: no check gets added without a concrete code example showing both the vulnerable pattern and the fix. Abstract descriptions are useless when you're reviewing actual code.
Every check follows this pattern. You see the bug, you see the fix. No ambiguity.
6. The Report Format
Each finding in the output follows a consistent structure:
Severity table at the top, PoC scenario for every finding, verified clean checks at the bottom. Consistent every time.
7. The Roadmap, and Why Benchmarking Matters Most
Here's what's planned:
- Vulnerability database: real-world Move CVEs and audit contest findings, tagged by pattern
- Protocol-specific patterns: Cetus, Aftermath, Turbos on Sui; Thala, Aries, Echelon on Aptos
- Automated grep patterns: lightweight static analysis for common Move anti-patterns
- Report templates: different formats for private audits vs. contest submissions
- Benchmarking: the big one
On benchmarking: Right now, I know the skill finds things that generic prompts miss. But I can't prove it with numbers. The plan is to benchmark move-auditor against three baselines: (1) raw Claude with no skill, (2) Claude with my old generic prompts, and (3) manual review alone. Run all three against the same set of known-vulnerable Move contracts and measure: what gets caught, what gets missed, how many false positives.
If a tool can't beat "just reading the code carefully," it shouldn't exist. I want to know exactly where the skill adds value and where it doesn't. That data will also drive which checks get improved and which new ones get prioritized.
This is the item on the roadmap I'm most interested in. Open-sourcing code is easy. Proving it works is harder. The benchmarking results will be published alongside the skill so anyone can verify the claims.
8. What This Doesn't Replace
AI-assisted audit output must be manually verified. This skill accelerates your workflow โ it does not replace deep manual review and PoC testing. Every finding the skill generates needs human confirmation before it goes into a report. The skill is a force multiplier, not an autopilot.
If you treat AI output as final, you will ship false positives and miss real bugs. The skill is designed to surface candidates faster โ you still need to verify, test, and think.
9. How to Contribute
The repo is MIT licensed and contributions are welcome:
- New common checks go in common-move.md
- Sui-specific checks go in sui-patterns.md, numbered SUI-XX
- Aptos-specific checks go in aptos-patterns.md, numbered APT-XX
- DeFi vectors go in defi-vectors.md, numbered DEFI-XX
Every new check must include a code example showing both the vulnerable and safe pattern. No exceptions.
The skill is at github.com/pantheraudits/move-auditor. If you audit Move contracts, try it on your next review and tell me what it catches that you wouldn't have found otherwise. That's the only metric that matters.
- Panther · GitHub · Telegram
Disclaimer: The Move language and the Sui/Aptos ecosystems evolve fast. APIs change, new patterns emerge, and security considerations shift with each upgrade. If you find anything in this post that is outdated or inaccurate, please reach out to me so I can update it.