How to Audit Aave V3 Fork Deployment Scripts — The Unsexy Attack Surface

The smart contracts can be 99% flawless. The deployment scripts can still wire everything wrong. This post covers what to look for when auditing Aave V3 fork deployment scripts — the bug classes, the patterns, and the real findings from production audits.

✍ 0xTheBlackPanther 📅 Mar 2026 ⏱ 12 min read 🏷 Aave, Deployment, Security, DeFi

Why Deployment Scripts Matter

Most security audits focus on the smart contracts themselves — the Pool, the PoolConfigurator, the Oracle, the liquidation logic. But Aave V3 forks don't just deploy contracts. They wire an entire financial system together through a chain of deployment scripts that decide who owns what, which oracle feeds prices, what risk parameters are active, and whether the market actually functions.

A deployment script bug doesn't corrupt the contract's bytecode. It corrupts the configuration state — and the result is a protocol that deploys successfully, passes basic smoke tests, and silently carries a misconfiguration that breaks under real-world conditions.

This post distills the patterns that actually find bugs in deployment script audits into a practical checklist you can apply to any Aave V3 fork.

What makes deployment script audits different:

1. No on-chain exploit needed — Misconfig is the vulnerability itself
2. Artifacts are evidence — Checked-in deployment JSONs prove what was actually deployed
3. The blast radius is the entire protocol — One wrong address wires the whole market incorrectly
4. Bugs compound silently — A misconfigured oracle doesn't crash at deploy time, it crashes at first borrow
5. Most forks inherit upstream assumptions — The Aave V3 deploy repo was built for upstream networks and markets, not your custom chain

Part 1 — Understanding the Aave V3 Deployment Architecture

The Deployment Flow

An Aave V3 deployment isn't a single script. It's an ordered pipeline of 20+ scripts managed by hardhat-deploy, each tagged with dependencies. Understanding the flow is a prerequisite to auditing it.

// Aave V3 deployment pipeline (simplified)

// Phase 0: Core infrastructure
00_markets_registry        // PoolAddressesProviderRegistry

// Phase 1: Periphery (pre-market)
00_token_setup             // Test tokens (skipped if live)
01_price_aggregators       // Mock oracles (skipped if live)
01_treasury                // Treasury proxy + controller

// Phase 2: Market core
00_setup_addresses_provider // The central registry
01_pool_implementation     // Pool logic
02_pool_configurator       // Admin logic
03_init_acl                // Access control roles
04_deploy_oracles          // AaveOracle + price sources
06_init_pool               // Pool proxy setup
07_incentives              // Rewards controller
08_tokens_implementations  // AToken, DebtToken impls
09_init_reserves           // Wire everything together

// Phase 3: Post-deploy
01-after-deploy            // E-modes, caps, ownership transfer

Each script has a func.id that makes it idempotent — it won't re-run if the ID matches a previous execution. Each script declares dependencies via func.dependencies. And each script may conditionally skip itself based on market configuration.

Audit insight: The conditional skipping is where most bugs live. Script A skips based on one condition. Script B downstream assumes Script A ran. The conditions don't always agree.
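This skip/assume mismatch can be made concrete with a small, self-contained sketch. It is plain TypeScript with no hardhat-deploy dependency; the step IDs mirror the real scripts, but the pipeline runner itself is a hypothetical model, not the framework's actual scheduler:

```typescript
// Minimal model of a hardhat-deploy-style pipeline: each step may skip
// itself, and downstream steps may assume an artifact exists.
type Step = {
  id: string;
  skip?: () => boolean;                  // conditional skip, like `if (!config) return;`
  run: (artifacts: Set<string>) => void;
};

function runPipeline(steps: Step[]): { deployed: Set<string>; errors: string[] } {
  const deployed = new Set<string>();
  const errors: string[] = [];
  for (const step of steps) {
    if (step.skip?.()) continue;         // Script A skips...
    try {
      step.run(deployed);
    } catch (e) {
      errors.push(`${step.id}: ${(e as Error).message}`);
    }
  }
  return { deployed, errors };
}

// Script A conditionally deploys; Script B unconditionally consumes.
const paraswapConfigured = false;        // e.g. no ParaswapRegistry for this network
const steps: Step[] = [
  {
    id: "04_paraswap_adapters",
    skip: () => !paraswapConfigured,
    run: (a) => a.add("ParaSwapLiquiditySwapAdapter"),
  },
  {
    id: "transfer-protocol-ownership",
    run: (a) => {
      if (!a.has("ParaSwapLiquiditySwapAdapter"))
        throw new Error("No deployment found: ParaSwapLiquiditySwapAdapter");
      // ...would transfer ownership here...
    },
  },
];
```

Running `runPipeline(steps)` reproduces the failure class: the adapter step skips cleanly, then the ownership step records an error because the artifact it assumes was never created.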

The Classification System

Aave V3's deployment framework classifies networks and markets using two flags that control which code paths execute:

NETWORK-LEVEL FLAG: hre.config.networks[network].live

Set in hardhat config per network
Controls: oracle source selection, reserve asset resolution

live=true  → use config addresses
live=false → deploy mocks

MARKET-LEVEL FLAG: poolConfig.TestnetMarket

Set in market config (index.ts)
Controls: mock deployment, pool unpausing

TestnetMarket=true  → deploy test infra
TestnetMarket=unset → production path

The critical function that combines these:

// helpers/market-config-helpers.ts
export const isProductionMarket = (poolConfig): boolean => {
  const network = process.env.FORK ? process.env.FORK : hre.network.name;
  return hre.config.networks[network]?.live && !poolConfig.TestnetMarket;
};

When a fork sets live: true for a testnet (because they want to use pre-deployed tokens) but doesn't set TestnetMarket: true (because they forgot, or it wasn't clear they needed to), isProductionMarket() returns true. The deployment skips mock oracles, skips test token deployment, and tries to use config addresses that may be placeholders or otherwise unsuitable for that environment.

Pattern: Every Aave V3 fork I've audited has at least one classification mismatch. The binary live/testnet model doesn't map well to real-world scenarios where you have "real tokens on a testnet" or "production contracts on a custom chain with no Chainlink feeds yet."
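A quick way to audit the classification is to re-implement the predicate as a pure function and enumerate the combinations for your target network. This is a sketch: `NetworkCfg` and `MarketCfg` are simplified stand-ins for the real hardhat and market config types.

```typescript
// Pure re-implementation of the two-flag classification (no hardhat),
// used to enumerate which code path each network/market combination takes.
type NetworkCfg = { live: boolean };
type MarketCfg = { TestnetMarket?: boolean };

const isProductionMarket = (net: NetworkCfg, market: MarketCfg): boolean =>
  net.live && !market.TestnetMarket;

// The trap: real tokens on a testnet (live: true) plus a forgotten
// TestnetMarket flag classifies as production, so mocks are skipped.
const cases: Array<[string, NetworkCfg, MarketCfg]> = [
  ["mainnet production",      { live: true },  {}],
  ["local dev",               { live: false }, { TestnetMarket: true }],
  ["testnet, flag set",       { live: true },  { TestnetMarket: true }],
  ["testnet, flag FORGOTTEN", { live: true },  {}], // misclassified as production
];
for (const [label, net, market] of cases) {
  console.log(label.padEnd(24), isProductionMarket(net, market) ? "PRODUCTION path" : "testnet path");
}
```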


Part 2 — The Bug Classes

Bug Class 1: Conditional Deploy, Unconditional Assume

This is the most common and most impactful deployment script bug. One script conditionally skips deploying a contract. A downstream script unconditionally tries to load that contract.

// deploy/03_periphery_post/04_paraswap_adapters.ts
const paraswapRegistry = getParamPerNetwork(poolConfig.ParaswapRegistry, network);

if (!paraswapRegistry) {
  console.log("[WARNING] Skipping ParaSwap adapters...");
  return; // ✓ Correctly skips
}

await deploy("ParaSwapLiquiditySwapAdapter", { ... });
await deploy("ParaSwapRepayAdapter", { ... });
await deploy("ParaSwapWithdrawSwapAdapter", { ... });
// tasks/misc/transfer-protocol-ownership.ts
// No guard. No try/catch. Assumes adapters exist.
const paraswapSwapAdapter = await getOwnableContract(
  await (
    await hre.deployments.get("ParaSwapLiquiditySwapAdapter") // ← CRASHES
  ).address
);

When TRANSFER_OWNERSHIP=true is set in the post-deploy flow, the ownership transfer script runs and immediately crashes when it tries to load adapters that were never deployed. The ownership handoff fails mid-execution. Depending on which transfers completed before the crash, the deployer EOA may retain admin roles that were supposed to be handed off.

How to find this: For every deployments.get() call, trace backward to the script that deploys that contract. Check if that script has any conditional skip logic. If it does, verify that the consumer has a matching guard. This is a mechanical check — grep for deployments.get and cross-reference with deployment scripts.
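That grep-and-trace loop can be partially automated. Here is a rough sketch in plain TypeScript over raw file contents; the early-`return;` heuristic and the sample paths are illustrative, and a real audit would read the files from disk and still review each hit by hand:

```typescript
// Collect every contract whose deploy script can return early, then flag
// consumers that load it with a plain deployments.get() rather than a
// guarded lookup such as deployments.getOrNull().
function findUnguardedConsumers(
  deployScripts: Record<string, string>,   // path -> file contents
  consumerScripts: Record<string, string>
): string[] {
  // Heuristic: a deploy script containing an early `return;` may skip its deploys.
  const conditionallyDeployed = new Set<string>();
  for (const src of Object.values(deployScripts)) {
    if (!src.includes("return;")) continue;
    for (const m of src.matchAll(/deploy\(\s*["']([\w-]+)["']/g)) {
      conditionallyDeployed.add(m[1]);
    }
  }
  const findings: string[] = [];
  for (const [path, src] of Object.entries(consumerScripts)) {
    for (const m of src.matchAll(/deployments\.get\(\s*["']([\w-]+)["']/g)) {
      if (conditionallyDeployed.has(m[1])) {
        findings.push(`${path}: unguarded deployments.get("${m[1]}")`);
      }
    }
  }
  return findings;
}
```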

Bug Class 2: Ownership Transfer Gaps

The HypurrFi deployment audit (Pashov Audit Group, Feb 2025) found two High-severity ownership bugs: H-01 (usdxlToken ownership not transferred to admin) and H-02 (CapAutomator ownership not transferred). This is the most impactful class of deployment bug: the deployer EOA retains permanent privileged access to critical contracts.

The ownership transfer checklist:

1. List every Ownable contract deployed by the scripts
2. For each, check if transferOwnership() is called
3. Check if transfer-protocol-ownership.ts covers it
4. Verify the target address is the multisig, not the deployer
5. Check that the transfer actually executes (not skipped by a crash)

In a standard Aave V3 deployment, the contracts that need ownership transfer include: PoolAddressesProvider, PoolAddressesProviderRegistry, WrappedTokenGatewayV3, EmissionManager, and any custom contracts the fork adds. The transfer-protocol-ownership.ts task is supposed to handle all of them — but it only handles what's explicitly coded into it.

What to check: If the fork adds ANY custom contract that inherits Ownable (a cap automator, a custom oracle wrapper, a fee collector), verify that the ownership transfer task knows about it. Custom contracts are the most common source of ownership gaps because they're added after the upstream transfer task was written.
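The coverage check itself is a set difference, which is worth writing down because it forces you to enumerate both sides. A minimal sketch, with illustrative names; in practice the first list comes from reading the deploy scripts and the second from reading transfer-protocol-ownership.ts:

```typescript
// Diff the Ownable contracts the scripts deploy against the contracts the
// ownership-transfer task actually knows about.
function ownershipGaps(
  deployedOwnables: string[],
  coveredByTransferTask: string[]
): string[] {
  const covered = new Set(coveredByTransferTask);
  return deployedOwnables.filter((name) => !covered.has(name));
}

// Upstream task covers the standard contracts; the fork added a CapAutomator.
const deployed = [
  "PoolAddressesProvider",
  "PoolAddressesProviderRegistry",
  "WrappedTokenGatewayV3",
  "EmissionManager",
  "CapAutomator", // custom contract, added after the transfer task was written
];
const covered = [
  "PoolAddressesProvider",
  "PoolAddressesProviderRegistry",
  "WrappedTokenGatewayV3",
  "EmissionManager",
];
console.log("Ownership gaps:", ownershipGaps(deployed, covered));
```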

Bug Class 3: Proxy vs Implementation Address Confusion

Aave V3 uses the proxy pattern extensively. Every core contract (Pool, PoolConfigurator, AToken, DebtTokens) has both an implementation and a proxy. Using the wrong one in a configuration call is a subtle but devastating bug.

HypurrFi's H-03 was exactly this: getter functions named _getUsdxlATokenProxy() and _getUsdxlVariableDebtTokenProxy() returned the implementation address instead of the proxy. This cascaded into four downstream misconfigurations — incorrect token addresses in pool reserves, incorrect facilitator configs, and incorrect discount token settings.

// The HypurrFi H-03 pattern — function name says proxy, returns impl
function _getUsdxlATokenProxy() internal view returns (address) {
  return address(usdxlAToken); // ← Returns IMPLEMENTATION, not proxy
}

// In Aave V3 hardhat-deploy, the equivalent pattern to check:
const aTokenImpl = await deployments.get("AToken-Implementation");
// vs
const aTokenProxy = reserveData.aTokenAddress; // from pool.getReserveData()
// These are DIFFERENT addresses. Using one where the other is expected = bug.

How to find this in Aave V3 forks: Search for every place where token implementation addresses (from deployments.get(ATOKEN_IMPL_ID)) and proxy addresses (from pool.getReserveData()) are used. Verify each usage expects the correct type. Init functions take implementations. Configuration functions typically need proxies. Etherscan verification needs to match what was actually deployed.
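One way to mechanize that search, assuming you have already collected the three address sets (implementations from artifacts, proxies from the pool, and what the scripts actually wired in). All names and addresses below are made up for illustration:

```typescript
// Flag impl-where-proxy-expected wiring by comparing what a configuration
// step used against the known implementation and proxy addresses.
type AddressBook = Record<string, string>;

function findImplUsedAsProxy(
  implAddresses: AddressBook,        // e.g. from deployments.get("...-Implementation")
  proxyAddresses: AddressBook,       // e.g. from pool.getReserveData(asset)
  configuredAddresses: AddressBook   // what the scripts wired into config calls
): string[] {
  const findings: string[] = [];
  for (const [name, used] of Object.entries(configuredAddresses)) {
    const impl = implAddresses[name]?.toLowerCase();
    const proxy = proxyAddresses[name]?.toLowerCase();
    if (impl && proxy && used.toLowerCase() === impl && impl !== proxy) {
      findings.push(`${name}: configured with implementation ${used}, expected proxy`);
    }
  }
  return findings;
}
```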

Bug Class 4: Double Deployments

HypurrFi had three findings about contracts deployed twice: M-01 (ProxyAdmin deployed redundantly, breaking upgrade paths), L-01 (InterestRateStrategy deployed twice), and L-08 (AToken and VariableDebtToken deployed twice). The standard Aave V3 hardhat-deploy framework prevents this through func.id deduplication, but custom deployment scripts (especially Foundry-based ones) don't have this safeguard.

The ProxyAdmin double-deploy is the worst variant (M-01): If you deploy a ProxyAdmin, then call a factory function that deploys another ProxyAdmin internally, the second ProxyAdmin becomes the actual admin of the proxy. Your first ProxyAdmin — the one you think controls upgrades — has no power. You've lost upgrade authority to a contract nobody tracks.
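Double deploys are easy to detect mechanically from a flat deployment log or artifact inventory. A sketch; the record shape is illustrative:

```typescript
// Scan a flat list of deployment records for the same logical contract
// deployed more than once (the ProxyAdmin double-deploy pattern).
type DeployRecord = { name: string; address: string };

function findDoubleDeploys(records: DeployRecord[]): Record<string, string[]> {
  const byName = new Map<string, string[]>();
  for (const r of records) {
    byName.set(r.name, [...(byName.get(r.name) ?? []), r.address]);
  }
  const dupes: Record<string, string[]> = {};
  byName.forEach((addrs, name) => {
    // Same logical contract, more than one distinct address = double deploy.
    if (new Set(addrs).size > 1) dupes[name] = addrs;
  });
  return dupes;
}
```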

Bug Class 5: Non-Atomic Initialization

HypurrFi M-03 found that pool reserves were initialized in one script and supplied with initial liquidity in a different script. This gap can create a window for the classic first-depositor inflation attack — an attacker front-runs the initial supply and manipulates the exchange rate.

SAFER (WELL-GUARDED FLOW)
- User-facing actions blocked during deployment
- Reserves initialized before public usage
- Critical config applied before opening access
- Pool or reserves unpaused only after setup
- No practical window for front-running

UNSAFE (CUSTOM SCRIPTS)
- Pool initialized without pause
- Reserves init'd in script A
- Initial supply in script B
- Gap between A and B = attack window
- First depositor can inflate exchange rate

Many Aave V3 fork repos aim to handle this by deferring public usage until the end of the deployment flow, often with an explicit unpause step in post-deploy. But you should not assume the protection exists just because the repo resembles upstream Aave. Verify where pause state is actually enforced, when it is lifted, and whether any user-facing operations are possible between reserve initialization and initial liquidity supply.
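That verification can be framed as an ordering check over the pipeline. A minimal sketch with illustrative step names (`initReserves`, `initialSupply`, `unpause`); a real check would map these onto the fork's actual script tags:

```typescript
// Decide whether a user-facing window exists between reserve initialization
// and public access. Conservative: if we can't find the guard, assume unsafe.
function hasAttackWindow(order: string[]): boolean {
  const init = order.indexOf("initReserves");
  const supply = order.indexOf("initialSupply");
  const unpause = order.indexOf("unpause");
  if (init === -1 || unpause === -1) return true; // can't prove the guard exists
  if (supply === -1) return true;                 // no seeded liquidity at all
  // Unsafe if the pool opens before initial liquidity lands.
  return unpause < supply;
}
```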

Bug Class 6: Uninitialized Proxies

HypurrFi L-04 and L-05 found deployed proxy contracts that were never initialized. An uninitialized proxy has an open initialize() path that can be called with arbitrary parameters — potentially setting privileged roles, pointing the proxy at a malicious implementation, or configuring it with unsafe values.

// The pattern: deploy proxy, forget to initialize

// Step 1: Deploy implementation
usdxlVariableDebtToken = new UsdxlVariableDebtToken(pool);

// Step 2: Deploy proxy pointing to implementation
proxy = new TransparentUpgradeableProxy(usdxlVariableDebtToken, proxyAdmin, "");

// Step 3: ... missing! No initialize() call!
// Anyone can now call proxy.initialize() with arbitrary params

How to check: For every proxy deployment, verify that initialize() is called in the same script or immediately after. In the standard Aave V3 flow, token implementations are pre-initialized defensively, while the reserve token proxies are created and initialized later during initReserves(). Check both layers for any custom proxies the fork adds.
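A sketch of that coverage check, treating the deployment as a list of proxy creations plus a list of calls. The types are illustrative; in a real audit the inputs come from reading the scripts or from a transaction trace:

```typescript
// Given the proxies a script creates and the calls it makes, report every
// proxy that never received an initialize() call.
type Call = { target: string; fn: string };

function uninitializedProxies(proxies: string[], calls: Call[]): string[] {
  const initialized = new Set(
    calls.filter((c) => c.fn === "initialize").map((c) => c.target)
  );
  return proxies.filter((p) => !initialized.has(p));
}
```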


Part 3 — The Audit Checklist

After auditing multiple Aave V3 fork deployment repos, here's the systematic checklist I use. Each item maps to a real bug class.

  1. Map the classification logic. Trace how isProductionMarket(), isTestnetMarket(), hre.config.networks[network].live, and poolConfig.TestnetMarket interact for your target network. Verify they produce the intended code path. Does the fork want mock oracles or real ones? Does the classification agree?
  2. Verify every deployments.get() has a matching deploy. Grep for all deployments.get("ContractName") calls. For each, find the script that deploys it. If that script has conditional skip logic (if (!config) return;), verify the consumer handles the missing artifact. This catches the conditional-deploy/unconditional-assume bug class.
  3. Enumerate all Ownable contracts and trace their ownership. List every contract deployed with Ownable or that takes an owner constructor parameter. Verify that transfer-protocol-ownership.ts covers each one. Check that the transfer task can actually complete without crashing (see Bug Class 1).
  4. Cross-reference deployment artifacts. If the repo includes checked-in deployment artifacts (e.g., deployments/network-name/*.json), read the args field. Verify that addresses passed as constructor arguments actually match the deployed addresses of the contracts they're supposed to reference. This catches wiring bugs that no code review alone will find.
  5. Check oracle deployment for zero/placeholder addresses. Read the ChainlinkAggregator entries in the market config. If any are ZERO_ADDRESS, verify that the deployment flow will deploy mock aggregators instead. If it doesn't (because isProductionMarket() returns true), the AaveOracle may be deployed with unusable price sources. The deployment can still succeed, but price reads may fall through to a missing or unusable fallback oracle and fail at runtime.
  6. Verify proxy initialization completeness. For every proxy deployment (TransparentUpgradeableProxy, InitializableImmutableAdminUpgradeabilityProxy), verify that initialize() is called. Check both the proxy itself and its implementation. Uninitialized proxies are front-runnable.
  7. Check reserve asset addresses exist on the target chain. The market config specifies ReserveAssets per network. If the fork is deploying to a custom chain, verify these token addresses actually exist as deployed contracts on that chain. A config pointing to addresses from mainnet Ethereum on a custom L2 will deploy without errors but produce a non-functional market.
  8. Audit the post-deploy flow end-to-end. Read 01-after-deploy.ts from top to bottom. Every hre.run("task-name") call is a potential failure point. Check each task for assumptions about deployed contracts, config values, and on-chain state. The post-deploy flow is where E-Mode setup, debt ceilings, isolation mode, liquidation fees, and ownership transfer happen — any crash here leaves the market partially configured.
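Checklist item 4 (cross-referencing artifact constructor args) can be sketched as a pure function over the checked-in JSONs. The artifact shape mirrors hardhat-deploy's `address` and `args` fields; the expected-reference map, contract names, and addresses below are all illustrative:

```typescript
// Verify that constructor args which are supposed to reference other deployed
// contracts actually match those contracts' deployed addresses.
type Artifact = { address: string; args: string[] };

function findWiringMismatches(
  artifacts: Record<string, Artifact>,
  expectedRefs: Record<string, Record<number, string>> // contract -> arg index -> referenced contract
): string[] {
  const findings: string[] = [];
  for (const [name, refs] of Object.entries(expectedRefs)) {
    const art = artifacts[name];
    if (!art) continue;
    for (const [idxStr, refName] of Object.entries(refs)) {
      const actual = art.args[Number(idxStr)]?.toLowerCase();
      const expected = artifacts[refName]?.address.toLowerCase();
      if (expected && actual !== expected) {
        findings.push(`${name}: arg[${idxStr}] should be ${refName} (${expected}), got ${actual}`);
      }
    }
  }
  return findings;
}
```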

Part 4 — What's NOT a Bug

Deployment script audits generate a lot of false positives. Knowing what to exclude is as important as knowing what to flag. Here's what I've learned to skip:

Placeholder Configs Are Not Bugs

If a market config has ZERO_ADDRESS for oracle feeds on a testnet, that's usually a TODO, not a vulnerability. A run of explicit zero addresses is self-evidently a set of placeholders; the developer knows. Unless the deployment artifact shows these were actually deployed to a live market, it's informational at best.

Inherited Upstream Config Is Not a Bug

Every Aave V3 fork inherits network configs for Ethereum mainnet, Polygon, Arbitrum, etc. from the upstream repo. If the fork changed admin addresses to their own, that's not "contamination" — that's what forking means. Stale RPC URLs for deprecated testnets (Rinkeby, Kovan, Goerli) in unused network configs are not a finding unless someone actually deploys to those networks.

Init-Time Helpers Are Not Reconciliation Tools

Functions like configureReservesByHelper() skip reserves that are already enabled as collateral. This is intentional idempotency, not a "stale config" bug. Similarly, setup-debt-ceiling only pushes non-zero values because the on-chain default is already zero. These are one-time setup scripts, not ongoing sync mechanisms. Parameter changes after deployment go through governance, not by re-running init scripts.

Risk Parameter Choices Are Not Bugs

If one asset has borrowCap == supplyCap while others use borrowCap = supplyCap / 2, that's a risk parameter decision. If the flash loan fee split is 80/20 protocol-to-LP, that's an economic choice. Unless a parameter creates a concrete vulnerability (e.g., a collateral factor above 100% enabling risk-free borrowing), it's not a security finding. Don't second-guess intentional tuning in an audit report.

The litmus test: Ask yourself — "Does this require the developer to have made an error, or just a choice I disagree with?" Errors are bugs. Choices are not.


Part 5 — Quick-Start Grep Commands

When starting a deployment script audit, these searches give you the fastest signal:

# Find all conditional skips in deployment
grep -rn "return;" deploy/ --include="*.ts" -B5 | grep -i "skip\|warning\|notice"

# Find all deployments.get() calls (potential crash points)
grep -rn "deployments.get(" tasks/ helpers/ --include="*.ts"

# Find all ownership transfer calls
grep -rn "transferOwnership\|transferAdmin" . --include="*.ts"

# Find all ZERO_ADDRESS usage in market configs
grep -rn "ZERO_ADDRESS\|0x0000000" markets/ --include="*.ts"

# Find all proxy deployments and check for initialize calls
grep -rn "Proxy\|proxy" deploy/ --include="*.ts" | grep -i "deploy\|new"

# Find the classification logic
grep -rn "isProductionMarket\|isTestnetMarket\|\.live\|TestnetMarket" . --include="*.ts"

# Find all deploy() calls to build the full contract inventory
grep -rn "await deploy(" deploy/ --include="*.ts"

Final Thoughts

Deployment scripts are the most under-audited part of DeFi. The smart contracts get multiple reviews. The deployment scripts that wire them together? Often zero.

The bugs in this space aren't sophisticated. They're not reentrancy or flash loan exploits. They're wrong addresses, missing ownership transfers, conditional logic that doesn't agree across scripts, and config flags that don't match reality. They're wiring bugs — and they're exactly the kind of thing that gets missed because everyone assumes the deployment "just works."

If you're auditing an Aave V3 fork, don't stop at the contracts. Read the deployment scripts. Cross-reference the artifacts. Check who owns what after the deployment completes. The most dangerous state is a protocol that appears to be deployed correctly but has a silent misconfiguration waiting to surface at the worst possible moment.

The contracts are the engine. The deployment scripts are the assembly line. A perfect engine, assembled wrong, doesn't run.

Context: Learnings from auditing multiple Aave V3 fork deployment repositories
Public reference: HypurrFi Security Review (Pashov Audit Group, Feb 2025)
Key bug classes: Conditional deploy/unconditional assume, ownership gaps, proxy/impl confusion, non-atomic init, uninitialized proxies
Framework: Aave V3 hardhat-deploy pipeline (20+ ordered scripts)
Follow: @thepantherplus

📝 Aave V3 deployment patterns evolve with each fork. If any tip here is outdated or you've found additional deployment bug classes, please reach out on X so I can update this article.