Deep Audit

FirePan's deep audit feature provides comprehensive, autonomous security analysis that goes far beyond pattern matching.

Overview

While Surface Scan quickly identifies potential issues, Deep Audit performs a thorough security analysis using our autonomous agent technology.

| Feature | Surface Scan | Deep Audit |
|---|---|---|
| Analysis depth | Pattern matching + AI verification | Multi-pass autonomous analysis |
| Time | ~2 seconds | Minutes to hours |
| LLM calls | 5 (default) | 1000+ |
| Use case | Quick triage | Pre-launch security |
| Output | Risk score + findings | Comprehensive report with PoCs |

How Deep Audit Works

Knowledge Graph Construction

The audit agent builds a knowledge graph of your contracts:

```
Knowledge Graph
├── Contracts
│   ├── Inheritance relationships
│   ├── Interface implementations
│   └── Library dependencies
├── Functions
│   ├── Call graphs (internal + external)
│   ├── State variable access patterns
│   └── Modifier chains
└── Data Flow
    ├── User input → state changes
    ├── Cross-contract interactions
    └── Privilege escalation paths
```
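
The relationships above can be sketched as a small adjacency structure. The class and node names below are illustrative, not FirePan's internal representation:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Minimal sketch of a contract knowledge graph (illustrative only)."""

    def __init__(self):
        # Maps (node, relation) -> list of target nodes.
        self.edges = defaultdict(list)

    def add(self, src, relation, dst):
        self.edges[(src, relation)].append(dst)

    def neighbors(self, src, relation):
        return self.edges[(src, relation)]

    def reachable(self, start, relation):
        """Transitively follow one relation type, e.g. a call graph walk."""
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            for nxt in self.neighbors(node, relation):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

# Record an inheritance edge and a call chain, then walk the call graph.
g = KnowledgeGraph()
g.add("Vault", "inherits", "Ownable")
g.add("Vault.deposit", "calls", "Vault._update")
g.add("Vault._update", "calls", "ERC20.transferFrom")
callees = g.reachable("Vault.deposit", "calls")
```

Separating relation types lets the audit agent answer targeted questions ("every function transitively reachable from an external entry point") without traversing unrelated edges.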

Autonomous Analysis Agents

Multiple specialized agents analyze your code:

| Agent | Focus |
|---|---|
| Strategist | High-level attack surface analysis |
| Scout | Deep code exploration and pattern recognition |
| Exploiter | Exploit hypothesis generation and validation |
| Verifier | Finding confirmation and false positive elimination |
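
Conceptually, the agents form a pipeline in which each stage consumes the previous stage's output. A minimal sketch (the function bodies are stand-ins; FirePan's actual agents are LLM-driven, and the Exploiter stage is omitted here for brevity):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    confirmed: bool = False

def strategist(functions):
    # Strategist: flag externally reachable functions as the attack surface.
    return [f for f in functions if f["visibility"] == "external"]

def scout(surface):
    # Scout: explore flagged functions and raise candidate findings.
    return [Finding(f"unchecked effects in {fn['name']}")
            for fn in surface if not fn["has_checks"]]

def verifier(findings):
    # Verifier: confirm findings and drop false positives (stubbed here:
    # everything passes; a real verifier would re-trace each finding).
    for f in findings:
        f.confirmed = True
    return findings

functions = [
    {"name": "withdraw", "visibility": "external", "has_checks": False},
    {"name": "_sweep",   "visibility": "internal", "has_checks": False},
]
report = verifier(scout(strategist(functions)))
```

Note how `_sweep` never reaches the Scout: narrowing the surface early keeps the expensive later stages focused on code an attacker can actually trigger.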

Exploit Hypothesis Generation

The agent generates and tests hypotheses:

  1. Identify attack surfaces - External functions, privileged operations
  2. Generate hypotheses - "What if an attacker could..."
  3. Trace execution paths - Follow the code to validate/invalidate
  4. Build proof of concepts - Where feasible, create reproducible exploits
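
The four steps above can be sketched as a single generate-and-test loop. All names and the re-entrancy heuristic below are illustrative, not the agent's real logic:

```python
def audit_loop(functions, trace_paths):
    """Generate-and-test loop following steps 1-4 (illustrative sketch)."""
    # 1. Identify attack surfaces: externally callable functions.
    surfaces = [f["name"] for f in functions if f["external"]]
    # 2. Generate hypotheses: "what if an attacker re-entered this?"
    hypotheses = [f"attacker re-enters {name}" for name in surfaces]
    # 3. Trace execution paths: keep only hypotheses some trace supports.
    validated = [h for h in hypotheses
                 if any(h.rsplit(" ", 1)[-1] in path for path in trace_paths)]
    # 4. Build PoCs for whatever survived validation.
    return [{"hypothesis": h, "poc": "call sequence reproducing " + h}
            for h in validated]

findings = audit_loop(
    [{"name": "withdraw", "external": True},
     {"name": "_burn", "external": False}],
    ["withdraw -> token.transfer -> fallback -> withdraw"],
)
```

The key property is that step 3 is a filter: a hypothesis with no supporting execution path is discarded rather than reported, which is what keeps hypothesis-driven auditing from flooding the report with speculation.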

Running a Deep Audit

Via CLI

```shell
# Create a project
firepan project create myproject /path/to/contracts

# Build the knowledge graph
firepan graph build myproject --init --iterations 1

# Run the autonomous audit
firepan agent audit myproject

# Generate the report
firepan report myproject --format html
```

Via Platform

  1. Connect your repository in the dashboard
  2. Click "Run Deep Audit" on your project
  3. Monitor progress in real-time
  4. Download the comprehensive report

What Deep Audit Finds

Beyond Pattern Matching

Deep Audit catches issues that static patterns miss:

| Category | Examples |
|---|---|
| Logic bugs | Incorrect state transitions, missing validations |
| Economic attacks | Price manipulation, flash loan exploits |
| Access control | Privilege escalation, admin backdoors |
| Integration risks | Oracle manipulation, callback reentrancy |
| Upgrade risks | Storage collisions, initialization issues |
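
To see why logic bugs evade pattern matching, consider this toy vesting model (a Python stand-in for a Solidity contract, purely illustrative). No single line matches a known vulnerability signature; the bug only appears when the state machine is traced as a whole:

```python
class Vesting:
    """Toy vesting model with a logic bug: claim() never checks the cliff."""

    def __init__(self, total, cliff_time):
        self.total = total
        self.cliff = cliff_time
        self.claimed = 0

    def claim(self, now, amount):
        # BUG: nothing here enforces `now >= self.cliff`, so tokens can be
        # claimed before vesting starts. Each individual line is benign;
        # only the missing transition guard makes the contract wrong.
        releasable = self.total - self.claimed
        if amount > releasable:
            raise ValueError("amount exceeds releasable balance")
        self.claimed += amount
        return amount

v = Vesting(total=100, cliff_time=1_000)
early = v.claim(now=0, amount=100)  # succeeds before the cliff
```

A pattern matcher sees a bounds check and an accumulator; an agent that models the intended state machine sees a transition fired from a state where it should be forbidden.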

Report Contents

A deep audit report includes:

  • Executive Summary - High-level findings and risk assessment
  • Detailed Findings - Each issue with severity, impact, and remediation
  • Proof of Concepts - Reproducible exploit code where applicable
  • Recommended Invariants - Tests you should add to your suite
  • Code Quality Notes - Best practices and improvements
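
A recommended invariant typically takes the form of a property that must hold after every state change. A minimal sketch (the helper name and ledger shape are hypothetical):

```python
def supply_is_conserved(balances, total_supply):
    """Invariant: per-account balances always sum to the recorded supply."""
    return sum(balances.values()) == total_supply

# Holds after a transfer that debits and credits symmetrically...
balances = {"alice": 60, "bob": 40}
ok = supply_is_conserved(balances, 100)

# ...and is violated by a mint that forgets to update total_supply.
balances["carol"] = 10
broken = supply_is_conserved(balances, 100)
```

Wiring checks like this into a fuzzing or property-testing suite turns a one-off audit finding into a regression guard that runs on every change.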

SaaS Platform Quotas

Deep audits are included in all platform tiers:

| Tier | Deep Audits/Month |
|---|---|
| Starter | 1 |
| Professional | 3 |
| Enterprise | 10 |

Need more? Contact sales for custom quotas.

Boutique Audits

For critical launches requiring human oversight, our Boutique Audit service combines:

  • AI-powered deep analysis (1000+ LLM calls)
  • Human validation by senior auditors
  • Extended review periods
  • Direct communication channels
  • Fix verification reruns

Best Practices

When to Use Deep Audit

  • Pre-launch - Before deploying new contracts
  • Major upgrades - Significant code changes
  • Integration changes - New external dependencies
  • Periodic review - Quarterly security checks

Preparing for Audit

  1. Clean build - Ensure your contracts compile without errors
  2. Documentation - NatSpec comments help the AI understand intent
  3. Test coverage - Existing tests provide context for expected behavior
  4. Known issues - Document any accepted risks or intentional patterns

Next Steps