
How It Works

QuantumVerifi uses a 7-engine pipeline in which each engine builds on the output of the previous one. Tests run in isolated cloud sandboxes, failures are automatically healed, and every result trains a model specific to your codebase.

The Pipeline

When you submit a repository, URL, or API spec, Scout processes it through these engines:

Engine 1: Analysis

Scout clones your repository (shallow clone for speed), then detects:

  • Languages — 165+ languages supported via universal AST parsing
  • Frameworks — React, Express, Django, Spring, Rails, and hundreds more
  • Entry points — API routes, page components, service classes
  • Dependencies — Package manifests, lock files, version constraints

This context feeds into every subsequent engine.
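
The detection step above can be sketched as a simple manifest scan. This is a minimal illustration, not Scout's actual implementation: the real engine uses Tree-sitter AST parsing, and the manifest-to-ecosystem mapping below is an assumed example.

```python
from pathlib import Path

# Illustrative mapping of package manifests to ecosystems. The real engine
# detects 165+ languages via AST parsing; this sketch only checks filenames.
MANIFESTS = {
    "package.json": ("JavaScript/TypeScript", "npm"),
    "requirements.txt": ("Python", "pip"),
    "go.mod": ("Go", "go modules"),
    "pom.xml": ("Java", "Maven"),
    "Gemfile": ("Ruby", "Bundler"),
}

def detect_ecosystems(repo_root: str) -> list[tuple[str, str]]:
    """Return (language, package manager) pairs found in a cloned repo."""
    root = Path(repo_root)
    return [info for name, info in MANIFESTS.items() if (root / name).exists()]
```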

Engine 2: Execution

The execution engine generates tests using AI and runs them in isolated cloud sandboxes — not on your machine, not in shared infrastructure. Each test run gets its own container with the correct language runtime, dependencies installed, and a clean filesystem.

Supported test types:

  • Unit and integration tests (Jest, Pytest, Go test, JUnit, and more)
  • End-to-end tests (Playwright, Cypress)
  • API contract tests (against live or mocked endpoints)
  • Performance tests (k6 load testing)
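
The per-run isolation described above can be sketched as a container invocation. This is a hypothetical sketch using standard Docker CLI flags; Scout's actual sandbox orchestration is not documented here, and the no-network policy is an assumed example.

```python
def sandbox_command(image: str, workdir: str, test_cmd: list[str]) -> list[str]:
    """Build a `docker run` invocation for one isolated test run.

    Each run gets a fresh container (--rm), a read-only view of the
    sources, and no network access (assumed policy for illustration).
    """
    return [
        "docker", "run", "--rm",
        "--network", "none",          # no shared infrastructure
        "-v", f"{workdir}:/app:ro",   # clean, read-only filesystem mount
        "-w", "/app",
        image,                        # correct language runtime per repo
        *test_cmd,
    ]
```

For a Jest suite this might produce `docker run --rm --network none -v /repo:/app:ro -w /app node:20 npx jest`, with one such container per test run.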

Engine 3: Self-Healing

When tests fail, Scout doesn’t just report the failure. It feeds the error back to the AI with full context — the test code, the error message, the stack trace — and regenerates the test. This cycle runs up to 3 times per test.

Common fixes applied automatically:

  • Missing imports and dependencies
  • Incorrect selectors or API endpoints
  • Timing issues in async tests
  • Type mismatches
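
The heal cycle can be sketched as a bounded retry loop. The `generate` and `execute` callables below stand in for the AI generator and the execution engine; their signatures are assumptions for illustration.

```python
def run_with_healing(generate, execute, context, max_heals=3):
    """Run a generated test, feeding failures back to the generator.

    `generate(context, failure)` returns test code; `execute(code)` returns
    (passed, error). The error carries the full context described above:
    test code, error message, stack trace. Up to `max_heals` regenerations.
    """
    code = generate(context, failure=None)
    for heal in range(max_heals + 1):
        passed, error = execute(code)
        if passed:
            return code, heal          # heal == number of cycles used
        if heal < max_heals:
            code = generate(context, failure=error)
    return None, max_heals             # still failing after all cycles
```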

Engine 4: Visual Intelligence

For URL-mode analyses, Scout uses V-JEPA 2 (Meta’s visual AI model) to understand your UI:

  • Detects page regions (headers, sidebars, forms, modals)
  • Identifies interactive elements and their importance
  • Generates visual embeddings for regression detection
  • Falls back to Claude Vision when V-JEPA is unavailable
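
Regression detection over visual embeddings typically reduces to a similarity comparison against a baseline. The sketch below shows the general idea with cosine similarity; the actual metric and the 0.95 threshold are assumptions, not documented defaults.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two embedding vectors; values near 1.0 mean the UI
    renders are visually close, lower values suggest a change."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def is_visual_regression(baseline, current, threshold=0.95):
    # Threshold is an illustrative value, not a product default.
    return cosine_similarity(baseline, current) < threshold
```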

Engine 5: Evidence

Every test execution produces a tamper-proof evidence chain:

  • Each event is SHA-256 hashed and linked to the previous event
  • Chain verification detects any modification after the fact
  • Supports compliance frameworks: SOC 2, ISO 27001, HIPAA

Evidence is stored as immutable artifacts and can be exported as audit reports.
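
The hash-linked chain described above can be sketched in a few lines: each event's SHA-256 hash covers its payload plus the previous event's hash, so editing any event invalidates every link after it. The event schema here is illustrative.

```python
import hashlib
import json

def append_event(chain: list[dict], payload: dict) -> dict:
    """Append a SHA-256-hashed event linked to the previous event."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64  # genesis link
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    event = {"prev": prev_hash, "payload": payload,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    chain.append(event)
    return event

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every link; any after-the-fact modification breaks it."""
    prev_hash = "0" * 64
    for event in chain:
        body = json.dumps({"prev": prev_hash, "payload": event["payload"]},
                          sort_keys=True)
        if (event["prev"] != prev_hash
                or hashlib.sha256(body.encode()).hexdigest() != event["hash"]):
            return False
        prev_hash = event["hash"]
    return True
```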

Engine 6: Training

After each analysis, Scout harvests the results — what worked, what failed, what was healed — and uses this data to fine-tune a QLoRA adapter specific to your tenant. This means:

  • Your 50th analysis generates better tests than your 1st
  • The model learns your codebase patterns, naming conventions, and test styles
  • Training runs asynchronously and doesn’t slow down your analysis
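
Harvesting results into fine-tuning data might look like the sketch below: passing tests become prompt/completion records, with healed tests flagged since the healed version is a corrected target. The record fields are invented for illustration; the actual QLoRA training format is not documented here.

```python
import json

def harvest_training_records(analysis_results: list[dict]) -> list[str]:
    """Turn analysis outcomes into JSON-lines training records.

    Field names (`source_context`, `final_test_code`, `heal_cycles`) are
    hypothetical; only tests that ultimately passed are harvested.
    """
    records = []
    for result in analysis_results:
        if result.get("passed"):
            records.append(json.dumps({
                "prompt": result["source_context"],
                "completion": result["final_test_code"],
                # Healed tests carry a failure-to-fix signal worth learning.
                "healed": result.get("heal_cycles", 0) > 0,
            }, sort_keys=True))
    return records
```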

Engine 7: Inference

When a trained adapter exists for your tenant, Scout routes generation requests through your custom model. This produces tests that are more aligned with your codebase from the first attempt, reducing self-healing cycles and improving pass rates.
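
The routing decision reduces to a per-tenant lookup with a base-model fallback. The class below is a minimal sketch under assumed interfaces (adapters and the base model as callables); it is not Scout's actual inference API.

```python
class GenerationRouter:
    """Route generation requests to a tenant's adapter when one exists.

    Adapter registry and callable model interface are assumptions
    made for this illustration.
    """

    def __init__(self, base_model):
        self.base_model = base_model
        self.adapters = {}  # tenant_id -> fine-tuned adapter

    def register(self, tenant_id, adapter):
        self.adapters[tenant_id] = adapter

    def generate(self, tenant_id, prompt):
        # Custom model if trained, shared base model otherwise.
        model = self.adapters.get(tenant_id, self.base_model)
        return model(prompt)
```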

Architecture Principles

  • Universal language support — Tree-sitter AST parsing handles 165+ languages without per-language configuration
  • Isolated execution — Every test runs in a fresh sandbox with no shared state
  • Durable workflows — Long-running analyses survive infrastructure restarts
  • Multi-provider AI — Automatic failover across LLM providers ensures availability
  • Per-tenant learning — Your data trains your model. No cross-tenant data sharing.