
Training Engine

The Training Engine is what makes QuantumVerifi improve over time. After each analysis, Scout harvests what worked and what didn’t, then uses that data to fine-tune a model specific to your organisation.

Available on Scale and Enterprise plans.

How It Works

  1. Harvest — After each analysis, successful test patterns, self-healing corrections, and validation outcomes are collected into a training dataset
  2. Train — When enough data accumulates, a QLoRA adapter is fine-tuned on top of a base code model (e.g., Qwen2.5-Coder-7B)
  3. Activate — The trained adapter is activated for your tenant, routing future generation requests through your custom model
  4. Infer — New analyses use your adapter for context-aware test generation
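The four stages above can be sketched as a simple loop. This is an illustrative sketch only; every name, structure, and threshold here (other than the 5-analysis minimum and the base model stated in this page) is hypothetical, not the actual QuantumVerifi API.

```python
from dataclasses import dataclass, field

@dataclass
class TrainingDataset:
    # Hypothetical container for harvested training examples.
    examples: list = field(default_factory=list)

    def harvest(self, analysis):
        # 1. Harvest: collect successful tests and self-healing fixes.
        self.examples.extend(analysis["successful_tests"])
        self.examples.extend(analysis["self_healing_fixes"])

MIN_ANALYSES = 5  # minimum completed analyses before training (from this page)

def maybe_train(dataset, analyses_completed):
    # 2. Train: fine-tune a QLoRA adapter once enough data accumulates.
    if analyses_completed < MIN_ANALYSES:
        return None
    return {"adapter_id": "tenant-adapter-v1",      # illustrative id
            "base_model": "Qwen2.5-Coder-7B",
            "trained_on": len(dataset.examples)}

def activate(routes, tenant_id, adapter, operations):
    # 3. Activate: route the tenant's future requests through the adapter.
    for op in operations:
        routes[(tenant_id, op)] = adapter

def infer(routes, tenant_id, operation, base_model="Qwen2.5-Coder-7B"):
    # 4. Infer: use the tenant's adapter if one is active, else the base model.
    adapter = routes.get((tenant_id, operation))
    return adapter["adapter_id"] if adapter else base_model
```

Note how step 4 falls back to the shared base model whenever no adapter is active for a given operation, so activation is purely additive.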

What Improves

With a trained adapter, Scout generates tests that:

  • Match your codebase’s naming conventions and test patterns
  • Use the correct imports and framework idioms from the start
  • Require fewer self-healing cycles (tests pass on first attempt more often)
  • Cover edge cases specific to your application domain

Training Dashboard

The training dashboard at Settings > Training shows:

  • Active adapters — which operations are using your custom model
  • Training jobs — status, metrics, and hyperparameters for each training run
  • Metrics — routing statistics showing how often your adapter is used

Adapter Operations

Each adapter can be activated for specific operations:

Operation             What it generates
GenerateTests         Unit and integration tests
FixTests              Self-healing corrections
GeneratePageObjects   Page object models for E2E tests
GenerateAPITests      API contract test suites

You can activate an adapter for one or more operations, and deactivate it at any time.
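Per-operation activation amounts to a routing table keyed by operation. A minimal sketch, assuming a registry abstraction that is not part of the actual product API; only the four operation names come from the table above.

```python
# Operations listed in the table above.
OPERATIONS = {"GenerateTests", "FixTests", "GeneratePageObjects", "GenerateAPITests"}

class AdapterRegistry:
    """Hypothetical per-tenant registry mapping operations to adapters."""

    def __init__(self):
        self._active = {}  # operation name -> adapter id

    def activate(self, adapter_id, operations):
        # Activate one adapter for one or more operations.
        for op in operations:
            if op not in OPERATIONS:
                raise ValueError(f"unknown operation: {op}")
            self._active[op] = adapter_id

    def deactivate(self, operations):
        # Deactivation can happen at any time; missing entries are ignored.
        for op in operations:
            self._active.pop(op, None)

    def adapter_for(self, operation):
        # Operations without an active adapter fall back to the base model.
        return self._active.get(operation, "base-model")
```

The fallback in `adapter_for` reflects the behavior described above: deactivating an adapter simply returns that operation to the shared base model.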

Training Requirements

  • Minimum data: At least 5 completed analyses for meaningful training
  • Compute: Training runs on GPU infrastructure (managed in the cloud, or on your own GPUs for self-hosted deployments)
  • Duration: A typical training run takes 10-30 minutes depending on dataset size
  • Storage: Trained adapters are stored as compact LoRA weights (typically 50-200 MB)
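The requirements above translate into a training-job configuration gated on the data minimum. In this sketch, only the 5-analysis minimum, the QLoRA method, and the base model come from this page; every hyperparameter value is an assumption for illustration.

```python
MIN_COMPLETED_ANALYSES = 5  # stated minimum for meaningful training

def build_training_config(completed_analyses, dataset_size):
    # Refuse to train before the stated data minimum is met.
    if completed_analyses < MIN_COMPLETED_ANALYSES:
        raise ValueError(
            f"need at least {MIN_COMPLETED_ANALYSES} completed analyses"
        )
    return {
        "base_model": "Qwen2.5-Coder-7B",
        "method": "qlora",       # 4-bit quantized base + LoRA adapter weights
        "lora_rank": 16,         # assumed value; rank drives adapter size
        "lora_alpha": 32,        # assumed value
        "epochs": 3,             # assumed value
        "dataset_size": dataset_size,
    }
```

Because only the small LoRA weight matrices are saved (the quantized base model is shared), the resulting artifact stays in the 50-200 MB range cited above rather than the multi-gigabyte size of a full model checkpoint.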
