Research Pipeline
Every strategy follows the same structured path from idea to deployment. Each stage produces measurable outputs, and every gate must be passed before the strategy advances.
No shortcuts. No stages skipped. Every result is reproducible.
8 stages · Pass/fail gates at every transition · Full audit trail
Part of the Quanthop strategy research pipeline
Each stage produces outputs that feed downstream analysis. Gates prevent strategies from advancing until validation criteria are met.
Strategies are written in QSL (Quanthop Strategy Language) with a structured lifecycle: define parameters, initialise indicators, execute bar-by-bar logic. Version-controlled code ensures every research iteration is reproducible.
Output: Validated QSL strategy code with parameter schema
Gate: Code must parse and validate without errors
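QSL's actual syntax is not shown on this page, so the following is only a minimal Python sketch of the three-phase lifecycle described above: declare parameters, initialise indicator state, then run bar-by-bar logic. The strategy, its parameter names, and the schema-style check are all hypothetical illustrations, not Quanthop code.

```python
from dataclasses import dataclass

@dataclass
class Params:
    # Hypothetical parameters with defaults; a real schema would feed
    # these ranges into downstream optimization.
    fast: int = 10
    slow: int = 30

class MovingAverageCross:
    def __init__(self, params: Params):
        # Schema-style validation: reject impossible parameter sets early.
        if params.fast >= params.slow:
            raise ValueError("fast must be < slow")
        self.p = params
        self.fast_buf: list[float] = []   # indicator state
        self.slow_buf: list[float] = []
        self.position = 0

    def on_bar(self, close: float) -> int:
        """Bar-by-bar logic: return target position (1 = long, 0 = flat)."""
        self.fast_buf = (self.fast_buf + [close])[-self.p.fast:]
        self.slow_buf = (self.slow_buf + [close])[-self.p.slow:]
        if len(self.slow_buf) < self.p.slow:
            return self.position          # warm-up period: hold current state
        fast_ma = sum(self.fast_buf) / len(self.fast_buf)
        slow_ma = sum(self.slow_buf) / len(self.slow_buf)
        self.position = 1 if fast_ma > slow_ma else 0
        return self.position
```

Structuring a strategy this way keeps parameters, state, and per-bar logic separable, which is what makes versioned, reproducible research iterations possible.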
Initial performance evaluation against historical data. Produces core metrics including return, Sharpe ratio, drawdown, win rate, and expectancy. Establishes the performance baseline before any optimization.
Output: Backtest statistics, equity curve, trade log
Gate: Strategy must produce statistically meaningful trade count
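The core metrics named above can be computed directly from a per-trade P&L log. This sketch uses common textbook definitions (per-trade, unannualised); Quanthop's exact formulas may differ.

```python
import math

def backtest_stats(trade_pnls):
    """Baseline metrics from a list of per-trade P&L values."""
    n = len(trade_pnls)
    wins = [p for p in trade_pnls if p > 0]
    losses = [p for p in trade_pnls if p <= 0]   # zero-P&L trades count as losses
    win_rate = len(wins) / n
    avg_win = sum(wins) / len(wins) if wins else 0.0
    avg_loss = sum(losses) / len(losses) if losses else 0.0
    # Expectancy: average P&L per trade.
    expectancy = win_rate * avg_win + (1 - win_rate) * avg_loss
    # Equity curve and maximum drawdown from cumulative P&L.
    equity, peak, max_dd = 0.0, 0.0, 0.0
    for p in trade_pnls:
        equity += p
        peak = max(peak, equity)
        max_dd = max(max_dd, peak - equity)
    mean = sum(trade_pnls) / n
    var = sum((p - mean) ** 2 for p in trade_pnls) / (n - 1)
    # Per-trade Sharpe ratio (mean over standard deviation), unannualised.
    sharpe = mean / math.sqrt(var) if var > 0 else float("nan")
    return {"trades": n, "win_rate": win_rate, "expectancy": expectancy,
            "max_drawdown": max_dd, "sharpe": sharpe, "total_return": equity}
```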
Systematic exploration of the parameter search space. The engine evaluates every combination and scores neighbourhood stability, not just peak performance. Identifies robust parameter regions rather than isolated optima.
Output: Parameter surface, stability scores, cluster analysis
Gate: Stable region must exist with consistent performance
Extracts contiguous parameter regions and scores their robustness. Evaluates performance degradation from optimal to neighbouring parameter sets. Identifies whether the strategy depends on precise tuning or survives perturbation.
Output: Stability score, region boundaries, degradation profile
Gate: Stability score must exceed minimum threshold
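"Scores neighbourhood stability, not just peak performance" can be made concrete with a small sketch: score each cell of a parameter grid by the mean performance of its neighbourhood, so a sharp isolated peak ranks below a broad plateau. The grid layout and window size here are illustrative assumptions, not Quanthop's actual scoring.

```python
def neighbourhood_stability(grid):
    """Score each cell of a 2-D parameter performance grid by the mean
    of the 3x3 neighbourhood around it (clipped at the edges)."""
    rows, cols = len(grid), len(grid[0])
    stability = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            neigh = [grid[x][y]
                     for x in range(max(0, i - 1), min(rows, i + 2))
                     for y in range(max(0, j - 1), min(cols, j + 2))]
            stability[i][j] = sum(neigh) / len(neigh)
    return stability
```

For example, a lone spike of 9 surrounded by zeros scores 1.0, while a flat plateau of 3s scores 3.0: the lower but stable region wins, which is exactly the behaviour the gate rewards.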
Tests whether the optimized parameters generalize across multiple assets. A strategy that works on one asset but collapses on correlated instruments is likely overfit to noise rather than capturing a durable pattern.
Output: Multi-asset performance matrix, consistency score
Gate: Performance must hold across minimum asset count
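A minimal sketch of this gate, assuming per-asset Sharpe ratios as the input; the function name, thresholds, and instrument symbols are all hypothetical placeholders, not Quanthop's real criteria.

```python
def cross_asset_consistency(sharpes_by_asset, min_sharpe=0.5, min_assets=3):
    """Gate sketch: the strategy must clear a Sharpe floor on at least
    `min_assets` instruments. Returns (consistency score, gate passed)."""
    passing = {a: s for a, s in sharpes_by_asset.items() if s >= min_sharpe}
    consistency = len(passing) / len(sharpes_by_asset)
    return consistency, len(passing) >= min_assets
```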
Rolling out-of-sample validation with anchored or sliding windows. Each window re-optimizes parameters on in-sample data, then evaluates on unseen out-of-sample data. Walk-Forward Efficiency measures how much in-sample performance survives.
Output: OOS returns, WFE score, window-by-window breakdown
Gate: WFE and OOS Sharpe must meet defined thresholds
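The sliding-window variant can be sketched as follows: each window optimizes on an in-sample slice, evaluates on the adjacent out-of-sample slice, and WFE is the ratio of mean out-of-sample score to mean in-sample score. (An anchored variant would instead fix the in-sample start and let the slice grow.) The function signatures here are assumptions for illustration.

```python
def walk_forward(returns, is_len, oos_len, optimise, evaluate):
    """Sliding-window walk-forward sketch. `optimise` fits parameters on
    the in-sample slice; `evaluate` scores a slice under those parameters.
    WFE = mean OOS score / mean IS score."""
    is_scores, oos_scores, windows = [], [], []
    start = 0
    while start + is_len + oos_len <= len(returns):
        is_slice = returns[start:start + is_len]
        oos_slice = returns[start + is_len:start + is_len + oos_len]
        params = optimise(is_slice)                 # re-optimize per window
        is_s = evaluate(is_slice, params)
        oos_s = evaluate(oos_slice, params)
        is_scores.append(is_s)
        oos_scores.append(oos_s)
        windows.append((start, params, is_s, oos_s))
        start += oos_len                            # slide by the OOS length
    wfe = (sum(oos_scores) / len(oos_scores)) / (sum(is_scores) / len(is_scores))
    return wfe, windows
```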
Strategies that pass all validation gates enter deployment readiness. The validated parameter set, research lineage, and performance envelope are preserved and carried forward into live monitoring.
Output: Deployment-ready configuration with validation provenance
Gate: All upstream gates must be passed
Continuous post-deployment validation using the same statistical framework applied during research. Monitors live performance against the walk-forward baseline, detects structural drift, and triggers controlled re-optimization when degradation is confirmed.
Output: Health score, deviation alerts, re-optimization triggers
Gate: Automated state transitions based on statistical thresholds
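One way to picture drift detection against a walk-forward baseline: z-score the live mean return against the baseline distribution and map it to a monitoring state. The state names and thresholds below are illustrative assumptions, not Quanthop's actual transition rules.

```python
import math

def health_check(live_returns, baseline_mean, baseline_std,
                 warn_z=1.5, fail_z=2.5):
    """Drift sketch: how many standard errors the live mean return has
    fallen below the walk-forward baseline mean."""
    n = len(live_returns)
    live_mean = sum(live_returns) / n
    z = (baseline_mean - live_mean) / (baseline_std / math.sqrt(n))
    if z >= fail_z:
        return "reoptimise", z   # degradation confirmed: trigger the cycle
    if z >= warn_z:
        return "watch", z        # deviation alert, no action yet
    return "healthy", z
```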
Stage 01
Strategies are written in QSL with a structured lifecycle: define parameters, initialise indicators, execute bar-by-bar logic. The IDE validates code in real-time, tracks every version, and enforces a parameter schema that feeds directly into downstream optimization.
Stage 02
Before any optimization, the strategy runs against historical data with default parameters. This baseline measures raw strategy behaviour and establishes whether the core logic has merit before parameter tuning begins.
Metrics are structured to prevent outcome-anchoring. Sharpe ratio, win rate, and expectancy are presented as primary measures. Total return is deliberately demoted to a footnote.
Gate: Trade count must be statistically meaningful
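One way to make "statistically meaningful trade count" concrete is to require both a minimum sample size and a t-statistic on the mean trade P&L. The thresholds below are illustrative defaults, not Quanthop's actual gate values.

```python
import math

def trade_count_gate(trade_pnls, min_trades=30, min_t=2.0):
    """Gate sketch: pass only if there are enough trades AND the mean
    trade P&L is distinguishable from zero (one-sample t-statistic)."""
    n = len(trade_pnls)
    if n < min_trades:
        return False, 0.0
    mean = sum(trade_pnls) / n
    var = sum((p - mean) ** 2 for p in trade_pnls) / (n - 1)
    t = mean / math.sqrt(var / n) if var > 0 else float("inf")
    return t >= min_t, t
```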
Stages 03 – 04
The optimization engine evaluates every parameter combination and scores the neighbourhood around each set. Strategies that depend on precise tuning are flagged. Strategies with broad, stable performance regions advance.
Stability analysis extracts contiguous parameter regions and measures how performance degrades as parameters shift. This directly predicts how the strategy will behave when deployed with parameters that differ slightly from the tested optimum.
Stages 05 – 06
Optimized parameters are tested across correlated assets to verify generalization. Walk-forward analysis then evaluates performance on truly unseen data through rolling in-sample and out-of-sample windows.
Walk-Forward Efficiency (WFE) measures the ratio of out-of-sample to in-sample performance. A high WFE indicates that optimized parameters retain their edge when applied to new market conditions.
Stage 07
When a strategy passes all validation gates, the deployment package includes the validated parameter set, research lineage, and expected performance envelope. Nothing is lost between research and monitoring.
The readiness checklist verifies that upstream gates are satisfied before allowing transition to live monitoring. This prevents deployment of strategies that have not completed the full validation pipeline.
Stage 08
Adaptive Flow monitors deployed strategies using the same statistical framework applied during research. Live performance is continuously compared against the walk-forward baseline to detect structural drift early.
When degradation is confirmed, the system triggers a controlled re-optimization cycle. The new parameters replace the deployed set, and monitoring restarts. This creates a closed feedback loop between research and deployment.
A complete strategy research lifecycle
Every stage is designed to eliminate a specific class of failure. Strategies that reach deployment have survived parameter perturbation, cross-asset testing, out-of-sample evaluation, and continuous monitoring.
8 pipeline stages. Pass/fail gates at every transition. Full audit trail.