How it works

From market lines to a quant-grade workflow: ingestion → features → training → predictions → audit.

1. Data ingestion
Pull historical play-by-play, player usage, team context, and sportsbook line snapshots. Normalize IDs and timestamps so everything joins cleanly.
Output: Raw feeds → clean tables

Source | Field       | Example
pbp    | route_yards | 18
usage  | targets_l4  | 7
odds   | dk_line     | 26.5
teams  | pace_rank   | 9
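For the curious, here is a minimal sketch of the normalize-and-join step in pandas. The real pipeline and schema aren't shown on this page, so the column names (player_id, game_ts, team_id) are placeholders:

```python
# Illustrative only: standardize join keys across feeds, then merge into one table.
import pandas as pd

def normalize(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize player IDs and timestamps so feeds join cleanly."""
    out = df.copy()
    out["player_id"] = out["player_id"].str.strip().str.lower()
    out["game_ts"] = pd.to_datetime(out["game_ts"], utc=True)
    return out

def build_clean_table(pbp, usage, odds, teams):
    pbp, usage, odds = (normalize(d) for d in (pbp, usage, odds))
    keys = ["player_id", "game_ts"]
    return (
        pbp.merge(usage, on=keys, how="left")
           .merge(odds, on=keys, how="left")
           .merge(teams, on="team_id", how="left")  # team context, e.g. pace_rank
    )
```

Joining on normalized keys is what lets play-by-play, usage, odds, and team context land in one clean row per player-game.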
2. Feature engineering
Transform raw stats into predictive signals: recent form, role proxies, matchup indicators, and game environment.
Output: Signals → feature matrix

Feature         | Value | Note
target_share_l4 | 24%   | recent role
air_yards_l4    | 82    | usage depth
pace_adj        | +1.8  | env
cb_matchup      | -0.6  | coverage
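A rough illustration of how rolling "last 4 games" signals such as target_share_l4 can be built; the input column names are assumed, and the shift(1) is the key detail, since it keeps the current game out of its own features:

```python
import pandas as pd

def add_recent_form(df: pd.DataFrame) -> pd.DataFrame:
    """Rolling 4-game signals per player, lagged one game to avoid leakage."""
    df = df.sort_values(["player_id", "game_ts"])
    grp = df.groupby("player_id")
    roll4 = lambda s: s.shift(1).rolling(4, min_periods=1).mean()
    df["target_share_l4"] = grp["target_share"].transform(roll4)
    df["air_yards_l4"] = grp["air_yards"].transform(roll4)
    df["targets_l4"] = grp["targets"].transform(roll4)
    return df
```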
3. Model training
Train models on past weeks/seasons with careful leakage controls. Optimize for calibration and stability, not hype.
Output: Train → validate → calibrate

Train / validate: Fold #1 · Fold #2 · Fold #3
Calibration: stable
Leakage: blocked
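The actual models aren't specified here, but walk-forward splits are a standard way to get the leakage control and fold-by-fold stability check this step describes. A sketch with scikit-learn, where GradientBoostingRegressor is just a stand-in model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import TimeSeriesSplit

def walk_forward_cv(X, y, n_folds=3):
    """X, y are arrays ordered by time. Each fold validates only on games
    that come after everything it trained on, blocking look-ahead leakage."""
    errors = []
    for train_idx, val_idx in TimeSeriesSplit(n_splits=n_folds).split(X):
        model = GradientBoostingRegressor()
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[val_idx])
        errors.append(np.mean(np.abs(pred - y[val_idx])))  # MAE per fold
    return errors  # similar errors across folds ~ the "stable" check above
```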
4. Prediction + EV
Generate projections, implied probabilities, and EV % vs the current book line. Sort + filter to find high-signal lines fast.
Output: Projection vs line → EV %

Projection: 31.2
Line: 26.5
EV%: +18%
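The page doesn't spell out the exact EV% formula, but a common convention is expected profit per unit staked at the quoted price. A sketch with hypothetical inputs (a model probability of roughly 62% on the over at -110 works out to about +18%):

```python
def american_to_decimal(odds: int) -> float:
    """Convert American odds (e.g. -110) to decimal odds."""
    return 1 + (100 / abs(odds) if odds < 0 else odds / 100)

def ev_pct(p_over: float, odds: int = -110) -> float:
    """Expected value per unit staked on the over, as a percent.
    p_over is the model's probability that the player clears the line."""
    dec = american_to_decimal(odds)
    return 100 * (p_over * (dec - 1) - (1 - p_over))

print(round(ev_pct(0.62, -110), 1))  # ~18.4
```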
5. Backtesting
Audit the workflow by filter and week: what would have happened historically under consistent rules.
Output: Filters → simulated return

Filter  | Simulated picks | Simulated return
EV>30%  | 21              | -10.1%
EV>60%  | 9               | +6.2%
WR only | 14              | +2.8%
TE only | 7               | -1.9%
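Conceptually, each row above is one fixed rule replayed against history. A minimal flat-stake version of that simulation, assuming illustrative columns ev_pct, decimal_odds, and won:

```python
import pandas as pd

def backtest(picks: pd.DataFrame, min_ev: float, stake: float = 1.0) -> dict:
    """Bet every historical pick whose EV% cleared the threshold, settle
    against the actual result, and report return on total stake."""
    sel = picks[picks["ev_pct"] > min_ev]
    profit = (sel["won"] * (sel["decimal_odds"] - 1) * stake
              - (~sel["won"]) * stake).sum()
    staked = len(sel) * stake
    return {
        "picks": len(sel),
        "return_pct": 100 * profit / staked if staked else 0.0,
    }
```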
6. Iteration
Improve the pipeline over time by refining features, monitoring drift, and tightening evaluation.
Output: Ship → measure → iterate

v0.9: feature refresh
v1.0: calibration pass
v1.1: drift monitor
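One simple way to implement the drift monitoring mentioned above is a Population Stability Index that compares a feature's baseline distribution to the latest week. This is an illustrative sketch, not necessarily what the v1.1 monitor does:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline distribution (e.g.
    training weeks) and the latest week; values above ~0.2 are often flagged."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))
```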
What you get on the Projections page
Projection: Expected receiving yards
Line snapshot: Current book line (and odds when available)
EV %: Expected value signal for screening
Most people want to know one thing: how to use this to evaluate lines quickly.
Read the FAQ