Methodology

How ScoutSelect computes alliance scores, OPR, and win probabilities.

Algorithm Pipeline Overview

ScoutSelect processes live FTCScout data through a five-stage pipeline:

1. Fetch – pull match scores, rankings, and team rosters from the FTCScout GraphQL API.
2. Compute – calculate OPR, team metrics (avg, IQR, trend), and synergy fingerprints.
3. Simulate – run a greedy snake-draft to model pick availability.
4. Project – optimise pick lists and pitch rankings per team.
5. Forecast – Monte Carlo win-probability across projected alliance matchups.

All stages run client-side in the browser with no server state; refreshing re-runs the full pipeline.

Event Phase Detection

The dashboard automatically detects which phase an event is in and shows the matching UI. The phase is inferred from match counts alone, so no manual input is required, though a manual override dropdown lets scouters force a phase if the API lags.

if totalMatches == 0          → "upcoming"
if playoffMatches > 0:
  if playoffMatches >= 13     → "complete"
  else                        → "playoffs_running"
if qualMatches > 0 and playoffMatches == 0:
  if qualMatches >= scheduledQuals × 0.95 → "alliance_selection"
  else                        → "quals_running"
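
The rules above can be sketched as a single function; `scheduled_quals`, the number of qualification matches on the event schedule, is the reference point assumed for the 95% threshold:

```python
def detect_phase(qual_matches, playoff_matches, scheduled_quals):
    """Infer the event phase from match counts alone."""
    if qual_matches + playoff_matches == 0:
        return "upcoming"
    if playoff_matches > 0:
        # A bracket with 13+ playoff matches is treated as finished.
        return "complete" if playoff_matches >= 13 else "playoffs_running"
    # Quals underway; once ~95% of the schedule is played, selection is next.
    if qual_matches >= scheduled_quals * 0.95:
        return "alliance_selection"
    return "quals_running"
```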

Alliance Role Detection

Each team is classified as captain, borderline, or picked based on their qualification rank and the estimated number of alliances for the event. Borderline teams receive both the captain and picked views so they can prepare for either scenario. Number of alliances scales with total team count.

numAlliances: ≤8 teams → 2, ≤12 → 3, ≤20 → 4, ≤32 → 6, else → 8

if rank ≤ numAlliances         → "captain"
if rank ≤ numAlliances + 2     → "borderline"
else                           → "picked"
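
A minimal sketch of the two rules above:

```python
def num_alliances(team_count):
    """Alliance count scales with field size (table above)."""
    for limit, n in ((8, 2), (12, 3), (20, 4), (32, 6)):
        if team_count <= limit:
            return n
    return 8

def alliance_role(rank, team_count):
    """Classify a team as captain, borderline, or picked by qual rank."""
    n = num_alliances(team_count)
    if rank <= n:
        return "captain"
    if rank <= n + 2:
        return "borderline"
    return "picked"
```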

Offensive Power Rating (OPR)

OPR attributes to each team an individual contribution to alliance scores. The event's matches form an over-determined linear system A·x ≈ b, solved in the least-squares sense via the normal equations with Gaussian elimination and partial pivoting. Each row of A is one alliance in one match; each column is a team (1 if that team played on that alliance, 0 otherwise); b holds the alliance's score. The system is solved independently for total, auto, teleop, and endgame to produce per-phase OPRs.

A[alliance_i][team_j] = 1  if team j played for alliance i
b[alliance_i]           = alliance score in that match

Solve: A^T·A · x = A^T·b   (normal equations)
Result x[j] = team j's OPR contribution
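
As an illustrative sketch (the app itself solves the normal equations by Gaussian elimination), NumPy's least-squares solver gives the same answer on a made-up three-team event:

```python
import numpy as np

# Made-up event: three teams (columns 0..2), four alliance scores.
alliances = [(0, 1), (1, 2), (0, 2), (0, 1)]
scores = [100.0, 80.0, 90.0, 104.0]

A = np.zeros((len(alliances), 3))
for row, teams in enumerate(alliances):
    A[row, list(teams)] = 1.0       # 1 if the team played on that alliance
b = np.array(scores)

# Least-squares solution of A·x ≈ b (equivalent to solving AᵀA·x = Aᵀb).
opr, *_ = np.linalg.lstsq(A, b, rcond=None)
# opr ≈ [56, 46, 34]
```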

Bayesian Shrinkage

Teams with fewer than 5 qualification matches produce unreliable OPR estimates. A Bayesian shrinkage weight pulls those estimates toward the event median, reducing over-confidence on tiny samples. Teams with ≥5 matches use their full observed OPR.

α = min(matchCount, 5) / 5
smoothed = α × observed_OPR + (1 − α) × event_median
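
A direct sketch of the smoothing formula:

```python
def shrink_opr(observed_opr, match_count, event_median):
    """Blend observed OPR toward the event median on small samples."""
    alpha = min(match_count, 5) / 5
    return alpha * observed_opr + (1 - alpha) * event_median
```

With five or more matches, α reaches 1 and the observed OPR passes through unchanged.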

Consistency & Reliability

Consistency measures score stability using the inter-quartile range (IQR) of a team's match scores — a tighter IQR means more predictable performance. Reliability is a composite weighting consistency (60%) and match volume (40%), so teams with few matches are penalised even if their scores look consistent.

consistency = 100 − (IQR / 400) × 100
reliability = 0.6 × consistency
            + 0.4 × clamp(matchCount / 5, 0, 1) × 100
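
A sketch of both metrics; the quartile method (linear interpolation, the `inclusive` mode of Python's statistics module) is an assumption, since the text doesn't pin down how the IQR is computed:

```python
import statistics

def consistency(match_scores):
    """100 minus the IQR scaled against a 400-point spread."""
    q1, _, q3 = statistics.quantiles(match_scores, n=4, method="inclusive")
    return 100 - ((q3 - q1) / 400) * 100

def reliability(match_scores):
    """60% consistency, 40% match volume (volume clamps at 5 matches)."""
    volume = min(len(match_scores) / 5, 1.0)
    return 0.6 * consistency(match_scores) + 0.4 * volume * 100
```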

Synergy Score

Each team is encoded as a normalised role fingerprint in (auto, teleop, endgame) space. Two teams with different dominant phases are complementary (high Euclidean distance → positive bonus). Two teams that both dominate the same phase create an overlap penalty. The net synergy score is added to projected alliance strength.

fingerprint = (avgAuto/avgTotal, avgDc/avgTotal, avgEndgame/avgTotal)

dist            = Euclidean distance between two fingerprints
complementarity = (dist / √2) × 60          // max +60
overlapPenalty  = min(fp_a[phase], fp_b[phase]) × 20  // max −20

synergy = complementarity − overlapPenalty
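
A sketch of the synergy calculation. Which phase the overlap penalty reads is not spelled out above, so this assumes the phase where the two fingerprints overlap most:

```python
import math

def synergy(fp_a, fp_b):
    """Net synergy between two (auto, teleop, endgame) fingerprints."""
    dist = math.dist(fp_a, fp_b)
    complementarity = (dist / math.sqrt(2)) * 60            # max +60
    # Assumed: penalise the phase where both teams overlap most.
    shared = max(range(3), key=lambda p: min(fp_a[p], fp_b[p]))
    overlap_penalty = min(fp_a[shared], fp_b[shared]) * 20  # max 20
    return complementarity - overlap_penalty
```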

Snake-Draft Simulation

To model which teams will still be available when you pick, ScoutSelect simulates a greedy snake draft. Each captain, in rank order, picks the available team that maximises their projected 2-team alliance strength. The draft reverses direction for pick 2 (round 2 picks in reverse rank order). This produces a draft-adjusted availability flag for each candidate pick.

Round 1 (pick 1): captain 1 → captain 2 → … → captain N
Round 2 (pick 2): captain N → captain N-1 → … → captain 1

Each captain greedily picks: argmax_t allianceStrength(captain, t)
Output: available_r1[team], available_r2[team]
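
The draft loop can be sketched with a caller-supplied strength function; eliminated captains and declined invitations are ignored for simplicity:

```python
def snake_draft(captains, pool, strength):
    """Greedy snake draft: round 1 in rank order, round 2 reversed."""
    available = set(pool)
    picks = {}
    for rnd, order in ((1, captains), (2, list(reversed(captains)))):
        for cap in order:
            if not available:
                break
            # Each captain takes the team maximising 2-team alliance strength.
            best = max(available, key=lambda t: strength(cap, t))
            available.remove(best)
            picks[(cap, rnd)] = best
    return picks
```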

Alliance Strength

Alliance strength combines the sum of team OPRs with a pairwise synergy bonus averaged across all team pairs in the alliance. The synergy bonus is scaled by 0.4 to prevent it from overwhelming the raw OPR signal.

totalOPR = sum of OPRs for all teams in alliance

for each pair (i, j):
  bonus += synergy(i,j).complementarity − synergy(i,j).overlapPenalty

pairsCount = N × (N−1) / 2
allianceStrength = totalOPR + (bonus / pairsCount) × 0.4
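
A direct sketch of the formula, with `pair_synergy(a, b)` standing in for the net synergy score defined earlier:

```python
from itertools import combinations

def alliance_strength(teams, opr, pair_synergy):
    """Sum of team OPRs plus 0.4 × the average pairwise synergy bonus."""
    total_opr = sum(opr[t] for t in teams)
    pairs = list(combinations(teams, 2))
    if not pairs:                      # single-team "alliance": no bonus
        return total_opr
    bonus = sum(pair_synergy(a, b) for a, b in pairs)
    return total_opr + (bonus / len(pairs)) * 0.4
```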

Picklist Generation & Modes

For a captain, ScoutSelect evaluates every available team as a potential pick 1, then for each simulates the best pick 2 from the remaining pool, producing a projected 3-team alliance strength. Results are ranked by that projected strength and annotated with availability from the draft simulation. Four ranking modes shift the weighting:

• Safe — prioritises reliability and consistency.
• Balanced — equal weight on OPR, synergy, and reliability.
• Ceiling — maximises peak (high-score) potential.
• Counter — maximises win probability against the projected #1 alliance.

score(pick1) = allianceStrength(captain, pick1, best_pick2)

Modes adjust weights on: opr, synergy, reliability, highScore
Draft simulation annotates: "Available R1", "Likely gone", etc.
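
The pick-1 evaluation loop can be sketched as follows; `strength3` is a stand-in for the projected 3-team alliance strength:

```python
def rank_pick1(captain, pool, strength3):
    """Score every pick-1 candidate by the best alliance it enables."""
    results = []
    for p1 in pool:
        rest = [t for t in pool if t != p1]
        # Pair each candidate with the best remaining pick 2.
        best_p2 = max(rest, key=lambda t: strength3(captain, p1, t))
        results.append((p1, strength3(captain, p1, best_p2)))
    # Rank descending by projected 3-team strength.
    return sorted(results, key=lambda r: r[1], reverse=True)
```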

Pitch Strategy Engine

For teams in the picked pool, ScoutSelect ranks every captain by how much you improve their alliance. The improvement delta is the difference between the captain's projected alliance strength with you as pick 1 and with their best alternative pick. Captains for whom you provide the largest delta are most likely to want you. Talking points and red flags are generated automatically from your metrics.

delta(captain) = allianceStrength(captain, you)
               − allianceStrength(captain, their_best_alternative)

Ranked descending by delta → priority approach order

talkingPoints generated from: avgAuto, avgEndgame, reliability,
                               synergy.complementarity, trend
redFlags generated from:      matchCount < 5, consistency < 50,
                               avgEndgame < 5
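
The delta ranking can be sketched as follows; `strength2` stands in for the projected 2-team alliance strength:

```python
def pitch_order(you, captains, pool, strength2):
    """Rank captains by your improvement delta over their best alternative."""
    deltas = []
    for cap in captains:
        with_you = strength2(cap, you)
        best_alt = max(strength2(cap, t) for t in pool if t != you)
        deltas.append((cap, with_you - best_alt))
    # Highest delta first → priority approach order.
    return sorted(deltas, key=lambda d: d[1], reverse=True)
```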

Monte Carlo Win Probability

Win probabilities are computed by running 2,000 simulated matches against each projected opponent alliance. In each simulation, every team's contribution is sampled from a normal distribution N(μ, σ) using the Box-Muller transform (μ = OPR, σ = score standard deviation). Alliance scores are summed and the winner tallied. The final win probability is the fraction of simulations won.

for each simulation (n = 2,000):
  for each team t in alliance:
    u1, u2 ~ Uniform(0,1)
    z = √(−2 ln u1) · cos(2π u2)   // Box-Muller
    score_t = μ_t + σ_t · z

  red_score  = sum(scores for red teams)
  blue_score = sum(scores for blue teams)
  tally winner

P(win) = wins / 2000
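
A self-contained sketch of the simulation loop. The `1 − u` guard keeps the log argument positive, and ties counting as losses is an assumption:

```python
import math
import random

def win_probability(red, blue, n_sims=2000, seed=1):
    """P(red beats blue); red/blue are lists of (mu, sigma) per team."""
    rng = random.Random(seed)

    def sample(mu, sigma):
        # Box-Muller: two uniforms → one N(0, 1) draw, then scale and shift.
        u1, u2 = 1 - rng.random(), rng.random()   # 1 − u keeps u1 > 0
        z = math.sqrt(-2 * math.log(u1)) * math.cos(2 * math.pi * u2)
        return mu + sigma * z

    wins = 0
    for _ in range(n_sims):
        red_score = sum(sample(mu, sd) for mu, sd in red)
        blue_score = sum(sample(mu, sd) for mu, sd in blue)
        wins += red_score > blue_score
    return wins / n_sims
```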