Methodology · v1.1

How Falcoscan scores the AI tool market.

Every score the platform publishes — Heat, Opportunity, Saturation, Wrapper Risk, Growth Signal, Rising 10 — is defined in one place. Updated nightly against 6,481 approved AI tools. Citable anywhere.

The Pulse Heat Score

Every tool's Heat Score is a weighted sum of five inputs. Each input is rescaled to 0–100 before the weights are applied, so a score of 100 would require a tool to lead on every dimension simultaneously. In practice the top of the distribution sits in the high 80s. This is the ranking signal behind the weekly Pulse issues.

heat_score =
    0.30 × growth_signal_weight
  + 0.25 × trend_direction_weight
  + 0.20 × opportunity_score
  + 0.15 × recency_boost
  + 0.10 × rating_normalized
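The weighted sum above can be sketched in TypeScript. This is illustrative only: the weights and the input mappings (hot → 100, up → 100, the 14-day recency decay, the 0–5 rating rescale) come from this page, but the field names and the `ToolInputs` shape are assumptions, not the platform's actual schema.

```typescript
type GrowthSignal = "hot" | "rising" | "stable" | "declining";
type Trend = "up" | "flat" | "down";

// Mappings as documented in the Growth signal and
// trend_direction_weight sections of this page.
const SIGNAL_WEIGHT: Record<GrowthSignal, number> = {
  hot: 100, rising: 70, stable: 40, declining: 10,
};
const TREND_WEIGHT: Record<Trend, number> = { up: 100, flat: 50, down: 10 };

// Hypothetical input shape; field names are illustrative.
interface ToolInputs {
  growthSignal: GrowthSignal;
  trend: Trend;
  opportunityScore: number;   // already on a 0–100 scale
  daysSinceObserved: number;  // feeds the recency term
  rating: number;             // user rating, 0–5
  dead?: boolean;             // dead tools zero out the recency term
}

function heatScore(t: ToolInputs): number {
  const recency = t.dead ? 0 : 100 * Math.exp(-t.daysSinceObserved / 14);
  return (
    0.30 * SIGNAL_WEIGHT[t.growthSignal] +
    0.25 * TREND_WEIGHT[t.trend] +
    0.20 * t.opportunityScore +
    0.15 * recency +
    0.10 * (t.rating / 5) * 100
  );
}
```

A tool would need hot signal, upward trend, a perfect opportunity score, an observation today, and a 5.0 rating to reach 100, which is why the top of the real distribution sits in the high 80s.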

trend_direction_weight

Three-state direction of category/tool movement: up → 100, flat → 50, down → 10.

recency_boost

Exponential decay with a 14-day decay constant against the most recent observation of the tool: the score falls by a factor of e every 14 days. A tool observed today scores 100; one observed 14 days ago scores 37; one observed 30 days ago scores 12. Tools marked dead score 0 on this term.

recency_boost = 100 × exp(-days_since_observed / 14)

rating_normalized

User rating (0–5) rescaled to 0–100. The smallest weight in the composite because ratings are noisy at low volume; treated as a tie-breaker, not a primary signal.

Growth signal

Categorical momentum label maintained by the Falcoscan editorial data layer. Four states: hot → 100, rising → 70, stable → 40, declining → 10. This label is the heaviest single input into Heat Score (30%) and the largest weight in the Rising 10 composite (40%). It reflects real-time editorial judgment synthesized from launch velocity, funding activity, search volume, and qualitative review — not a pure historical metric.

Opportunity score

A 0–100 estimate of addressable opportunity for a tool or category, assigned by the Falcoscan opportunity model. It considers addressable-market signals, competitive saturation, and differentiation. High opportunity plus low saturation is the “sweet spot” corner of the Niches chart on every Pulse issue.

Saturation score

A 0–100 estimate of how crowded a category is. Higher means more competitors, more commoditization pressure, less room for a new entrant. Low saturation on top of high opportunity is the pattern Falcoscan flags as a buildable niche. Published per tool (inherited from its category) and per category directly.

Wrapper risk score

A 0–100 estimate of commodity risk — the probability a tool is a thin layer over someone else's model and will be absorbed by the platform it wraps. High wrapper risk is the primary signal for the Risk Watch section of every Pulse issue. Shown per-tool on browse cards and in category deep-dives.

Category Heat

The Heat Index and category-level heat values published in each Pulse issue are aggregates of tool Heat Scores combined with category-level properties — launch velocity, wrapper share, average opportunity, and average saturation — from our Market Terminal dataset. High means momentum; low means crowded, commoditized, or both. The per-category breakdown lives at /browse/[category].
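One way to read that description is a base-50 composite lifted by momentum and penalized by crowding, which matches the v1.0 changelog entry below. The exact aggregation is not published, so this sketch is purely illustrative: every coefficient here is invented, and only the overall shape (base of 50, momentum terms added, saturation and wrapper penalties subtracted, clamped to 0–100) reflects the text.

```typescript
// Purely illustrative: coefficients are invented, not Falcoscan's.
// Only the structure — 50 base, momentum added, saturation and
// wrapper-share penalties subtracted — follows the documentation.
interface CategoryStats {
  avgToolHeat: number;     // mean Heat Score of the category's tools, 0–100
  launchVelocity: number;  // e.g. new launches per month (assumed unit)
  wrapperShare: number;    // fraction of tools with high wrapper risk, 0–1
  avgSaturation: number;   // 0–100
}

function categoryHeat(c: CategoryStats): number {
  const base = 50;
  const momentum = 0.4 * (c.avgToolHeat - 50) + 2 * c.launchVelocity;
  const penalty = 0.2 * c.avgSaturation + 20 * c.wrapperShare;
  return Math.max(0, Math.min(100, base + momentum - penalty));
}
```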

Rising 10 composite

The monthly Rising 10 ranking uses a different composite from the Pulse Heat Score — narrower, and designed to be citable in a single sentence. Three inputs, declared weights, deterministic output.

rising_10_composite =
    0.40 × signal_weight       (hot → 100, rising → 70)
  + 0.35 × opportunity_score   (already 0–100)
  + 0.25 × rating_rescaled     (0–5 → 0–100)

Filter: only tools tagged hot or rising with a non-null opportunity score are eligible. Weights are declared in lib/articles/rising-10-schema.ts; changing them is a methodology change and gets a changelog entry below.
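The composite and its eligibility filter can be sketched together. The weights and the filter come from this page; the real declarations live in lib/articles/rising-10-schema.ts, and the field names below are assumptions.

```typescript
// Sketch of the Rising 10 composite as documented above.
// Field names are illustrative; the declared weights live in
// lib/articles/rising-10-schema.ts.
interface RisingCandidate {
  signal: "hot" | "rising" | "stable" | "declining";
  opportunityScore: number | null; // 0–100, null when unscored
  rating: number;                  // 0–5
}

// Returns null for ineligible tools (not hot/rising, or no
// opportunity score), otherwise the 0–100 composite.
function rising10Composite(t: RisingCandidate): number | null {
  const eligible =
    (t.signal === "hot" || t.signal === "rising") &&
    t.opportunityScore !== null;
  if (!eligible) return null;
  const signalWeight = t.signal === "hot" ? 100 : 70;
  return (
    0.40 * signalWeight +
    0.35 * (t.opportunityScore as number) +
    0.25 * (t.rating / 5) * 100
  );
}
```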

Pulse issue curation

Each weekly Pulse issue is generated from a frozen snapshot of the dataset, then curated by the editorial desk. The sections — Top Movers, Breakouts, Hidden Opportunities, Risk Watch, Graveyard, and Category Heat — use different filters of the same source data. Breakouts require a created_at inside the last 30 days; Hidden Opportunities require opp > 75 and saturation < 40; Risk Watch surfaces declining signals and wrapper risk > 75. The Brief at the top is written by a human against the snapshot — not LLM-generated.
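The section filters above are simple predicates over the frozen snapshot. The thresholds (30 days, opp > 75, saturation < 40, wrapper risk > 75) come from the text; the snapshot field names are assumptions, and Risk Watch is read here as either condition (declining signal or high wrapper risk), which is one plausible reading of "surfaces declining signals and wrapper risk > 75".

```typescript
// Illustrative snapshot shape; field names are assumptions.
interface SnapshotTool {
  createdAt: Date;
  opportunity: number | null;  // 0–100
  saturation: number;          // 0–100
  wrapperRisk: number;         // 0–100
  signal: "hot" | "rising" | "stable" | "declining";
}

const DAY_MS = 24 * 60 * 60 * 1000;

// Breakouts: created inside the last 30 days.
function isBreakout(t: SnapshotTool, now: Date): boolean {
  return now.getTime() - t.createdAt.getTime() <= 30 * DAY_MS;
}

// Hidden Opportunities: opportunity > 75 and saturation < 40.
function isHiddenOpportunity(t: SnapshotTool): boolean {
  return t.opportunity !== null && t.opportunity > 75 && t.saturation < 40;
}

// Risk Watch: declining signal or wrapper risk > 75
// (either-condition reading; the exact boolean is an assumption).
function isRiskWatch(t: SnapshotTool): boolean {
  return t.signal === "declining" || t.wrapperRisk > 75;
}
```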

Update cadence

Underlying tool signals refresh continuously as the ingest pipeline pulls new evidence. The Heat Score view recomputes nightly at 5:30am CT. A full platform snapshot is written at 5:45am CT. Pulse publishes every Monday at 6:00am CT from that snapshot. Rising 10 publishes on the 23rd of each month, also from a frozen snapshot so the ranking is reproducible.

Citation format

When citing Falcoscan numbers in articles, threads, or research, please use the canonical form: “Source: Falcoscan, [series] [issue label].”

For Pulse numbers: link to the dated issue URL (/articles/pulse/YYYY-MM-DD) or the number alias (/articles/pulse/issue/N). For Rising 10: link to the monthly issue (/articles/rising-10/[month]-[year]). For category-level citations: link to the category page (/browse/[category]). All three URLs are permanent.
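The three permanent route shapes can be captured as small helpers. The URL patterns come straight from the text above; the helper functions themselves are illustrative, not part of the platform's code.

```typescript
// URL builders matching the citation routes documented above.
// The functions are illustrative; only the route shapes are sourced.
function pulseUrl(date: string): string {
  // date in YYYY-MM-DD form, e.g. "2026-04-06"
  return `/articles/pulse/${date}`;
}

function pulseIssueAlias(n: number): string {
  return `/articles/pulse/issue/${n}`;
}

function rising10Url(month: string, year: number): string {
  return `/articles/rising-10/${month}-${year}`;
}

function categoryUrl(slug: string): string {
  return `/browse/${slug}`;
}
```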

Changelog

v1.1 — consolidated methodology. Heat Score, Rising 10 composite, Opportunity, Saturation, Wrapper Risk, and Growth Signal now documented at one canonical URL (/methodology). /pulse/methodology 301s here. No formula weights changed in this revision.

v1.0 — April 2026. Initial public methodology. Heat Score formula fixed at the weights above. Category Heat uses a 50-base composite with saturation and wrapper-risk penalties. Recency term degrades gracefully when observation timestamps are missing, via a cascade through the most recent refresh event.

Limitations, honestly

Every score here is an opinionated signal, not an oracle. Growth signal depends on editorial judgment, which means it reflects the Falcoscan team's read of the market — not an impartial measurement. Opportunity and Saturation models are tuned against our own definition of “buildable” and will not be identical to a VC's TAM model or a Gartner category map. Wrapper Risk is blunt; a tool with high wrapper risk may still win its category on distribution or UX, and a tool with low wrapper risk can still lose. If you disagree with a categorization, you'll be disagreeing with a human, and we'll hear you out. Any material change to the methodology is versioned and announced in the changelog above.

Questions
Spotted a flaw in the method? Tell us.

Methodology critiques, attribution asks, or data integrity concerns — one inbox, real replies.

falcoscan@outlook.com