
Methodology
If procurement or press asks “how do you know?”, this is the page you send them. Every number in the product and the monthly report is derived from the pipeline documented below.
Two sources feed the graph. Directory-side: public signals such as tool-page views, search queries, challenge submissions, claims, and self-reported stacks. Enterprise-side: events sent by opted-in organisations through our SDK, the /ingest/events API, or marketplace connectors.
No data is scraped. No data is purchased from third parties. Every event has an organisation-scoped tenant identity, a timestamp, an actor, and a verifiability bit.
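As a sketch, an event envelope posted to /ingest/events might look like the following. The field names are illustrative for this example, not the published schema:

```python
# Illustrative event envelope for /ingest/events (field names assumed, not the published schema).
import time
import uuid

event = {
    "tenant_id": "org_7f3a",       # organisation-scoped tenant identity; never crosses the tenant boundary
    "event_id": str(uuid.uuid4()),
    "type": "activation",
    "actor": "dev_hash_9c1e",      # developer identifier, hashed at ingest
    "timestamp": int(time.time()),
    "verified": True,              # the verifiability bit
}
```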
Aggregates published on the public graph are bounded by a minimum cohort size (N ≥ 7 by default, configurable up to 50) and by additive Laplace noise parameterised by an ε that defaults to 1.5. Both parameters are exposed per-organisation and published in the API response.
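A minimal sketch of that release rule for a simple count query, using the published defaults; the production pipeline may differ in query shape and noise calibration:

```python
import numpy as np

def release_count(values, min_n=7, epsilon=1.5, sensitivity=1.0):
    """Minimum-N suppression plus additive Laplace noise (sketch).

    min_n and epsilon mirror the published defaults; sensitivity=1.0
    assumes one organisation shifts the count by at most 1.
    """
    if len(values) < min_n:
        return {"status": "suppressed", "min_n": min_n}
    noisy = float(np.sum(values)) + np.random.laplace(0.0, sensitivity / epsilon)
    return {"status": "ok", "value": noisy, "min_n": min_n, "epsilon": epsilon}
```

The floor stops small-cohort inference outright; the noise bounds what any single tenant's presence can reveal even above the floor.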
Developer identifiers never leave the tenant boundary. PII is hashed at ingest, never joined cross-tenant, and not used in any public surface.
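One way to get the "never joined cross-tenant" property is a keyed hash with a per-tenant secret, so the same raw identifier hashes differently in every tenant. This is a sketch of the idea, not our exact scheme:

```python
import hashlib
import hmac

def hash_actor(raw_id: str, tenant_key: bytes) -> str:
    """Keyed hash of a developer identifier at ingest (sketch).

    Because the key is tenant-scoped, identical raw identifiers produce
    different hashes in different tenants, so hashes cannot be joined
    cross-tenant.
    """
    return hmac.new(tenant_key, raw_id.encode("utf-8"), hashlib.sha256).hexdigest()
```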
Opt-out is immediate. Once an organisation opts out, their aggregates stop contributing to future benchmarks. Previously published benchmarks remain but cannot be reverse-joined to a single tenant because of the minimum-N guarantee.
The adoption index is a weighted composite of verified activations (50%), unique developers (30%), and 30-day retention (20%), normalised per-category to a 0-100 scale. Weights are fixed for the year and published with every State of Developer Adoption report.
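In code, the composite looks roughly like this. The 50/30/20 weights are the published ones; min-max normalisation within the category is an assumption for the example:

```python
def adoption_index(activations, developers, retention_30d, category_bounds):
    """Weighted composite, normalised per-category to a 0-100 scale (sketch).

    category_bounds holds (lo, hi) per input, e.g. the category's observed
    min/max for the period (an assumption for this example).
    """
    def norm(value, lo, hi):
        return 0.0 if hi <= lo else 100.0 * (value - lo) / (hi - lo)

    a = norm(activations, *category_bounds["activations"])
    d = norm(developers, *category_bounds["developers"])
    r = norm(retention_30d, *category_bounds["retention_30d"])
    return 0.5 * a + 0.3 * d + 0.2 * r
```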
Rank movements are computed week-over-week with a 90-day half-life decay so a one-off spike does not dominate the leaderboard.
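Equivalently, each past week's contribution is down-weighted by 0.5^(age / 90 days). A sketch:

```python
def decayed_score(events, now_day, half_life_days=90):
    """Signal with a 90-day half-life (sketch).

    events: list of (day, value) pairs. Activity 90 days old counts half,
    180 days old a quarter, so a one-off spike cannot dominate.
    """
    return sum(v * 0.5 ** ((now_day - day) / half_life_days) for day, v in events)
```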
Category benchmarks (p25, p50, p75, p90) are only published when the opted-in cohort is at or above the minimum N. If a buyer asks for a cohort view under that threshold, we return "suppressed" and explain why rather than extrapolating.
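A sketch of that suppression rule:

```python
import numpy as np

def category_benchmarks(cohort_values, min_n=7):
    """p25/p50/p75/p90 for an opted-in cohort, or explicit suppression (sketch)."""
    if len(cohort_values) < min_n:
        return {"status": "suppressed",
                "reason": f"cohort size {len(cohort_values)} is below N = {min_n}"}
    p25, p50, p75, p90 = np.percentile(cohort_values, [25, 50, 75, 90])
    return {"status": "ok", "p25": p25, "p50": p50, "p75": p75, "p90": p90}
```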
Retention cohorts use 30/60/90 day windows indexed on first activation. We exclude organisations that contribute fewer than 10 developers to avoid skew.
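A sketch of both rules; the exact window boundaries are an assumption here (we treat the day-w window as days w-30 through w after first activation):

```python
def retention_windows(first_activation_day, activity_days, windows=(30, 60, 90)):
    """Per-developer retention flags for 30/60/90-day windows (sketch).

    A developer counts as retained in window w if they were active at least
    once in days (w-30, w] after first activation (boundary choice assumed).
    """
    flags = {}
    for w in windows:
        start, end = first_activation_day + w - 30, first_activation_day + w
        flags[w] = any(start < d <= end for d in activity_days)
    return flags

def org_retention(dev_flags, min_developers=10):
    """Aggregate to the organisation; orgs under 10 developers are excluded."""
    if len(dev_flags) < min_developers:
        return None  # excluded from the cohort to avoid skew
    return {w: sum(f[w] for f in dev_flags) / len(dev_flags) for w in (30, 60, 90)}
```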
We do not claim causation. The product attributes developer activity to a program with a lift estimate and confidence interval. Where there is not enough signal, we surface that explicitly.
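For context, one standard shape for a rate lift with a confidence interval looks like the sketch below. This is a textbook normal-approximation example, not the product's estimator:

```python
import math

def lift_estimate(treated_active, treated_total, baseline_active, baseline_total, z=1.96):
    """Lift of a program cohort over baseline with a 95% CI (sketch).

    Returns None when there is not enough signal to support an interval,
    mirroring the product surfacing that case explicitly. Thresholds and
    the delta-method interval are assumptions for this example.
    """
    if treated_total < 30 or baseline_total < 30:
        return None  # not enough signal; say so rather than report a point estimate
    p_t = treated_active / treated_total
    p_b = baseline_active / baseline_total
    if p_t == 0 or p_b == 0:
        return None
    lift = p_t / p_b - 1.0
    # Delta-method standard error of the rate ratio on the log scale.
    se_log = math.sqrt((1 - p_t) / (p_t * treated_total) + (1 - p_b) / (p_b * baseline_total))
    lo = math.exp(math.log(p_t / p_b) - z * se_log) - 1.0
    hi = math.exp(math.log(p_t / p_b) + z * se_log) - 1.0
    return {"lift": lift, "ci95": (lo, hi)}
```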
We do not rank on money. Customer tier does not influence ranking or benchmark inclusion. Ranking is a function of the data, full stop.
The methodology is versioned. Any change that would move a previously published number by more than 1% is gated behind a 30-day notice window and a re-publish of the affected reports with a methodology-diff footer.
Our SOC 2 Type I report is available on request under NDA. Type II is in progress.