How the Technology Radar Works — A Brief Guide

Overview of the Technology Radar scoring algorithm: four data sources, composite weighting, movement classification, and known limitations. Full methodology on wetheflywheel.com.


Why I Built a Data-Driven Radar

Technology moves faster than any individual — or advisory board — can track. New frameworks launch weekly. AI tools go from zero to production in months. What's "emerging" in January is table stakes by June. Traditional technology radars, published semi-annually or annually, simply can't keep pace.

Having spent years on both sides — leading engineering teams that build products and advising PE/VC firms that evaluate them — I saw the same gap from every angle. Vendors pitch optimistic roadmaps. Analyst firms publish subjective opinions behind paywalls. Open-source hype inflates GitHub stars. The people making technology decisions had no unbiased, data-driven signal they could trust weekly.

That's why I built this radar: to be more objective than subjective. To combine multiple independent data sources into a transparent, reproducible score. To update every Monday, not twice a year. And to be free — because technology intelligence shouldn't be a luxury.

This page provides a brief overview of how the scoring works. Every score is computed by a deterministic algorithm — no manual overrides, no editorial bias.

Data Sources

Each tool is scored using four independent data sources, collected weekly:

1. Google Trends (Weight: 25%)

Search interest data via the DataForSEO API: current interest level (0-100), year-over-year change, and month-over-month momentum. This captures public awareness and mindshare — a tool with rising interest is entering more conversations and searches.
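
To make the blend concrete, here is a minimal sketch of how these three signals could be folded into a single 0-100 component. The field names and the 60/25/15 split are illustrative assumptions, not the published normalization:

    # Hypothetical sketch: blend Google Trends signals into one 0-100 component.
    # The 60/25/15 weighting and the clipping behavior are assumptions.
    def trends_component(interest_level: float, yoy_change: float, mom_change: float) -> float:
        """interest_level is 0-100; yoy_change and mom_change are fractional deltas (e.g. 0.30 = +30%)."""
        def squash(delta: float) -> float:
            # Map a percentage change onto 0-100, centered at 50 and clipped at +/-100%.
            return 50.0 + 50.0 * max(-1.0, min(1.0, delta))
        return 0.60 * interest_level + 0.25 * squash(yoy_change) + 0.15 * squash(mom_change)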

2. GitHub Activity (Weight: 25%)

For open-source tools: total stars (log-normalized), star growth, commit frequency, and active contributors over the last 30 days. Proprietary tools without a GitHub repo have their composite score re-normalized across the remaining sources.
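
"Log-normalized" means the jump from 1,000 to 10,000 stars counts roughly as much as the jump from 10,000 to 100,000, so mega-repos don't drown out everything else. A minimal sketch, where the 100,000-star reference point is an assumption for illustration:

    import math

    # Hypothetical sketch of log-normalizing GitHub stars onto a 0-100 scale.
    # The 100,000-star reference point is an illustrative assumption.
    def star_score(stars: int, reference: int = 100_000) -> float:
        if stars <= 0:
            return 0.0
        return min(100.0, 100.0 * math.log10(stars + 1) / math.log10(reference + 1))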

3. Expert Network Signal (Weight: 30%)

The radar's key differentiator. Aggregated, anonymized mention counts from expert network call-logistics emails across Tegus/AlphaSense, Office Hours, Third Bridge, Arbolus, Capvision, and Guidepoint. Higher mention counts indicate that PE/VC firms are actively evaluating a technology — a leading indicator that often precedes public adoption by 6-12 months.

Privacy note: Only aggregated counts and network names are stored. No email content, subjects, senders, or confidential information is stored or published.
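
In code terms, the stored artifact is nothing more than a tally per tool and network. A minimal sketch (the function and field names are hypothetical):

    from collections import Counter

    # Hypothetical sketch: reduce call-logistics mentions to aggregated counts.
    # Only (tool, network) tallies survive; no text, subjects, or senders are kept.
    def aggregate_mentions(mentions: list[tuple[str, str]]) -> dict[tuple[str, str], int]:
        """mentions is a list of (tool_name, network_name) pairs for one week."""
        return dict(Counter(mentions))

    # Example: three raw mentions collapse to counts keyed by tool and network.
    weekly = aggregate_mentions([("duckdb", "Tegus"), ("duckdb", "Guidepoint"), ("duckdb", "Tegus")])
    # -> {("duckdb", "Tegus"): 2, ("duckdb", "Guidepoint"): 1}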

4. Search Volume (Weight: 20%)

Monthly search volume and keyword difficulty from DataForSEO. Search volume indicates market interest; keyword difficulty indicates how established the tool is in search.
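
As a rough illustration of how volume and difficulty could combine into one 0-100 component (the 70/30 split and the ~1M-search reference point are assumptions, not the published formula):

    import math

    # Hypothetical sketch: combine monthly search volume and keyword difficulty
    # into a 0-100 component. The 70/30 split and the ~1M-volume reference are
    # illustrative assumptions.
    def search_component(monthly_volume: int, keyword_difficulty: float) -> float:
        volume_score = min(100.0, 100.0 * math.log10(monthly_volume + 1) / 6.0)  # ~1M searches -> 100
        return 0.70 * volume_score + 0.30 * keyword_difficulty  # difficulty is already 0-100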

Composite Score

The composite trend score is a weighted average of the four component scores:

trend_score = (trends x 0.25) + (search x 0.20) + (github x 0.25) + (expert x 0.30)

When a source is unavailable (e.g., no GitHub repo for a proprietary tool), its weight is redistributed proportionally across the remaining sources. Scores are smoothed with an EWMA to reduce week-to-week noise, and movement is classified using 12-week deltas with hysteresis rules to prevent churn.
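
A minimal sketch of the composite calculation, including the proportional redistribution and the EWMA smoothing. The weights come from the formula above; the EWMA alpha of 0.3 is an assumed value, not the published parameter:

    # Sketch of the composite score with proportional redistribution of missing
    # sources and EWMA smoothing. The alpha of 0.3 is an assumption.
    WEIGHTS = {"trends": 0.25, "search": 0.20, "github": 0.25, "expert": 0.30}

    def composite(scores: dict[str, float]) -> float:
        """scores maps source name -> 0-100 value; unavailable sources are simply absent."""
        available = {k: w for k, w in WEIGHTS.items() if k in scores}
        total = sum(available.values())
        return sum(scores[k] * (w / total) for k, w in available.items())

    def ewma(previous: float | None, current: float, alpha: float = 0.3) -> float:
        """Smooth week-to-week noise; with no history, the raw score is used."""
        return current if previous is None else alpha * current + (1 - alpha) * previous

    # Example: a proprietary tool with no GitHub repo (its 0.25 weight is redistributed).
    raw = composite({"trends": 62, "search": 48, "expert": 80})
    smoothed = ewma(previous=55.0, current=raw)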

Movement Classification

Movement     Criteria
Rising       Score >= 65 AND (12-week delta >= +7 OR expert mentions >= 3)
Emerging     Score 40-65 AND (12-week delta > 0 OR expert mentions >= 2)
Stable       Score 30-70 AND |12-week delta| < 5
Declining    Score < 30 OR 12-week delta <= -7 (and mentions not increasing)
New          Fewer than 4 weeks of data available
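
Expressed as code, the table maps roughly onto the following rules; the evaluation order, the fallback to "Stable", and the omission of the hysteresis logic are simplifying assumptions in this sketch:

    # Sketch of the movement rules from the table above. Evaluation order and the
    # "Stable" fallback are assumptions; the hysteresis rules that prevent
    # week-to-week flapping are omitted for brevity.
    def classify(score: float, delta_12w: float, weeks_of_data: int,
                 expert_mentions: int, mentions_increasing: bool) -> str:
        if weeks_of_data < 4:
            return "New"
        if score >= 65 and (delta_12w >= 7 or expert_mentions >= 3):
            return "Rising"
        if 40 <= score <= 65 and (delta_12w > 0 or expert_mentions >= 2):
            return "Emerging"
        if (score < 30 or delta_12w <= -7) and not mentions_increasing:
            return "Declining"
        return "Stable"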

Known Limitations

  • Open-source bias — GitHub signals are only available for open-source tools. Proprietary enterprise software may be under-represented.
  • English-centric — Google Trends and search volume data is US-market focused. Regional trends may differ.
  • Expert network scope — Expert mentions reflect PE/VC and consulting interest, which skews toward enterprise and growth-stage technologies.
  • New tools — Recently added tools start with "New" classification and require 4+ weeks before meaningful movement detection.
  • Name collisions — Some tool names overlap with common words. The alias system helps, but false matches are possible (a simplified sketch of alias matching follows this list).
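
The alias approach might look like the following word-boundary matcher, where tools whose names collide with common words are matched only through qualified aliases; the entries shown are hypothetical:

    import re

    # Hypothetical sketch of alias matching for ambiguous names: tools whose names
    # collide with common words are matched only through qualified aliases, never
    # the bare word. The entries are illustrative.
    AMBIGUOUS = {"ray": ["ray.io", "anyscale ray", "ray cluster"]}

    def mentions_tool(text: str, tool: str) -> bool:
        patterns = AMBIGUOUS.get(tool.lower(), [tool])  # unambiguous tools match on their own name
        return any(re.search(rf"\b{re.escape(p)}\b", text, re.IGNORECASE) for p in patterns)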

For the full technical deep-dive — normalization formulas, EWMA parameters, hysteresis thresholds, confidence scoring, and the weekly pipeline architecture — see the complete methodology on wetheflywheel.com.
