Most GPX video overlay tools are desktop apps. Import a file, align it to a video timeline, export. They work. They're also stuck in 2015 — manual syncing, clunky UIs, Windows-only half the time.
TelemetryKit takes a different approach: upload a GPX or FIT file to a web app, preview four telemetry widgets in the browser, render H.265 MP4 clips serverlessly on AWS Lambda, and download the result. No desktop install. No video editor. No After Effects template.
The interesting technical decision behind it: the rendering engine is Remotion — a React framework designed for programmatic video. Typically used for marketing clips and social media animations. Using it for data-driven cycling telemetry overlays is an unusual fit. Here's why it works, how the architecture holds together, and where the trade-offs land.
The Problem with Existing GPX Video Overlay Tools
Cycling content creators — YouTube channels, race recap editors, coaching platforms — need metric overlays on their footage. Speed, power, heart rate, gradient. The data lives in GPX or FIT files exported from Garmin, Wahoo, or Hammerhead devices.
The current options:
- Desktop apps like Dashware or Garmin VIRB Edit. Powerful, but discontinued or barely maintained. Garmin stopped updating VIRB Edit years ago. Dashware still runs but feels like 2012.
- After Effects templates. Flexible, but require AE skills, manual data import, and a Creative Cloud subscription. Overkill for a 30-second power meter overlay.
- FFmpeg scripts. The programmer's answer to everything. Write a script that reads GPX, generates SVG frames, composites them onto video. It works, but maintaining a custom FFmpeg pipeline for each widget style is tedious. No preview. No iteration loop.
None of these are "upload a file and get an overlay clip" solutions. That's the gap.
What TelemetryKit Does
The flow is straightforward:
- Upload a `.gpx` or `.fit` file.
- The server parses the file, builds a 15 fps timeline, and extracts per-frame data series — speed, power, heart rate, gradient, cadence (when available).
- The browser shows four live widget previews: SpeedGauge, PowerMeter, GradeMeter, HeartRate. Each driven by the same timeline data.
- Pick a render duration (full ride or a shorter window — 30s, 1m, 5m, up to 90m), pixel scale (1× or 2×), and background mode (solid color or green screen).
- Trigger a render. AWS Lambda picks up the job, renders an H.265 MP4 at the widget's native resolution, and produces a presigned download URL.
- Download the clip. Composite it onto your cycling footage in any video editor.
The output is a standalone MP4 per widget — not a full-frame overlay composited onto your video. You get a small clip (400×400 for SpeedGauge at 1×, for example) with either a solid background or green-screen background. Drop it into Premiere, Final Cut, DaVinci Resolve, or CapCut with chroma key or a blend mode. This keeps TelemetryKit's job simple: render clean, accurate metric widgets. Let the video editor handle compositing.
Why Remotion, Not FFmpeg or Canvas
This is the architectural decision worth explaining.
The FFmpeg-only approach
You could parse GPX data, generate SVG or PNG frames with Node.js (sharp, canvas, or a template engine), then pipe those frames into FFmpeg to produce an MP4. It works. I've done it for simpler overlay tasks.
The problem: there's no preview. To see what the widget looks like at frame 847, you have to render frame 847. To iterate on the design — change a font, adjust a needle angle, tweak an easing curve — you re-render the whole sequence. Development cycle time is measured in minutes per design change.
The canvas approach
Render widgets in a browser canvas, record the canvas to a MediaStream, pipe to MediaRecorder. Client-side, no server needed.
The problem: MediaRecorder output quality is inconsistent across browsers. You can't get H.265 reliably. Frame timing isn't deterministic — if the browser stutters, your output stutters. Fine for screen recordings. Not fine for frame-accurate telemetry data where the speed gauge needs to match the exact frame.
The Remotion approach
Remotion (v4, currently at v4.0.448) treats video as a React component tree. Each frame is a function of frame number and fps. You write React components that read the current frame index (via Remotion's `useCurrentFrame()` hook) and return JSX. Remotion renders each frame as a browser screenshot and encodes the frames into a video file.
What this gives TelemetryKit:
- React components as widgets. SpeedGauge is a React component. It receives `speed` for the current frame and renders an SVG gauge. Standard React development — hot reload, component composition, TypeScript.
- Deterministic frame output. Frame 847 at 15 fps always renders the same way. No MediaRecorder timing jitter.
- In-browser preview via Remotion Player. The same React component that renders the final video also powers the live preview in the upload page. One codebase, two contexts.
- Serverless rendering via Remotion Lambda. Remotion has a first-party Lambda integration. Deploy a Lambda function, send it a composition ID and input props (the data series), it renders and uploads to S3. No persistent render server. Pay per invocation.
- H.265 output. Remotion uses FFmpeg under the hood for encoding. You specify `codec: 'h265'`, `pixelFormat: 'yuv420p'`, a CRF value, and it handles the rest.
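The "deterministic frame output" point above can be made concrete: a widget's visual state is a pure function of the frame index and the data series — no clocks, no timers. A minimal sketch of the idea (the function name, gauge range, and sweep angles are illustrative, not TelemetryKit's actual implementation):

```typescript
// A frame's needle angle is fully determined by (frame, data), so
// frame 847 renders identically on every run — unlike MediaRecorder capture.
function needleAngleDeg(
  frame: number,
  speedKmh: number[],   // one value per frame, pre-resampled to 15 fps
  maxKmh: number = 80,  // assumed gauge full-scale
): number {
  const speed = speedKmh[Math.min(frame, speedKmh.length - 1)];
  const clamped = Math.max(0, Math.min(speed, maxKmh));
  // Map 0..maxKmh onto a -120°..+120° needle sweep.
  return -120 + (clamped / maxKmh) * 240;
}
```

Inside a Remotion composition, the frame index would come from `useCurrentFrame()` and the angle would drive an SVG transform; the same function serves the in-browser preview and the Lambda render.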
The trade-off: Remotion is heavier than a raw FFmpeg pipeline. There's a Chromium instance running inside Lambda, rendering React components to screenshots. It's not the most resource-efficient way to produce a 400×400 speed gauge. But the developer experience — preview, iterate, deploy — is dramatically better. For a POC where widget design changes weekly, that matters more than Lambda cost optimization.
Architecture: GPX to MP4
Here's how the pieces connect:
┌─────────────┐
│ Browser UI │
│ (Next.js) │
└──────┬──────┘
│ POST /api/upload
▼
┌──────────────┐ Direct upload
│ Upload API │───────────────────► S3 (raw GPX/FIT)
└──────┬───────┘
│ POST /api/upload/complete
▼
┌──────────────────┐
│ Parse & Timeline │ ← gpx-parser / fit-parser
│ 15 fps series │ ← speedMps[], power[], hr[], gradePct[]
└──────┬───────────┘
│ JSON response
▼
┌──────────────────┐
│ Remotion Player │ ← Client-side preview (same React components)
│ 4 widget previews│
└──────┬───────────┘
│ POST /api/lambda/render
▼
┌──────────────────┐
│ Remotion Lambda │ ← Renders single composition
│ (AWS Lambda) │ ← H.265 MP4, CRF 23, 15 fps
└──────┬───────────┘
│ S3 upload (private, 3-day TTL)
▼
┌──────────────────┐
│ Presigned URL │ ← Download link returned to browser
└──────────────────┘
File parsing
GPX 1.1 files contain trackpoints with lat, lon, time, and elevation. Extensions may include power, heart rate, and cadence — depends on the device and the recording app. FIT files (Garmin's binary format) are more reliable for sensor data but need a dedicated parser.
The server reads the uploaded file, extracts all available channels, interpolates them to a fixed 15 fps timeline, and returns the data as arrays. Speed is derived from GPS positions and timestamps. Gradient is derived from elevation changes over distance. Power and heart rate come directly from the file when present.
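The interpolation step can be sketched as a linear resample from the 1 Hz sample grid onto the 15 fps frame grid (the function name is illustrative, not TelemetryKit's actual code):

```typescript
// Linearly interpolate a 1 Hz sample series onto a fixed fps frame grid.
// samples[i] is the value recorded at second i; the result holds one
// value per output frame.
function resampleTo15Fps(samples: number[], fps: number = 15): number[] {
  if (samples.length < 2) return samples.slice();
  const frames: number[] = [];
  const totalFrames = (samples.length - 1) * fps + 1;
  for (let f = 0; f < totalFrames; f++) {
    const t = f / fps; // time in seconds at this frame
    const i = Math.min(Math.floor(t), samples.length - 2);
    const frac = t - i; // position between sample i and sample i+1
    frames.push(samples[i] + (samples[i + 1] - samples[i]) * frac);
  }
  return frames;
}
```

The same resample runs once per channel (speed, power, heart rate, gradient), so every widget indexes into arrays of identical length.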
Preview
The browser receives the data arrays and feeds them into Remotion Player instances — one per widget. Each player runs the same React composition that Remotion Lambda will render. The user sees the gauge needles move, the power numbers tick, the heart rate pulse — all driven by real ride data.
This is the key architectural benefit of Remotion: preview and production render use identical code. What you see in the browser is what you get in the MP4. No "preview looks different from export" problems.
Rendering
When the user triggers a render, the Next.js API route calls renderMediaOnLambda() from the @remotion/lambda package. It passes:
- The composition ID (e.g., `SpeedGauge`)
- Input props containing the data series for the selected duration window
- Codec settings: `h265`, `yuv420p`, CRF 23
- Resolution: the widget's native size × the chosen pixel scale (1× or 2×)
- Background color or green-screen mode
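The sizing and frame-count arithmetic behind those parameters is simple; a hedged sketch (a simplified stand-in for building the `renderMediaOnLambda()` arguments, not the actual call):

```typescript
interface RenderParams {
  width: number;
  height: number;
  durationInFrames: number;
}

// Derive render dimensions and frame count from the widget's native size,
// the chosen pixel scale, and the selected duration window. Illustrative only.
function renderParams(
  nativeWidth: number,
  nativeHeight: number,
  scale: 1 | 2,
  durationSeconds: number,
  fps: number = 15,
): RenderParams {
  return {
    width: nativeWidth * scale,
    height: nativeHeight * scale,
    durationInFrames: Math.round(durationSeconds * fps),
  };
}
```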
Lambda spins up, renders every frame, encodes the video, uploads the MP4 to S3 with a private ACL and a 3-day auto-deletion policy. The client polls a progress endpoint until the job completes, then gets a presigned download URL.
Cost estimation
The server estimates the AWS render cost for each job — based on Lambda duration, memory allocation, and S3 storage. In the POC, this estimate is displayed in the UI for internal validation. The numbers help calibrate a future pricing model. A short widget render (30 seconds of ride data at 15 fps = 450 frames) costs fractions of a cent in Lambda compute time.
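A back-of-the-envelope version of that estimate looks like the following. The per-frame render time and memory allocation here are assumptions for illustration, not TelemetryKit's measured values:

```typescript
// Rough Lambda compute cost for one render job. secondsPerFrame and memoryGb
// are assumed figures; the GB-second rate is the published us-east-1 x86
// Lambda price at the time of writing and may change.
function estimateRenderCostUsd(
  frames: number,
  memoryGb: number = 2,
  secondsPerFrame: number = 0.05,
  usdPerGbSecond: number = 0.0000166667,
): number {
  const computeSeconds = frames * secondsPerFrame;
  return computeSeconds * memoryGb * usdPerGbSecond;
}
```

Under these assumptions, a 450-frame job lands well under a cent of compute, consistent with the "fractions of a cent" figure above.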
The Four Widgets
Each widget is a standalone Remotion composition with its own native resolution:
| Widget | Data channel | Size (1×) | Size (2×) |
|---|---|---|---|
| SpeedGauge | speedMps → km/h | 400×400 | 800×800 |
| PowerMeter | power (watts) | 320×200 | 640×400 |
| GradeMeter | gradePct | 320×160 | 640×320 |
| HeartRate | hr (bpm) | 240×120 | 480×240 |
The 2× option exists for high-resolution video workflows (4K footage). At 1×, the widgets are sized for 1080p compositing — small enough to sit in a corner without dominating the frame.
Green-screen mode sets the background to pure green (#00ff00) and forces the widget drawing color to either black or white for clean chroma keying. Solid color mode lets the user pick black, white, or a custom color — useful when you want a background that matches your video's color grade.
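One way to implement the black-or-white ink choice is a relative-luminance check on the background color — a sketch of the general technique (WCAG relative luminance), not necessarily what TelemetryKit does:

```typescript
// Pick black or white widget ink for legibility against a background color,
// using the WCAG relative-luminance formula on an "#rrggbb" hex string.
function foregroundFor(backgroundHex: string): "#000000" | "#ffffff" {
  const n = parseInt(backgroundHex.slice(1), 16);
  const linear = [(n >> 16) & 0xff, (n >> 8) & 0xff, n & 0xff].map((c) => {
    const s = c / 255;
    // Undo sRGB gamma to get linear channel intensity.
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  const luminance = 0.2126 * linear[0] + 0.7152 * linear[1] + 0.0722 * linear[2];
  return luminance > 0.179 ? "#000000" : "#ffffff";
}
```

For pure green (`#00ff00`) this picks black ink, which also keys cleanly since black shares no hue with the chroma color.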
Why 15 fps and Not 30 or 60
Cycling telemetry data typically records at 1 Hz (one sample per second). Some devices record at 2–4 Hz for certain channels. Interpolating 1 Hz data to 60 fps creates 60 frames where 59 are fabricated — the gauge needle moves smoothly, but the data precision is an illusion.
15 fps is a pragmatic middle ground. Smooth enough that the overlay doesn't look like a slideshow when composited over 30 fps footage. Honest enough that the frame-to-frame changes reflect real data intervals. And it halves the frame count compared to 30 fps, which directly halves Lambda render time and cost.
Whether to offer 30 fps as an option in a future version is an open question. For now, 15 fps keeps the scope tight and the render costs low.
Current Status
TelemetryKit is in validation. The end-to-end flow works: upload → preview → render → download. The landing page collects waitlist signups. There are no user accounts and no billing. The POC exists to validate whether this workflow solves a real problem for cycling content creators before building the business layer.
If the validation signals are positive, the next steps are straightforward: user accounts, a credits-based billing model tied to render cost, batch rendering (all four widgets in one job), and expanding the widget library beyond the initial four.
What I Learned Building It
A few things worth noting for anyone considering Remotion for data-driven video:
Remotion Lambda cold starts matter. The Lambda function includes a Chromium binary. Cold starts are noticeable — several seconds before rendering begins. For a batch render triggered by a user who's willing to wait 30 seconds, this is acceptable. For real-time or near-real-time use cases, it's not.
Data serialization is the bottleneck, not rendering. A 90-minute ride at 15 fps produces 81,000 frames. The data arrays (speed, power, hr, gradient for each frame) are large JSON payloads. Passing them as input props to Lambda requires careful size management. For the POC, this works. For long rides at higher fps, a different data transport — S3 reference, chunking — would be necessary.
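The payload math is worth spelling out; a quick estimate, assuming a rough average of 8 bytes per serialized number (digits plus separator — an assumption, not a measurement):

```typescript
// Estimate the JSON size of the per-frame data series passed as input props.
function estimatePayloadBytes(
  rideMinutes: number,
  fps: number = 15,
  channels: number = 4,       // speed, power, hr, gradient
  bytesPerValue: number = 8,  // assumed average per serialized number
): number {
  const frames = rideMinutes * 60 * fps;
  return frames * channels * bytesPerValue;
}
```

Under these assumptions a 90-minute ride serializes to roughly 2.6 MB — workable, but doubling the fps or adding channels scales it linearly, which is why an S3-reference or chunked transport becomes attractive.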
Green screen at small resolutions is tricky. A 240×120 HeartRate widget with green-screen background has very few pixels of edge. Chroma keying in the video editor can eat into the widget graphics if the key tolerance is too wide. The solid-color background option exists partly because green screen at these sizes isn't always clean enough.
React is a reasonable video authoring language. This sounds strange, but it's true. Component composition, props-driven rendering, TypeScript type safety on the data model — these are genuine advantages when building visual components that need to render identically across thousands of frames. The Remotion Player for previews seals the deal. No other video framework gives you "edit the component, see the result instantly in the browser."
The Stack
For reference, TelemetryKit runs on:
- Frontend: Next.js (App Router), TypeScript, Tailwind CSS
- Video framework: Remotion v4 (compositions + Lambda rendering)
- Rendering: AWS Lambda via `@remotion/lambda`, H.265 encoding
- Storage: S3 (upload + rendered output, private ACL, 3-day TTL)
- Parsing: GPX 1.1 and FIT file parsers (server-side)
- Hosting: Vercel (frontend), AWS (Lambda + S3)
No database in the POC. No auth. No billing. The entire state lives in the upload-render-download cycle. This is intentional — the fastest way to validate whether the core workflow has value before adding infrastructure complexity.