Case study · Algo Trading · 2024

TradingView ↔ Plaid Bridge

webhook in · broker-native out · 4 signal types

A webhook bridge translating TradingView alerts into broker-native orders, with bank-side risk checks via Plaid balance feeds — for a retail algo-trading product that needed the path from idea to fill to be boringly reliable.

  • 180ms · webhook-to-broker, median end-to-end
  • 4 · signal types — entry · exit · scale · flat
  • 3.1% · rejected at the risk gate, caught before the broker
  • 99.97% · uptime over the last 180 days
Diagram: TradingView → Plaid bridge architecture — a left-to-right latency pipeline from the TradingView webhook through a Cloudflare HMAC edge, into a Go signal router with a risk gate and Plaid balance check, out to the Alpaca broker, with a latency ruler marking each hop: webhook → HMAC edge → risk gate → broker in 180ms.
How it works · step by step

The diagram, walked through in plain language

  1. TradingView fires a webhook

     When the user's chart hits an alert condition, TradingView sends a small message to our system saying 'buy 10 shares of X for user Y'.

  2. The edge filters fakes

     A Cloudflare Worker (a tiny program at the network edge) checks that the message has a valid signature. Forged or replayed messages get dropped before they cost us anything.

  3. The signal gets normalized

     All real signals are translated into one of four canonical shapes: enter a position, exit one, scale up or down, or flatten everything.

  4. A Plaid balance check happens

     Before placing the order we check the user's actual bank balance via Plaid (cached aggressively to keep latency down). There is no point routing an order they cannot fund.

  5. A risk gate runs

     Per-user limits ('no more than $X per trade') and product-wide circuit breakers ('halt all trading if X happens') are applied. About 3% of signals are rejected here, before they ever reach the broker.

  6. The order goes to Alpaca

     Approved signals become real orders at the broker. Every step is logged to a Postgres ledger, so support can answer 'why didn't my alert fire?' with a precise reason rather than a shrug.
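The middle of that pipeline — balance check, then risk gate, then broker — can be sketched as a single handler. This is a minimal illustration in Go, not the production service; the `Signal` shape, the parameters, and the rejection wording are assumptions for this sketch.

```go
package main

import (
	"errors"
	"fmt"
)

// SignalType is one of the four canonical shapes a webhook is normalized into.
type SignalType string

const (
	Entry SignalType = "entry"
	Exit  SignalType = "exit"
	Scale SignalType = "scale"
	Flat  SignalType = "flat"
)

// Signal is the normalized form of a TradingView alert (illustrative shape).
type Signal struct {
	UserID   string
	Symbol   string
	Type     SignalType
	Notional float64 // dollar size of the order
}

// HandleSignal walks the pipeline: balance check, then risk gate, then broker
// submit. Each stage returns a distinct error so support can answer "why
// didn't my alert fire?" with a precise reason rather than a shrug.
func HandleSignal(sig Signal, balance, perTradeLimit float64, halted bool) (string, error) {
	if sig.Notional > balance {
		return "", errors.New("rejected: insufficient balance")
	}
	if halted {
		return "", errors.New("rejected: product-wide circuit breaker tripped")
	}
	if sig.Notional > perTradeLimit {
		return "", errors.New("rejected: per-user trade limit exceeded")
	}
	// Approved: in production this becomes a broker order and a ledger row.
	return fmt.Sprintf("accepted: %s %s $%.2f", sig.Type, sig.Symbol, sig.Notional), nil
}

func main() {
	sig := Signal{UserID: "u1", Symbol: "AAPL", Type: Entry, Notional: 500}
	out, err := HandleSignal(sig, 1000, 2000, false)
	fmt.Println(out, err)
}
```

The ordering matters: funding and risk rejections are produced before the broker is ever contacted, which is what keeps the rejection messages precise.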

The brief

A retail algo-trading product let users wire TradingView alerts directly to their brokerage. The V1 was a Zapier-grade pipeline: a webhook, a lambda, a broker call, a lot of hope. When it worked, it was magical. When it didn't, support got a screenshot of an unfilled order and a customer who had watched the market move without them.

The ask was for the bridge to stop being the weakest link. Authenticate the webhook, validate the signal, check that the user has the cash, place the order, log everything, and do all of it faster than the market can notice.

The constraints

  • End-to-end latency from TradingView webhook to broker acceptance under 250ms at p95.
  • Plaid balance check on every non-trivial order — you do not want to learn about insufficient funds from the broker.
  • Webhook authenticity verifiable from signature, not source IP. TradingView's egress IPs are not a stable contract.
  • Idempotency across duplicated webhooks, flaky networks, and user panic-clicks.
  • Clear, user-visible rejection messages — not just “order failed,” but why.

The shape we built

A Cloudflare Worker at the edge authenticates the webhook by HMAC signature and normalizes the payload into a canonical signal type (entry, exit, scale, flat). Invalid or replayed webhooks get dropped there — they never hit the origin.
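The edge check amounts to recomputing an HMAC-SHA256 over the raw payload and comparing in constant time. The real check runs in a Cloudflare Worker; this sketch is in Go for consistency with the rest of the examples, and the header/secret names are illustrative.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Sign produces the hex signature a trusted sender attaches to the payload.
func Sign(body, secret []byte) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	return hex.EncodeToString(mac.Sum(nil))
}

// VerifyWebhook recomputes the HMAC-SHA256 over the raw body and compares it
// to the presented signature. hmac.Equal is constant-time, which avoids
// leaking how many leading bytes of a forged signature were correct.
func VerifyWebhook(body []byte, signatureHex string, secret []byte) bool {
	got, err := hex.DecodeString(signatureHex)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, secret)
	mac.Write(body)
	return hmac.Equal(mac.Sum(nil), got)
}

func main() {
	secret := []byte("shared-secret") // placeholder, not a real credential
	body := []byte(`{"user":"u1","symbol":"AAPL","action":"entry"}`)
	fmt.Println(VerifyWebhook(body, Sign(body, secret), secret))
	fmt.Println(VerifyWebhook([]byte("tampered"), Sign(body, secret), secret))
}
```

Because the signature covers the body rather than the source address, the check stays valid even as TradingView's egress IPs change.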

Valid signals hit a Go service. It reads the user's current broker position, fetches a recent Plaid balance (cached aggressively, refreshed on rejection), runs a risk gate that enforces per-user limits and product-wide circuit breakers, and submits the order. Every step writes to a Postgres order ledger keyed on signal hash.

Idempotency is keyed on (user, symbol, signal-hash, window) — the same webhook fired twice within the de-duplication window gets exactly one order; outside the window it gets two, because that's what the user meant.

What was hard

  • Plaid freshness. Balances lag. A user deposits, sees their balance in-app, fires an order, and the bridge has a stale Plaid reading. We cache aggressively but bust on certain event types — and have a fallback path that accepts broker-side balance as authoritative.
  • Partial fills. TradingView alerts are boolean. Brokers fill in pieces. The ledger has to represent “signal served” as a first-class state separate from “order filled.”
  • Rejection UX. A rejection at the risk gate is a different user message than one at the broker. We built a small taxonomy of rejection reasons and surfaced them verbatim in-app.
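A rejection taxonomy like the one described can be as small as a typed reason code mapped to the exact message shown in-app. The specific codes and wording below are invented for illustration; the shipped taxonomy's contents aren't reproduced here.

```go
package main

import "fmt"

// RejectionReason is a small taxonomy of why a signal did not become an order.
// Keeping risk-gate reasons distinct from broker reasons lets the app show a
// precise message instead of a generic "order failed".
type RejectionReason string

const (
	InsufficientBalance RejectionReason = "insufficient_balance"
	CircuitBreaker      RejectionReason = "circuit_breaker"
	StaleSignal         RejectionReason = "stale_signal"
	PerUserLimit        RejectionReason = "per_user_limit"
	BrokerRejected      RejectionReason = "broker_rejected"
)

// userMessage holds the user-facing text surfaced verbatim in-app
// (illustrative wording).
var userMessage = map[RejectionReason]string{
	InsufficientBalance: "Your linked bank balance could not cover this order.",
	CircuitBreaker:      "Trading is temporarily halted by a product-wide safety stop.",
	StaleSignal:         "This alert arrived too late to trade safely and was skipped.",
	PerUserLimit:        "This order exceeds your per-trade limit.",
	BrokerRejected:      "The broker declined this order; no position was opened.",
}

// Explain returns the exact message shown in-app for a rejection reason.
func Explain(r RejectionReason) string { return userMessage[r] }

func main() {
	fmt.Println(Explain(CircuitBreaker))
}
```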

What it does today

Median webhook-to-broker time is 180ms. The risk gate rejects roughly 3.1% of incoming signals before they hit the broker — insufficient balance, circuit-breaker trip, stale signal, or product-level limit. Uptime has held at 99.97% over the last 180 days. Support volume related to “my alert didn't fire” is down by more than 80% since cutover.

What I'd do differently

I'd build a dry-run mode from day one. Users routinely want to wire up a brand-new strategy and watch what it would have fired for a week before going live. Retrofitting dry-run took a sprint; shipping it alongside the v1 would have taken two afternoons.

Stack
  • Go
  • TradingView webhooks
  • Plaid (balances, identity)
  • Alpaca (primary broker)
  • Postgres (order ledger)
  • Cloudflare Workers (edge auth)

More work

Continue the tour

Algo Trading · 2025

Order Router & Execution Engine

$80M routed · 38ms p99 · zero downtime

A trading desk's chart fires a buy or sell signal; this system safely turns each signal into a real order at the right brokerage in milliseconds — while quietly making sure they never trade more than they meant to or place an order they can't afford.

Read case study
AI / LLM · 2024

AI Content Platform

10K daily users · 12 models · 35% lower cost

A SaaS that generates marketing-style writing (articles, ads, product copy) for thousands of paying users — intelligently picking the cheapest AI model that can do each job well, and switching providers in seconds when one of them goes down.

Read case study
Fintech · 2024

Fintech Reporting Dashboard

200M rows · 60% faster · sub-second queries

A financial dashboard that used to take seven seconds to show 'this month's profit and loss' now takes half a second — because we moved the heavy reports off the live database without changing a single number the customer's accountant sees.

Read case study
SaaS · 2024

JobbyAI

resume scoring · job match · interview prep

A free web app that helps job seekers in three ways: it scores their resume, ranks how well they match a job posting, and prepares them for the interview — all using a single AI model behind the scenes, with no signup required to try it.

Read case study
Algo Trading · 2023

Quant Backtest Harness

50K parameter combos · 3 engines · one CLI

A single command-line tool that lets a quant team test trading strategies on three different simulation engines without rewriting any strategy code — and then compares the results in one shared format, so 'which strategy is actually better' becomes a question with a real answer.

Read case study
Fintech · 2023

Accounting API Sync

4 providers · one trait · zero drift

A behind-the-scenes service that keeps an accounting SaaS in sync with QuickBooks, Xero, Wave, and AccountEdge — when a customer edits an invoice in either place, the change shows up on the other side within 30 seconds, without ever silently overwriting work.

Read case study
AI / LLM · 2025

Multi-LLM Agent Runtime

OpenAI · Claude · Gemini · Grok

A small, stateless service that lets non-engineers wire up AI 'agents' (which can call tools, look things up, and reply) — running across four AI providers so a single outage never takes a customer offline, and replay-able to the byte for debugging.

Read case study
DevTools · 2023

Figma + Chrome Plugin Suite

design · engineering · less friction

Three small browser plugins that quietly fix the slow, fiddly hand-off between designers (working in Figma) and engineers (writing code) — saving each engineer about four hours a week of busywork that nobody was tracking, but everyone resented.

Read case study

Have a similar problem?

If this shape of engagement fits what you're working on, I'd be happy to scope it.