Case study · Fintech · 2023

Accounting API Sync
4 providers · one trait · zero drift

A bidirectional sync engine for QuickBooks, Xero, Wave, and AccountEdge with explicit conflict resolution — built for an SMB accounting SaaS whose prior integration was a polling nightmare.

4 · providers unified behind one trait
18 · entities synced (invoices, bills, etc.)
0 · drift incidents since GA
< 30s · sync latency, edit-to-reflect
Accounting API sync architecture — Temporal workflow engine on top, a single Go provider trait below it with four adapters (QuickBooks, Xero, Wave, AccountEdge), Postgres event log, explicit conflict policy, AWS ECS and SQS.
Temporal workflow · Go provider trait · four adapters · explicit conflict policy
How it works · step by step

The diagram, walked through in plain language

  1. Four accounting tools behind one Go interface

    QuickBooks, Xero, Wave, and AccountEdge each have very different APIs. We hide them behind a single interface so the rest of the system treats them identically — adding a fifth provider becomes a few-day job, not a rewrite.

  2. Webhooks where possible, polling where not

    When the accounting tool supports webhooks (real-time push notifications), we use them. Wave and parts of AccountEdge don't, so we poll those — but only as often as that customer actually changes things.

  3. Every sync is a Temporal workflow

    Temporal.io is a tool that makes long-running, multi-step jobs reliable. Each sync is a workflow: fetch both sides, compare, apply rules, write the result, log it. Retries and timeouts are built in.

  4. Conflicts are surfaced, not hidden

    If the same invoice was edited on both sides, the system applies a per-customer policy ('app wins' or 'raise for human review') rather than silently overwriting. Finance teams hate silent overwrites.

  5. Backfills can resume

    Importing two years of history can take hours. If the connection drops mid-import, Temporal restarts from the last completed step rather than starting over.

  6. Costs dropped because the system runs only when needed

    The old system polled 24/7; this one wakes up only when there is actual work, cutting AWS spend to about a third.

The brief

The client sold an accounting add-on to small businesses. Their existing integration layer polled each provider every fifteen minutes, diffed blobs of JSON, and hoped for the best. It was a constant source of “why is this invoice wrong?” tickets, and a meaningful fraction of their AWS bill was wasted polling customers who hadn't changed anything.

They wanted webhook-driven, bidirectional, and reliable. They wanted the integration layer to stop being a source of support load.

The constraints

  • Four providers with radically different APIs: OAuth flavors, rate limits, entity models, webhook support (or lack of it).
  • Bidirectional sync. Changes originate in either the app or the accounting tool; both must converge.
  • Conflict resolution had to be explicit, not last-write-wins. Finance teams do not forgive silent overwrites.
  • No user-visible sync delay for common flows — edit an invoice, see it reflected within 30 seconds.
  • Backfills had to be resumable. A 2-year import cannot redo itself on a dropped connection.

The shape we built

One Go interface (the “provider” trait) with a stable entity vocabulary (invoice, bill, account, contact, etc.) and four implementations behind it. Each provider adapter owns its OAuth, rate limiting, and webhook parsing; the rest of the system doesn't know or care which provider is on the other side.

Temporal runs every sync as a workflow. A change event — from a webhook, a user edit, or a scheduled reconciliation — triggers a workflow that: fetches the current state from both sides, compares, applies the conflict policy, writes the resolution, and emits an audit event. Retries, timeouts, and compensations live in the workflow definition, not the business logic.
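In production those steps run as Temporal activities, with retries and timeouts declared in the workflow definition. This plain-Go sketch shows only the step order; every name in it is illustrative, and the Temporal machinery is deliberately elided:

```go
package main

import "fmt"

// state is a placeholder for an entity snapshot from one side.
type state struct {
	Version int
	Body    string
}

// syncOnce mirrors the workflow's step order: fetch both sides,
// compare, apply the conflict policy, write the resolution, audit.
// In the real system each closure is a Temporal activity with its
// own retry policy.
func syncOnce(
	fetchApp, fetchProvider func() state,
	resolve func(a, b state) (winner state, conflict bool),
	write func(state) error,
	audit func(string),
) error {
	app, prov := fetchApp(), fetchProvider() // steps 1-2: fetch both sides
	if app == prov {                         // step 3: compare
		audit("no-op: already converged")
		return nil
	}
	winner, conflict := resolve(app, prov) // step 4: apply conflict policy
	if conflict {
		audit("conflict raised for human review")
		return nil
	}
	if err := write(winner); err != nil { // step 5: write the resolution
		return err
	}
	audit("synced") // step 6: emit audit event
	return nil
}

func main() {
	_ = syncOnce(
		func() state { return state{2, "app edit"} },
		func() state { return state{1, "stale"} },
		func(a, b state) (state, bool) { // "app wins" policy
			if a.Version >= b.Version {
				return a, false
			}
			return b, false
		},
		func(s state) error { fmt.Println("write:", s.Body); return nil },
		func(msg string) { fmt.Println("audit:", msg) },
	)
}
```

The point of the shape is that retries, timeouts, and compensations attach to the steps, not to the business logic inside them.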

The conflict policy is configurable per entity and per customer. “Invoices: app wins.” “Contacts: accounting tool wins.” “Tax rates: always raise a conflict for human review.” The default is raise-for-review, not silent resolution.
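The per-customer, per-entity lookup with raise-for-review as the default can be sketched like this. The policy names come from the text above; the lookup structure and customer names are illustrative:

```go
package main

import "fmt"

type Policy int

const (
	RaiseForReview Policy = iota // the default: surface, don't resolve
	AppWins
	ProviderWins
)

// key identifies one policy slot: a customer plus an entity type.
type key struct{ customer, entity string }

// policyFor returns the configured policy, falling back to
// raise-for-review when nothing is set for that customer/entity pair.
func policyFor(cfg map[key]Policy, customer, entity string) Policy {
	if p, ok := cfg[key{customer, entity}]; ok {
		return p
	}
	return RaiseForReview
}

func main() {
	cfg := map[key]Policy{
		{"acme", "invoice"}: AppWins,
		{"acme", "contact"}: ProviderWins,
		// tax_rate deliberately unset → always raised for review
	}
	fmt.Println(policyFor(cfg, "acme", "invoice"))  // 1 (AppWins)
	fmt.Println(policyFor(cfg, "acme", "tax_rate")) // 0 (RaiseForReview)
}
```

Making the safe behavior the zero value means a missing or mistyped configuration entry degrades to a raised conflict, never a silent overwrite.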

What was hard

  • Wave doesn't have webhooks. AccountEdge has them for half the entities. We ended up with a hybrid model: webhooks where available, targeted polling where not, with polling intervals scaled by historical write rate per customer.
  • OAuth token rotation. Three providers rotate differently, one rotates silently during normal API calls. Token refresh had to be centralized and paranoid.
  • Idempotency. A retried webhook must not create a duplicate invoice. Provider-native idempotency keys, where supported, plus our own event-hash layer underneath.
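The event-hash layer under the provider-native idempotency keys can be sketched as hashing the raw payload and deduplicating on the digest. Function names are illustrative, and the in-memory set stands in for what is, in production, a unique index on the Postgres event log:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// eventHash derives a stable key from a webhook payload, so a retried
// delivery of the same event maps to the same key.
func eventHash(provider string, payload []byte) string {
	h := sha256.New()
	h.Write([]byte(provider))
	h.Write([]byte{0}) // separator: keeps provider/payload pairs from colliding
	h.Write(payload)
	return hex.EncodeToString(h.Sum(nil))
}

// dedup remembers seen hashes; illustrative stand-in for a DB constraint.
type dedup map[string]bool

// firstDelivery reports whether this event is new, recording it as a
// side effect so a retry of the same delivery returns false.
func (d dedup) firstDelivery(key string) bool {
	if d[key] {
		return false
	}
	d[key] = true
	return true
}

func main() {
	seen := dedup{}
	k := eventHash("xero", []byte(`{"invoice":"inv-1","rev":7}`))
	fmt.Println(seen.firstDelivery(k)) // true  → process the event
	fmt.Println(seen.firstDelivery(k)) // false → retried webhook, skip
}
```

A retried webhook hashes to the same key, hits the constraint, and is dropped before it can create a duplicate invoice.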

What it does today

In production for over two years. Handles eighteen entity types across four providers with a median edit-to-reflect latency under 30 seconds. Zero drift incidents since GA. AWS spend on the sync layer is roughly a third of the old polling architecture because the workflow engine only runs when there's actual work to do.

What I'd do differently

I'd model the entity schema as a versioned contract from the first commit. We shipped v1 without one, which made it painful to add QuickBooks' class-tracking fields in month eight without touching every call site. Versioned schemas from day one are a small amount of ceremony you thank yourself for later.
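One way the versioned contract could look: carry an explicit schema version on every entity payload and dispatch on it when decoding, so old events in the log stay readable after the schema grows. The field and type names here are hypothetical, chosen to mirror the class-tracking example above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// InvoiceV1 is the original contract.
type InvoiceV1 struct {
	SchemaVersion int    `json:"schema_version"` // always 1
	ID            string `json:"id"`
	Total         int64  `json:"total"`
}

// InvoiceV2 adds the later field as a new version instead of an
// in-place edit that ripples through every call site.
type InvoiceV2 struct {
	SchemaVersion int    `json:"schema_version"` // always 2
	ID            string `json:"id"`
	Total         int64  `json:"total"`
	ClassID       string `json:"class_id,omitempty"` // the month-eight addition
}

// decodeInvoice probes the version field first, then decodes into the
// matching struct, so both generations of events remain valid.
func decodeInvoice(raw []byte) (any, error) {
	var probe struct {
		SchemaVersion int `json:"schema_version"`
	}
	if err := json.Unmarshal(raw, &probe); err != nil {
		return nil, err
	}
	switch probe.SchemaVersion {
	case 1:
		var v InvoiceV1
		if err := json.Unmarshal(raw, &v); err != nil {
			return nil, err
		}
		return v, nil
	case 2:
		var v InvoiceV2
		if err := json.Unmarshal(raw, &v); err != nil {
			return nil, err
		}
		return v, nil
	default:
		return nil, fmt.Errorf("unknown schema version %d", probe.SchemaVersion)
	}
}

func main() {
	v, _ := decodeInvoice([]byte(`{"schema_version":2,"id":"inv-1","total":100,"class_id":"cls-9"}`))
	fmt.Printf("%T\n", v) // main.InvoiceV2
}
```

The ceremony is one integer per payload and one switch per decoder; the payoff is that a schema change is an additive case, not a breaking edit.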

Stack
  • Go
  • gRPC (internal API)
  • Postgres (sync state + event log)
  • AWS ECS + SQS
  • Temporal.io (workflow orchestration)

More work

Continue the tour

Algo Trading · 2025

Order Router & Execution Engine

$80M routed · 38ms p99 · zero downtime

A trading desk's chart fires a buy or sell signal; this system safely turns each signal into a real order at the right brokerage in milliseconds — while quietly making sure they never trade more than they meant to or place an order they can't afford.

Read case study
AI / LLM · 2024

AI Content Platform

10K daily users · 12 models · 35% lower cost

A SaaS that generates marketing-style writing (articles, ads, product copy) for thousands of paying users — intelligently picking the cheapest AI model that can do each job well, and switching providers in seconds when one of them goes down.

Read case study
Fintech · 2024

Fintech Reporting Dashboard

200M rows · 60% faster · sub-second queries

A financial dashboard that used to take seven seconds to show 'this month's profit and loss' now takes half a second — because we moved the heavy reports off the live database without changing a single number the customer's accountant sees.

Read case study
SaaS · 2024

JobbyAI

resume scoring · job match · interview prep

A free web app that helps job seekers in three ways: it scores their resume, ranks how well they match a job posting, and prepares them for the interview — all using a single AI model behind the scenes, with no signup required to try it.

Read case study
Algo Trading · 2023

Quant Backtest Harness

50K parameter combos · 3 engines · one CLI

A single command-line tool that lets a quant team test trading strategies on three different simulation engines without rewriting any strategy code — and then compares the results in one shared format, so 'which strategy is actually better' becomes a question with a real answer.

Read case study
AI / LLM · 2025

Multi-LLM Agent Runtime

OpenAI · Claude · Gemini · Grok

A small, stateless service that lets non-engineers wire up AI 'agents' (which can call tools, look things up, and reply) — running across four AI providers so a single outage never takes a customer offline, and replay-able to the byte for debugging.

Read case study
Algo Trading · 2024

TradingView ↔ Plaid Bridge

webhook in · broker-native out · 4 signal types

A bridge that takes 'buy' or 'sell' alerts from TradingView charts, checks the user actually has the cash via their bank link (Plaid), then sends the order to their brokerage — all in under a fifth of a second, so the price they wanted is still the price they get.

Read case study
DevTools · 2023

Figma + Chrome Plugin Suite

design · engineering · less friction

Three small browser plugins that quietly fix the slow, fiddly hand-off between designers (working in Figma) and engineers (writing code) — saving each engineer about four hours a week of busywork that nobody was tracking, but everyone resented.

Read case study

Have a similar problem?

If this shape of engagement fits what you're working on, I'd be happy to scope it.