Signal Dash: A Lightweight ExperimentOps Dashboard

ExperimentOps · A/B Testing · Experimentation · Product Management · Analytics · SQL · PostgreSQL · Node.js · Express.js · AI · LLM · TPM

I built Signal Dash to cover a gap I kept seeing in “TPM-sounding” resumes (including mine): lots of talk about experimentation, dashboards, metrics, and process… but not enough public proof.

Signal Dash is an ExperimentOps Lite tool that takes experiments from intake → approval → running → readout → decision, tracks exposures/conversions, computes basic A/B stats, and generates an optional AI-assisted readout (with a deterministic fallback).


What Signal Dash Does

1) Intake → Workflow States → Decision Logging

Experiments move through a simple lifecycle:

Draft → Review → Running → Readout → Shipped/Killed

This is intentionally “small tooling”—the kind of thing teams actually build internally when they don’t want a full platform.
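To picture the state machine, here is a minimal sketch of how the allowed transitions could be encoded. The state names come from the lifecycle above; the review → draft back-edge, the TRANSITIONS map, and the canTransition helper are illustrative, not the repo’s actual code:

const TRANSITIONS = {
  draft:   ['review'],
  review:  ['running', 'draft'],  // approve, or send back for edits
  running: ['readout'],
  readout: ['shipped', 'killed'], // this is where the decision gets logged
  shipped: [],
  killed:  [],
};

function canTransition(from, to) {
  return (TRANSITIONS[from] || []).includes(to);
}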

2) Event Tracking API

There’s a single tracking endpoint (POST /api/track) for:

  • exposure
  • conversion
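
A minimal client call might look like this. The payload field names come from the insert shown later in this post; the IDs and the exact request shape are assumptions:

// Record that a user saw a variant; 'conversion' uses the same shape
await fetch('/api/track', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    experimentId: 1,    // hypothetical values, for illustration only
    variantId: 2,
    userKey: 'user_42',
    eventType: 'exposure',
    props: {},          // optional extra context
  }),
});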

Under the hood, Signal Dash focuses on data integrity first, because bad experiment data makes every downstream “insight” worthless.

3) Distinct-User SQL Analytics (Not Pageview Math)

Signal Dash treats the unit of analysis as distinct users (user_key) and computes exposures/conversions using COUNT(DISTINCT user_key).

That’s an important interview detail because it shows you’re not inflating results with duplicate events: a user who fires three exposure events still counts as one exposed user.

4) A/B Stats You Can Explain in 60 Seconds

Signal Dash uses a pragmatic stats set:

  • Conversion rate per variant
  • Lift
  • 2-proportion z-test p-value
  • 95% confidence interval
  • Warnings for small samples / imbalance / missing exposure data

No Bayesian wizardry. No “trust me bro” math. Just enough to make a call and defend it.
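
For reference, the 2-proportion z-test fits in a few lines. This is a generic sketch of the textbook formula (with an erf approximation for the normal CDF), not the repo’s actual stats module:

// Two-proportion z-test: are the conversion rates of A and B different?
function twoProportionZTest(convA, expA, convB, expB) {
  const pA = convA / expA;                        // control conversion rate
  const pB = convB / expB;                        // variant conversion rate
  const pPool = (convA + convB) / (expA + expB);  // pooled rate under H0: pA == pB
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / expA + 1 / expB));
  const z = (pB - pA) / se;
  const pValue = 2 * (1 - normalCdf(Math.abs(z))); // two-sided
  return { z, pValue };
}

// Standard normal CDF via the Abramowitz–Stegun erf approximation (~1e-7 error)
function normalCdf(x) {
  return 0.5 * (1 + erf(x / Math.SQRT2));
}

function erf(x) {
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const y =
    1 -
    (((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t -
      0.284496736) * t + 0.254829592) * t) * Math.exp(-x * x);
  return sign * y;
}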

5) Optional AI Readout (With Caching + Fallback)

Signal Dash can generate an AI summary (OpenAI or Anthropic), but it’s:

  • Optional
  • Cached in the DB (so you don’t re-pay for every refresh)
  • Guarded with timeouts
  • Backed by a deterministic fallback template if the AI is unavailable

This keeps the AI integration practical and interview-safe.
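
The control flow is roughly cache, then a timeout-guarded provider call, then a deterministic template. Here is a hedged sketch of that shape; the ai_readouts table and callProvider function are illustrative names, not the repo’s actual identifiers:

async function getReadout(experimentId, stats) {
  // 1) Serve the cached summary if one exists (pay the provider once per readout)
  const cached = await query(
    'SELECT summary FROM ai_readouts WHERE experiment_id = $1',
    [experimentId]
  );
  if (cached.rows.length > 0) return cached.rows[0].summary;

  try {
    // 2) Race the provider call against a timeout so a slow API can't hang the page
    const summary = await Promise.race([
      callProvider(stats), // OpenAI or Anthropic behind one interface (hypothetical)
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error('AI timeout')), 10_000)
      ),
    ]);
    await query(
      'INSERT INTO ai_readouts (experiment_id, summary) VALUES ($1, $2)',
      [experimentId, summary]
    );
    return summary;
  } catch {
    // 3) Deterministic fallback: always produce something explainable
    return `Lift: ${stats.lift}%, p-value: ${stats.pValue}. AI summary unavailable.`;
  }
}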


DB Constraints First

Signal Dash enforces correctness at multiple layers:

  • DB CHECK constraint ensures event_type is only 'exposure' or 'conversion'
  • Unique index on (experiment_id, user_key, event_type) prevents duplicate events at the database level
  • Atomic insert uses INSERT ... ON CONFLICT DO NOTHING
  • Variant integrity check validates variantId belongs to experimentId

Translation: The system won’t quietly accept garbage and then “analyze” it.
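
Put together, the events table implied by those constraints looks roughly like this. Column types, the index name, and anything beyond what the post states are assumptions:

await query(`
  CREATE TABLE events (
    id            BIGSERIAL PRIMARY KEY,
    experiment_id BIGINT NOT NULL,  -- FK to experiments (assumed)
    variant_id    BIGINT NOT NULL,  -- FK to variants (assumed)
    user_key      TEXT   NOT NULL,
    event_type    TEXT   NOT NULL
                  CHECK (event_type IN ('exposure', 'conversion')),
    props         JSONB  NOT NULL DEFAULT '{}',
    occurred_at   TIMESTAMPTZ NOT NULL DEFAULT now()
  )
`);

// One exposure and one conversion per user per experiment, enforced by Postgres
await query(`
  CREATE UNIQUE INDEX events_dedup
    ON events (experiment_id, user_key, event_type)
`);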


Architectural Assumptions (Explicit on Purpose)

Signal Dash makes a few clean MVP assumptions:

  1. Assignment happens upstream. Signal Dash does not randomize users. Your app assigns variants and passes the correct variantId (see the bucketing sketch after this list).

  2. Unit of analysis is distinct user_key (distinct users, not events).

  3. Deduplication is enforced at (experiment_id, user_key, event_type) via a unique index.

  4. Stats are approximate (z-test with warnings for small samples / imbalance).

These assumptions are stated up front in the README so there’s no ambiguity about what the tool does and doesn’t handle.
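
Since randomization lives in the caller’s app, that app needs a deterministic bucketing scheme of its own. A common pattern is hash-based assignment; this generic sketch is not part of Signal Dash:

const crypto = require('crypto');

// Deterministic assignment: the same userKey always lands in the same bucket
function assignVariant(experimentId, userKey, variantIds) {
  const hash = crypto
    .createHash('sha256')
    .update(`${experimentId}:${userKey}`)
    .digest();
  return variantIds[hash.readUInt32BE(0) % variantIds.length];
}

// Usage: pass the returned variantId along with exposure events to /api/track
const variantId = assignVariant(1, 'user_42', [2, 3]);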


Why I Built It

I have a passion for tinkering with:

  • SQL analytics & metric definitions
  • Workflow/process thinking (intake → states → decision)
  • API + backend fundamentals (Express + Postgres)
  • Data integrity + correctness (constraints, dedup, race-condition avoidance)
  • Practical AI usage (summaries + caching + fallback)

What I’d Build Next (If I Wanted to Go Beyond MVP)

If I expand Signal Dash later, the next steps are predictable:

  • Basic auth
  • Rate limiting / abuse prevention on /api/track
  • Audit logging
  • Multi-metric experiments + funnels
  • A more robust assignment/bucketing model (it could still live “upstream,” but explicitly documented)

But for now, the goal is a clean, explainable, interview-ready repo.


Key Implementation Details

SQL Query Optimization

The stats page originally had an N+1 query problem (loading variants, then querying event counts for each). I optimized it to a single aggregate query:

SELECT
  v.id AS variant_id,
  v.name AS variant_name,
  v.is_control,
  COUNT(DISTINCT CASE WHEN e.event_type = 'exposure' THEN e.user_key END) AS exposures,
  COUNT(DISTINCT CASE WHEN e.event_type = 'conversion' THEN e.user_key END) AS conversions
FROM variants v
LEFT JOIN events e ON e.variant_id = v.id AND e.experiment_id = v.experiment_id
WHERE v.experiment_id = $1
GROUP BY v.id, v.name, v.is_control

For a typical two-variant experiment detail view, this reduces database round-trips from five (one variant list query, plus an exposure count and a conversion count per variant) to one.

Atomic Deduplication

The tracking endpoint uses PostgreSQL’s ON CONFLICT pattern to handle race conditions:

const result = await query(
  `INSERT INTO events (experiment_id, variant_id, user_key, event_type, props)
   VALUES ($1, $2, $3, $4, $5)
   ON CONFLICT (experiment_id, user_key, event_type)
   DO NOTHING
   RETURNING id, occurred_at`,
  [experimentId, variantId, userKey, eventType, props]
);

// No row returned means the unique index already had this event:
// acknowledge the duplicate instead of erroring, so clients can retry safely.
if (result.rows.length === 0) {
  return res.json({ success: true, duplicate: true });
}

Statistical Edge Cases

The lift calculation handles division-by-zero cases explicitly:

function calculateLift(controlRate, variantRate) {
  if (controlRate === 0 && variantRate === 0) return 0;     // no conversions anywhere: no lift to report
  if (controlRate === 0 && variantRate > 0) return null;    // undefined lift (division by zero)
  return ((variantRate - controlRate) / controlRate) * 100; // relative lift as a percentage
}

When lift is null, the system shows “Lift undefined (control conversion rate is 0)” instead of hiding the issue.


Tech Stack

  • Node.js + Express for the backend
  • PostgreSQL with carefully designed constraints
  • EJS for server-rendered templates (no React build complexity)
  • Chart.js for visualization
  • Docker Compose for local development
  • GitHub Actions for CI

Repository

Check out the full implementation at github.com/Keya-Moradi/Signal-Dash

The repo includes:

  • Complete documentation (PRD, SOP, metrics spec, ADRs)
  • Working seed data with realistic scenarios
  • Unit tests for statistical calculations
  • Interview talk track with Q&A prep

Tags: #ExperimentOps #ABTesting #Experimentation #ProductManagement #TPM #Analytics #SQL #PostgreSQL #NodeJS #ExpressJS #AI #LLM #DataIntegrity #SoftwareEngineering