~/experience/bell_media

Bell Media

Software Engineer II · Toronto, Canada · 2024 - Present

Bell Media Crave

Bell Media runs Canada's largest streaming platform, Crave — used by more than 4 million subscribers. I joined the video-processing pipeline team to help modernize a system that had grown organically over years, carrying the weight of decisions made by developers long gone.

Inheriting the Garden

Legacy code is like a garden that's been watered but never pruned — it grows, but not always in the right direction.

When I arrived, the codebase had no tests, no structure, and a lot of tribal knowledge that had walked out the door with the engineers who wrote it. The existing API structure was struggling to handle the data throughput we needed, and the database driver was still on pq — an older PostgreSQL driver that was limiting what we could do at the infrastructure level.

My first task was simply to understand what we had before trying to improve it. That meant reading code, tracing data flows, and asking a lot of questions nobody fully had the answers to.

js
/*  Example of a multi-call API endpoint that was common 
 *  before the query-API refactor. Each endpoint had its 
 *  own SQL query, and the frontend had to make multiple 
 *  calls to stitch together the data it needed.
 */

const getRecords = async (filters) => {
    // First call: resolve the matching record ids
    const res1 = await fetch('/api/endpoint-1', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ filters })
    });
    const ids = await res1.json();

    // Second call: fetch the details for those ids
    const res2 = await fetch('/api/endpoint-2', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(ids)
    });
    const recordDetails = await res2.json();

    // Third call: fetch related records, relying on the server
    // returning them in the same order as the ids we sent
    const res3 = await fetch('/api/endpoint-3', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(recordDetails.map(d => d.relatedId))
    });
    const relations = await res3.json();

    return recordDetails.map((details, i) => ({
        ...details,
        relations: relations[i]
    }));
};

Building the Query-API

Simplicity on the surface often hides enormous complexity underneath — and that's exactly the point.

One of the biggest pain points was how clients queried the database. There was no standard contract — each endpoint was a bespoke SQL query, and the frontend had to make multiple round trips to get what it needed. I designed and built a MongoDB-like query language on top of our PostgreSQL database, giving clients a unified, expressive way to request exactly the data they needed.

The result: API calls cut in half. Clients could now compose complex queries across all database tables without needing new endpoints every time product requirements changed. This also removed a class of bugs that came from duplicated query logic scattered across the codebase.


Sample Query

ts
const fetchPremiumUsers = async () => {
  const query = {
    "active": true,
    "age": { "$gte": 18, "$lte": 65 },
    "plan": { "$in": ["pro", "enterprise"] },
    "email": { "$exists": true },
    "name": { "$startsWith": "J" },
    "$join":    { "rel": "orders", "as": "o", "type": "left" },
    "$select":  ["id", "name", "email", "o.total"],
    "$orderBy": [{ "created_at": "desc" }],
    "$limit":   20
  };

  const res = await fetch('/api/query', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(query)
  });

  return res.json();
};
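
Server-side, the operator grammar compiles down to parameterized SQL. The production engine is written in Go, but the core translation step can be sketched in TypeScript (the `compileWhere` name and the reduced operator set here are illustrative, not the real implementation):

```ts
type Scalar = string | number | boolean;
type Condition = Scalar | { [op: string]: Scalar | Scalar[] };

// Comparison operators that map one-to-one onto SQL.
const OPS: Record<string, string> = {
  $gte: ">=", $lte: "<=", $gt: ">", $lt: "<", $ne: "<>",
};

// Compile a flat filter object into a parameterized WHERE clause.
// Directives like $join / $select / $orderBy would be handled elsewhere.
function compileWhere(filter: Record<string, Condition>): { sql: string; params: Scalar[] } {
  const clauses: string[] = [];
  const params: Scalar[] = [];

  for (const [field, cond] of Object.entries(filter)) {
    if (field.startsWith("$")) continue; // skip top-level directives
    if (typeof cond !== "object") {
      // Bare value means equality: { active: true } -> active = $1
      params.push(cond);
      clauses.push(`${field} = $${params.length}`);
      continue;
    }
    for (const [op, value] of Object.entries(cond)) {
      if (op === "$in") {
        const vals = value as Scalar[];
        const ph = vals.map(v => { params.push(v); return `$${params.length}`; });
        clauses.push(`${field} IN (${ph.join(", ")})`);
      } else if (op === "$exists") {
        clauses.push(`${field} IS ${value ? "NOT NULL" : "NULL"}`);
      } else if (op === "$startsWith") {
        params.push(`${value}%`);
        clauses.push(`${field} LIKE $${params.length}`);
      } else if (OPS[op]) {
        params.push(value as Scalar);
        clauses.push(`${field} ${OPS[op]} $${params.length}`);
      }
    }
  }
  return { sql: clauses.join(" AND "), params };
}
```

For instance, `compileWhere({ active: true, age: { $gte: 18 } })` yields `active = $1 AND age >= $2` with params `[true, 18]` — always parameterized, so clients can compose queries without opening an injection vector.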

Query-API Flow

The Data Orchestrator

Real-time systems are only as good as the guarantees they make to the clients listening.

The original WebSocket system wasn't built to handle the volume or complexity of system events flowing through the pipeline. We needed something that could poll for data changes, notify clients in real time, and not fall over under load.

I built the Data Orchestrator — a polling engine combined with Server-Sent Events (SSE), a subscription model, and an in-memory cache. Rather than every client independently hammering the database, the orchestrator became the single source of truth: it polled, cached, and pushed updates out.

The latency improvement came from eliminating redundant work. Previously, each connected client would trigger its own database reads on demand. With the orchestrator in place, data was fetched once per interval, diff-checked against an MD5 hash, and only broadcast when something actually changed — meaning clients received updates without generating load. This architectural shift drove a ~20% improvement in system latency.
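
The diff-and-broadcast core of that loop is small enough to sketch. This is a simplified illustration with hypothetical names — the real orchestrator also manages per-topic subscriptions, the in-memory cache, and the SSE response streams:

```ts
import { createHash } from "node:crypto";

type Listener = (payload: string) => void;

class Orchestrator {
  private lastHash = "";
  private listeners = new Set<Listener>();

  // Each SSE connection registers a listener; the returned
  // function is the unsubscribe hook called on disconnect.
  subscribe(fn: Listener): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }

  // Called once per polling interval with freshly fetched data.
  // Fans out only when the MD5 of the payload actually changes.
  onPoll(payload: string): boolean {
    const hash = createHash("md5").update(payload).digest("hex");
    if (hash === this.lastHash) return false; // unchanged: no broadcast
    this.lastHash = hash;
    for (const fn of this.listeners) fn(payload);
    return true;
  }
}

// Wiring it to a real poll loop might look like:
// setInterval(async () => orch.onPoll(JSON.stringify(await fetchRows())), 2000);
```

The key property is that database reads scale with the polling interval, not with the number of connected clients.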


Data Orchestrator Flow

Migrating the Foundation

The most dangerous code to change is the code nobody remembers writing.

Migrating the database driver from pq to pgx is the kind of task that sounds routine until you're doing it on a production system with zero test coverage. There's no safety net — just careful reading, methodical changes, and a lot of cross-referencing against live behavior.

pgx exposes different interfaces for row and struct scanning than pq did, and the switch broke several of our existing patterns. Fixing them wasn't just busywork: it eliminated a class of unnecessary allocations that had quietly accumulated over the years, measurably reducing the Go application's memory usage at scale.

Parallel UIs: Vue 2 → Vue 3

You can't replace the floor while people are standing on it — but you can build the new one underneath.

The frontend was still on Vue 2, and a full cutover wasn't an option. Users couldn't experience downtime, and product work couldn't stop while we migrated. So instead of a big-bang rewrite, I built an iframe event bridge — a communication layer that let the old and new UI coexist and exchange state during the transition.
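
The bridge itself is conceptually simple: a typed message envelope plus a handler registry on each side of the iframe boundary. A sketch with the transport injected — in the browser it sits on `window.postMessage` and the `message` event; the class and topic names here are illustrative:

```ts
interface BridgeMessage { topic: string; data: unknown; }
type Transport = (msg: BridgeMessage) => void;

class EventBridge {
  private handlers = new Map<string, (data: unknown) => void>();

  // `send` crosses the iframe boundary — in production,
  // (msg) => otherFrame.postMessage(msg, trustedOrigin)
  constructor(private send: Transport) {}

  // Each UI generation registers handlers for the state it
  // cares about: auth, route changes, notifications, etc.
  on(topic: string, fn: (data: unknown) => void): void {
    this.handlers.set(topic, fn);
  }

  emit(topic: string, data: unknown): void {
    this.send({ topic, data });
  }

  // In the browser, wired as:
  // window.addEventListener("message", e => bridge.receive(e.data))
  receive(msg: BridgeMessage): void {
    this.handlers.get(msg.topic)?.(msg.data);
  }
}
```

One bridge instance per UI generation, each pointed at the other's `receive`, is enough to keep route and session state in sync during the migration.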

Alongside that, I built out Pinia stores, centralized error handling, a notification system, and a shared routing layer. Each piece of the old UI was replaced incrementally, with the bridge keeping both generations in sync until the old one could be retired cleanly.
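
As an example of those centralized pieces, the error-handling path reduces to a single funnel feeding the notification queue. A framework-free sketch of the idea — the real version lives in a Pinia store and renders through the notification component, and these names are hypothetical:

```ts
interface Notification { level: "info" | "error"; message: string; }

class NotificationCenter {
  readonly queue: Notification[] = [];

  notify(level: Notification["level"], message: string): void {
    this.queue.push({ level, message });
  }

  // Single funnel for failures from either UI generation:
  // API errors, bridge errors, and uncaught exceptions all land here,
  // so users see one consistent toast instead of silent failures.
  captureError(err: unknown): void {
    const message = err instanceof Error ? err.message : String(err);
    this.notify("error", message);
  }
}
```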

  • Go
  • TypeScript/JavaScript
  • Vue 2/3
  • PostgreSQL
  • Tailwind CSS
  • GitLab
  • Docker