Systems Architecture
This document provides a deeper look at how the ad server is structured and how requests flow through the system. It is intended for readers who are new to building ad servers and want to understand the moving pieces in this codebase.
Components
The server is a Go application that relies on a small set of external services:
- PostgreSQL serves as the primary source of truth for storing placements, creatives, line items, and campaigns.
- Redis is used for operational data only, including frequency capping, pacing counters, and rate limiting.
- In-Memory Store loads all campaign data from PostgreSQL into memory on startup for fast access in this single-instance architecture.
- ClickHouse stores analytics events such as impressions, clicks, and custom events. The table schema lives in `internal/analytics/clickhouse.go`.
- CTR Predictor Service (optional) provides machine-learning-based CTR predictions for CPC line item optimization, using logistic regression trained on historical click data.
- GeoIP database provides country and region lookup based on the user's IP address. The repository includes a GeoLite2 database under `data/`.
- Rate Limiter uses in-memory token buckets to prevent request floods on direct line items. Each line item gets its own bucket with configurable capacity and refill rate.
- Distributed Tracing uses OpenTelemetry to instrument HTTP requests, database queries, and Redis operations. Traces are sent to Grafana Tempo, and trace IDs are automatically injected into all application logs.
- Prometheus collects metrics exported by `internal/observability`, including rate-limiting statistics.
- Grafana visualises metrics collected by Prometheus and provides trace exploration via Tempo integration. Trace-to-logs correlation allows seamless debugging from traces to related log entries.
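The per-line-item rate limiter described above can be pictured as a refilling token bucket. The sketch below is an illustration of that idea, not the codebase's actual API; the names `tokenBucket`, `newTokenBucket`, and `allow` are assumptions.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket holds a refillable budget of request tokens for one
// line item (illustrative; real field names may differ).
type tokenBucket struct {
	mu         sync.Mutex
	tokens     float64
	capacity   float64
	refillRate float64 // tokens added per second
	lastRefill time.Time
}

func newTokenBucket(capacity, refillRate float64) *tokenBucket {
	return &tokenBucket{
		tokens:     capacity,
		capacity:   capacity,
		refillRate: refillRate,
		lastRefill: time.Now(),
	}
}

// allow tops up the bucket based on elapsed time, then consumes one
// token if available.
func (b *tokenBucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.lastRefill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := newTokenBucket(2, 1) // capacity 2, refills 1 token/sec
	fmt.Println(b.allow())    // true
	fmt.Println(b.allow())    // true
	fmt.Println(b.allow())    // false: bucket drained
}
```

Because each line item owns its own bucket, a flood of requests for one direct line item cannot starve the others.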
Startup Sequence
- `cmd/server/main.go` initialises logging and reads environment variables.
- If tracing is enabled (`TRACING_ENABLED=true`), OpenTelemetry is initialized with the Tempo endpoint configuration, and automatic instrumentation is applied to HTTP handlers, database connections, and Redis operations.
- Connections are established to Redis, ClickHouse, and Postgres with OpenTelemetry instrumentation applied.
- The `InMemoryAdDataStore` is initialized. Campaigns, line items, and placements are loaded from Postgres and stored in memory.
- The rate limiter is configured with token bucket parameters from environment variables.
- The HTTP server is started with handlers defined in `internal/api` and a middleware chain that injects trace IDs into all logs.
Ad Request Flow
- A client sends a `POST /ad` request in a minimal OpenRTB format. `GetAdHandler` parses the request, resolves the `TargetingContext` using the user agent and a GeoIP lookup, and records an `ad_request` analytics event.
- The pluggable `selectors.Selector` (by default `RuleBasedSelector`) receives the request parameters along with the in-memory `AdDataStore`.
- The selector applies an optimized single-pass filter defined in `internal/logic/filters`:
  - targeting checks (device, OS, browser, geo, and custom key/values)
  - placement size and format checks
  - line item active state
  - rate limiting for direct line items (token bucket algorithm)
  - frequency capping and dual-counter pacing using Redis (efficient batch operations)
- For CPC line items with CTR optimization enabled, the system queries the CTR predictor service for context-aware click probability predictions and applies boost multipliers to eCPM calculations.
- Remaining candidates are ranked by priority and eCPM. Programmatic line items can fetch external bids concurrently.
- The winning creative is wrapped in an `OpenRTBResponse`. A signed token is generated via `internal/token` and embedded in the impression, click, and event URLs.
- An `ad_served` analytics event is written to ClickHouse, the serve counter is immediately incremented in Redis for pacing decisions, and the response is returned to the client. Optional debug traces show intermediate creative IDs when `debug=1` is enabled.
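The CTR boost step above can be illustrated with a simple formula. This assumes the conventional eCPM-from-CPC conversion (cost per click × predicted CTR × 1000 impressions) with a multiplicative boost; the codebase's exact calculation may differ.

```go
package main

import "fmt"

// ecpmForCPC estimates effective CPM for a CPC line item from the
// cost per click, a predicted click-through rate, and a boost
// multiplier supplied by the CTR predictor (illustrative formula).
func ecpmForCPC(cpc, predictedCTR, boost float64) float64 {
	// eCPM = CPC * CTR * 1000 impressions, scaled by the boost factor.
	return cpc * predictedCTR * 1000 * boost
}

func main() {
	// $0.50 per click, 2% predicted CTR, 1.2x boost.
	fmt.Printf("%.2f\n", ecpmForCPC(0.50, 0.02, 1.2)) // prints 12.00
}
```

A higher predicted CTR lets a CPC line item compete against CPM line items on equal eCPM footing during ranking.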
Impression and Click Tracking
`GET /impression` verifies the token, records an `impression` event in ClickHouse, and updates the impression counter in Redis for accurate billing (separate from the serve counter used for pacing). A 1×1 GIF is returned. `GET /click` performs the same token verification, records a `click` event, and updates spend for CPC line items.
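Token verification on these endpoints can be sketched with a standard HMAC scheme. This is only an assumption about the general shape of `internal/token`, not its actual implementation — key management, payload encoding, and expiry handling are omitted.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signToken computes an HMAC-SHA256 over the tracking payload so the
// server can later prove it issued the URL (illustrative sketch).
func signToken(secret []byte, payload string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	return hex.EncodeToString(mac.Sum(nil))
}

// verifyToken recomputes the signature and compares it in constant
// time, rejecting tampered line item or creative IDs.
func verifyToken(secret []byte, payload, token string) bool {
	expected := signToken(secret, payload)
	return hmac.Equal([]byte(expected), []byte(token))
}

func main() {
	secret := []byte("demo-secret")
	tok := signToken(secret, "lineitem=42&creative=7")
	fmt.Println(verifyToken(secret, "lineitem=42&creative=7", tok)) // true
	fmt.Println(verifyToken(secret, "lineitem=42&creative=8", tok)) // false
}
```

Signing the tracking URLs means impressions and clicks cannot be forged or replayed against arbitrary line items without the server's secret.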
Custom Events
`GET /event?type=...` allows publishers to track additional engagement signals. The endpoint verifies the token, checks the event type against the allowlist in `internal/api/event.go`, and records the event. Redis counters are incremented for the line item so pacing or billing logic can consume them later.
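The allowlist check can be as simple as a set lookup. The event types below are placeholders for illustration; the real allowlist lives in `internal/api/event.go`.

```go
package main

import "fmt"

// allowedEvents is a stand-in allowlist; the actual permitted event
// types are defined in internal/api/event.go.
var allowedEvents = map[string]bool{
	"viewable":    true,
	"video_start": true,
	"expand":      true,
}

// isAllowedEvent rejects any event type not explicitly permitted,
// so arbitrary client-supplied strings never reach ClickHouse.
func isAllowedEvent(eventType string) bool {
	return allowedEvents[eventType]
}

func main() {
	fmt.Println(isAllowedEvent("viewable")) // true
	fmt.Println(isAllowedEvent("rm_rf"))    // false
}
```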
Ad Reporting
`POST /report` enables users to flag problematic ads for moderation. The endpoint verifies the token, validates the report reason, and stores the report in PostgreSQL, giving publishers direct control over ad quality and user experience. Analytics events are also recorded in ClickHouse for monitoring report trends and patterns.
Data Synchronization
The single-instance architecture uses an in-memory store that can be refreshed from PostgreSQL:
- The `/reload` endpoint refreshes the in-memory data from PostgreSQL at runtime.
- An automatic, periodic reload can be configured via the `RELOAD_INTERVAL` environment variable.
High-Level Diagram
Request Flow:
- Ad Request → Parse → GeoIP lookup → Selector (reads from `AdDataStore`) applies optimized single-pass filters → Generate signed URLs → Return ad
- Tracking → Verify token → Record event in ClickHouse → Update Redis counters
- Reporting → Verify token → Store report in PostgreSQL → Record analytics event
- Data Sync → Use the `/reload` endpoint or automatic reloads via `RELOAD_INTERVAL`
- Observability → Export metrics to Prometheus → Send traces to Tempo → All logs include trace IDs → Visualize in Grafana with trace-to-logs correlation
Related Topics
Core Systems
- Ad Decisioning - Detailed explanation of the ad selection algorithm and filtering process
- Pacing System - Dual-counter system for budget distribution and billing accuracy
Data Storage
- Analytics - ClickHouse event storage and query patterns for reporting
- Data Stores - PostgreSQL, Redis, and ClickHouse configuration and optimization
Integration
- API Reference - Complete endpoint documentation for all system interactions
- Integration Guide - JavaScript SDK and server-to-server integration patterns