Systems Architecture

This document provides a deeper look at how the ad server is structured and how requests flow through the system.

For New Developers

This page explains the complete system architecture. If you're just getting started, focus on the Ad Request Flow section first, then explore individual components.

Component Overview

OpenAdServe is a Go application with a clean separation between data storage, operational state, and real-time decisioning.

Core Data Stores

📦 PostgreSQL

Source of Truth
Campaigns, line items, placements, and creatives

⚡ Redis

Operational Counters
Frequency caps, pacing, rate limits

🧠 In-Memory Cache

Fast Lookups
All campaign data loaded for sub-ms access

📊 ClickHouse

Analytics Events
Impressions, clicks, custom events
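The division of responsibilities above can be sketched as a dependency struct. This is an illustrative sketch only; the field and type names are assumptions, not the actual OpenAdServe types.

```go
package main

import "fmt"

// Campaign is a stand-in for the campaign data cached in memory.
type Campaign struct {
	ID   string
	Name string
}

// ServerDeps is a hypothetical wiring of the four stores described above.
type ServerDeps struct {
	PostgresDSN    string              // source of truth: campaigns, line items, placements, creatives
	RedisAddr      string              // operational counters: frequency caps, pacing, rate limits
	ClickHouseAddr string              // analytics events: impressions, clicks, custom events
	Cache          map[string]Campaign // in-memory copy of campaign data for sub-ms lookups
}

func main() {
	deps := ServerDeps{
		PostgresDSN:    "postgres://localhost:5432/ads",
		RedisAddr:      "localhost:6379",
		ClickHouseAddr: "localhost:9000",
		Cache:          map[string]Campaign{"c1": {ID: "c1", Name: "Spring Sale"}},
	}
	// Serving reads only from the in-memory cache; the other stores are
	// consulted for counters and event writes.
	fmt.Println(deps.Cache["c1"].Name)
}
```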

Supporting Services

Optional Services & Integrations

CTR Predictor Service (optional)

  • Machine learning-based CTR predictions for CPC optimization
  • Logistic regression trained on historical click data
  • Provides eCPM boost for competitive ranking

GeoIP Database

  • Country and region lookup based on IP address
  • Includes GeoLite2 database under data/
  • Automatic targeting enrichment

Rate Limiter

  • In-memory token buckets per line item
  • Prevents request floods on direct campaigns
  • Configurable capacity and refill rates
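The token-bucket idea can be sketched as follows; this is a minimal illustration of the technique, not the project's actual implementation, and the names are assumptions. In practice there would be one bucket per line item, e.g. held in a map keyed by line-item ID.

```go
package main

import (
	"fmt"
	"time"
)

// TokenBucket holds a refillable budget of request tokens.
type TokenBucket struct {
	tokens     float64
	capacity   float64
	refillRate float64 // tokens added per second
	lastRefill time.Time
}

func NewTokenBucket(capacity, refillRate float64) *TokenBucket {
	return &TokenBucket{tokens: capacity, capacity: capacity, refillRate: refillRate, lastRefill: time.Now()}
}

// Allow refills the bucket based on elapsed time, then spends one token
// if available; a false result means the request should be rejected.
func (b *TokenBucket) Allow() bool {
	now := time.Now()
	b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.lastRefill = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	bucket := NewTokenBucket(2, 1) // capacity 2, refills 1 token/sec
	fmt.Println(bucket.Allow(), bucket.Allow(), bucket.Allow()) // third call exhausts the bucket
}
```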

Distributed Tracing

  • OpenTelemetry instrumentation for requests, DB, Redis
  • Sent to Grafana Tempo
  • Automatic trace ID injection in logs

Monitoring Stack

  • Prometheus: Metrics collection and storage
  • Grafana: Visualization and trace exploration
  • Loki: Log aggregation with trace correlation

Startup Sequence

  1. cmd/server/main.go initializes logging and reads environment variables.
  2. If tracing is enabled (TRACING_ENABLED=true), OpenTelemetry is initialized with the Tempo endpoint configuration, and automatic instrumentation is applied to HTTP handlers, database connections, and Redis operations.
  3. Connections are established to Redis, ClickHouse, and Postgres with OpenTelemetry instrumentation applied.
  4. The InMemoryAdDataStore is initialized.
  5. Campaigns, line items, and placements are loaded from Postgres and stored in memory.
  6. The rate limiter is configured with token bucket parameters from environment variables.
  7. The HTTP server is started with handlers defined in internal/api and middleware chain including automatic trace ID injection into all logs.

Ad Request Flow

Request Processing Time

Typical response times are under 100ms on modern hardware with low-latency Redis (1-2ms). Actual performance varies based on:

  • Number of candidate line items being evaluated
  • Redis network latency for frequency/pacing checks
  • GeoIP database lookup complexity
  • Whether CTR optimization is enabled for CPC campaigns

Step-by-step breakdown:

  1. Request Receipt

    • Client sends POST /ad in minimal OpenRTB format
    • GetAdHandler parses request and enriches with context
  2. Context Enrichment

    • User-Agent parsing for device/OS/browser detection
    • GeoIP lookup for country and region targeting
    • Record ad_request analytics event
  3. Candidate Selection

    • Pluggable selectors.Selector (default: RuleBasedSelector) processes request
    • Loads campaign data from in-memory AdDataStore
  4. Single-Pass Filtering

    • Targeting: Device, OS, browser, geo, custom key/values
    • Format: Placement size and creative format checks
    • Status: Line item active state validation
    • Rate Limiting: Token bucket checks for direct campaigns
    • Frequency Capping: Redis-backed user impression limits
    • Pacing: Dual-counter budget distribution checks
    Why single-pass filtering is faster

    Traditional ad servers apply filters sequentially, requiring multiple iterations through candidate lists. OpenAdServe's single-pass filter evaluates all criteria in one loop, delivering 3x faster filtering with better CPU cache utilization.

  5. CTR Optimization (optional)

    • Query CTR predictor service for CPC line items
    • Apply eCPM boost based on predicted click probability
    • Normalize CPC and CPM bids for fair competition
  6. Ranking & Auction

    • Group candidates by priority tier (1-10)
    • Sort by eCPM within each priority
    • Fetch programmatic bids concurrently if enabled
  7. Winner Selection

    • Highest priority, highest eCPM wins
    • Wrap in OpenRTBResponse format
    • Generate signed tokens for impression, click, event URLs
  8. Response & Tracking

    • Write ad_served event to ClickHouse
    • Increment serve counter in Redis (pacing decision tracking)
    • Return ad response to client
    • Optional: Debug traces with debug=1 parameter
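The single-pass filtering in step 4 can be sketched as one loop that evaluates every criterion per candidate and short-circuits on the first failure. The types and field names below are hypothetical; the real checks also consult Redis-backed frequency caps, pacing counters, and the rate limiter.

```go
package main

import "fmt"

// LineItem is a hypothetical shape; the real AdDataStore types differ.
type LineItem struct {
	ID      string
	Active  bool
	Devices map[string]bool // allowed device types; empty means any
	Sizes   map[string]bool // allowed placement sizes; empty means any
	ECPM    float64
}

type AdRequest struct {
	Device string
	Size   string
}

// filterSinglePass evaluates all criteria for each candidate in one loop,
// rather than making a separate pass over the candidate list per filter.
func filterSinglePass(candidates []LineItem, req AdRequest) []LineItem {
	var eligible []LineItem
	for _, li := range candidates {
		if !li.Active { // status check
			continue
		}
		if len(li.Devices) > 0 && !li.Devices[req.Device] { // targeting check
			continue
		}
		if len(li.Sizes) > 0 && !li.Sizes[req.Size] { // format check
			continue
		}
		// Rate limiting, frequency capping, and pacing checks would run
		// here in the real server before accepting the candidate.
		eligible = append(eligible, li)
	}
	return eligible
}

func main() {
	candidates := []LineItem{
		{ID: "a", Active: true, Devices: map[string]bool{"mobile": true}, Sizes: map[string]bool{"300x250": true}, ECPM: 2.5},
		{ID: "b", Active: false},
		{ID: "c", Active: true}, // no restrictions: matches anything
	}
	for _, li := range filterSinglePass(candidates, AdRequest{Device: "mobile", Size: "300x250"}) {
		fmt.Println(li.ID)
	}
}
```

Because each candidate is touched exactly once, the hot loop stays within a small working set, which is the cache-utilization benefit the section above refers to.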

Impression and Click Tracking

Dual-Counter System:

Why Two Counters?

The system maintains separate serve and impression counters to handle pixel tracking delays. The serve counter increments immediately for pacing decisions, while the impression counter updates when pixels fire for accurate billing.

Learn more about pacing →

Endpoints:

  • GET /impression

    • Verifies signed token for security
    • Records impression event in ClickHouse
    • Updates impression counter in Redis (billing accuracy)
    • Returns 1×1 GIF tracking pixel
  • GET /click

    • Verifies signed token
    • Records click event in ClickHouse
    • Updates spend for CPC line items
    • Redirects to advertiser landing page
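The dual-counter behavior can be simulated in memory as below; the real system keeps both counters in Redis, and the names here are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// DualCounter mimics the serve/impression split: serves advance at
// decision time (pacing), impressions advance when the pixel fires (billing).
type DualCounter struct {
	mu          sync.Mutex
	serves      map[string]int64
	impressions map[string]int64
}

func NewDualCounter() *DualCounter {
	return &DualCounter{serves: map[string]int64{}, impressions: map[string]int64{}}
}

// RecordServe increments the pacing counter immediately after an ad is chosen.
func (d *DualCounter) RecordServe(lineItemID string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.serves[lineItemID]++
}

// RecordImpression increments the billing counter when the tracking pixel fires.
func (d *DualCounter) RecordImpression(lineItemID string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.impressions[lineItemID]++
}

func main() {
	c := NewDualCounter()
	// Three ads served, but only two tracking pixels have fired so far.
	c.RecordServe("li-1")
	c.RecordServe("li-1")
	c.RecordServe("li-1")
	c.RecordImpression("li-1")
	c.RecordImpression("li-1")
	fmt.Println(c.serves["li-1"], c.impressions["li-1"]) // pacing sees 3, billing sees 2
}
```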

Custom Events

Track any user interaction beyond impressions and clicks.

GET /event?type=video_complete&token={signed_token}

How it works:

  1. Verifies signed token for security
  2. Validates event type against allowlist (internal/api/event.go)
  3. Records event to ClickHouse
  4. Increments Redis counters for pacing/billing

Common use cases:

  • Video quartile tracking (start, 25%, 50%, 75%, complete)
  • Engagement metrics (hover, scroll, expand)
  • Conversion tracking (signup, purchase, download)
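Token verification plus the allowlist check could look like the sketch below, assuming an HMAC-SHA256 token scheme; the actual token format and the allowlist contents in internal/api/event.go may differ.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// allowedEvents is an illustrative allowlist; the real list is server-defined.
var allowedEvents = map[string]bool{
	"video_start": true, "video_complete": true, "signup": true,
}

// sign produces a hex-encoded HMAC-SHA256 over the payload.
func sign(payload string, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(payload))
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares in constant time.
func verify(payload, token string, key []byte) bool {
	return hmac.Equal([]byte(sign(payload, key)), []byte(token))
}

func main() {
	key := []byte("server-secret")
	token := sign("ad-123|video_complete", key)
	ok := verify("ad-123|video_complete", token, key) && allowedEvents["video_complete"]
	fmt.Println(ok)                                           // valid signature, allowlisted type
	fmt.Println(verify("ad-123|video_complete", "bad", key))  // tampered token is rejected
}
```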

Learn more about custom events →

Ad Reporting

User-facing quality control for problematic ads.

POST /report
{
  "token": "{signed_token}",
  "reason": "inappropriate_content",
  "comments": "Misleading product claims"
}
Publisher-First Feature

This endpoint empowers publishers to maintain ad quality and protect user experience. Reports aggregate in PostgreSQL for moderation workflows while analytics events track trends.

Workflow:

  1. User clicks "Report Ad" in SDK
  2. Token verification ensures legitimate reports
  3. Reason validation against predefined categories
  4. Storage in PostgreSQL for moderation queue
  5. Analytics event recorded for trend analysis
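Steps 2-3 of the workflow could be sketched as payload parsing plus reason validation; the struct and the category list below are assumptions for illustration, not the server's actual definitions.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AdReport mirrors the example /report request body above.
type AdReport struct {
	Token    string `json:"token"`
	Reason   string `json:"reason"`
	Comments string `json:"comments,omitempty"`
}

// validReasons is an illustrative set of predefined categories.
var validReasons = map[string]bool{
	"inappropriate_content": true,
	"misleading":            true,
	"malware":               true,
	"other":                 true,
}

// parseReport decodes the body and rejects unknown reasons before the
// report would be stored for the moderation queue.
func parseReport(body []byte) (AdReport, error) {
	var r AdReport
	if err := json.Unmarshal(body, &r); err != nil {
		return r, err
	}
	if !validReasons[r.Reason] {
		return r, fmt.Errorf("unknown reason %q", r.Reason)
	}
	return r, nil
}

func main() {
	body := []byte(`{"token":"abc","reason":"inappropriate_content","comments":"Misleading product claims"}`)
	r, err := parseReport(body)
	fmt.Println(r.Reason, err == nil)
}
```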

Learn more about ad reporting →

Data Synchronization

Keeping campaign data fresh:

The single-instance architecture uses an in-memory store that syncs from PostgreSQL:

Manual Reload:

POST /reload

Immediately refreshes campaign data from PostgreSQL without downtime.

Automatic Reload:

RELOAD_INTERVAL=30s

Configures periodic background sync from PostgreSQL.

Single-Instance Design

Campaign data lives in memory on a single server. This provides excellent performance but limits horizontal scaling. For distributed deployments, contact the maintainer.

See scaling considerations →

High-Level Diagram

Request Flow:

  1. Ad Request → Parse → GeoIP lookup → Selector (reads from AdDataStore) applies optimized single-pass filters → Generate signed URLs → Return ad
  2. Tracking → Verify token → Record event in ClickHouse → Update Redis counters
  3. Reporting → Verify token → Store report in PostgreSQL → Record analytics event
  4. Data Sync → Use /reload endpoint or automatic reloads via RELOAD_INTERVAL.
  5. Observability → Export metrics to Prometheus → Send traces to Tempo → All logs include trace IDs → Visualize in Grafana with trace-to-logs correlation

Core Systems

  • Ad Decisioning - Detailed explanation of the ad selection algorithm and filtering process
  • Pacing System - Dual-counter system for budget distribution and billing accuracy

Data Storage

  • Analytics - ClickHouse event storage and query patterns for reporting
  • Data Stores - PostgreSQL, Redis, and ClickHouse configuration and optimization

Integration

  • API Reference - Complete endpoint documentation for all system interactions
  • Integration Guide - JavaScript SDK and server-to-server integration patterns