Systems Architecture
This document provides a deeper look at how the ad server is structured and how requests flow through the system. If you're just getting started, focus on the Ad Request Flow section first, then explore the individual components.
Component Overview
OpenAdServe is a Go application with a clean separation between data storage, operational state, and real-time decisioning.
Core Data Stores
📦 PostgreSQL
Source of Truth
Campaigns, line items, placements, and creatives
⚡ Redis
Operational Counters
Frequency caps, pacing, rate limits
🧠 In-Memory Cache
Fast Lookups
All campaign data loaded for sub-ms access
📊 ClickHouse
Analytics Events
Impressions, clicks, custom events
Supporting Services
Optional Services & Integrations
CTR Predictor Service (optional)
- Machine learning-based CTR predictions for CPC optimization
- Logistic regression trained on historical click data
- Provides eCPM boost for competitive ranking
GeoIP Database
- Country and region lookup based on IP address
- Includes GeoLite2 database under `data/`
- Automatic targeting enrichment
Rate Limiter
- In-memory token buckets per line item
- Prevents request floods on direct campaigns
- Configurable capacity and refill rates
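The per-line-item token buckets can be sketched in a few lines of Go. The type and field names below are illustrative, not the actual OpenAdServe implementation; only the capacity/refill-rate configuration idea comes from this page:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket is a minimal sketch of a per-line-item rate limiter.
// Capacity bounds bursts; refill sets the sustained request rate.
type TokenBucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	refill   float64 // tokens added per second
	last     time.Time
}

func NewTokenBucket(capacity, refillPerSec float64) *TokenBucket {
	return &TokenBucket{tokens: capacity, capacity: capacity, refill: refillPerSec, last: time.Now()}
}

// Allow refills the bucket based on elapsed time, then spends one token
// if available. A request is rejected when the bucket is empty.
func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.refill
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	bucket := NewTokenBucket(3, 1) // burst of 3, then 1 request/sec
	for i := 0; i < 5; i++ {
		fmt.Println(bucket.Allow())
	}
}
```

Because the bucket refills continuously, a flood of requests on a direct campaign drains it immediately while normal traffic never notices the limiter.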
Distributed Tracing
- OpenTelemetry instrumentation for requests, DB, Redis
- Sent to Grafana Tempo
- Automatic trace ID injection in logs
Monitoring Stack
- Prometheus: Metrics collection and storage
- Grafana: Visualization and trace exploration
- Loki: Log aggregation with trace correlation
Startup Sequence
- `cmd/server/main.go` initializes logging and reads environment variables.
- If tracing is enabled (`TRACING_ENABLED=true`), OpenTelemetry is initialized with Tempo endpoint configuration and automatic instrumentation is applied to HTTP handlers, database connections, and Redis operations.
- Connections are established to Redis, ClickHouse, and Postgres with OpenTelemetry instrumentation applied.
- The `InMemoryAdDataStore` is initialized.
- Campaigns, line items, and placements are loaded from Postgres and stored in memory.
- The rate limiter is configured with token bucket parameters from environment variables.
- The HTTP server is started with handlers defined in `internal/api` and a middleware chain that injects trace IDs into all logs.
Ad Request Flow
Typical response times are under 100ms on modern hardware with low-latency Redis (1-2ms). Actual performance varies based on:
- Number of candidate line items being evaluated
- Redis network latency for frequency/pacing checks
- GeoIP database lookup complexity
- Whether CTR optimization is enabled for CPC campaigns
Step-by-step breakdown:
1. **Request Receipt**
   - Client sends `POST /ad` in minimal OpenRTB format
   - `GetAdHandler` parses the request and enriches it with context
2. **Context Enrichment**
   - User-Agent parsing for device/OS/browser detection
   - GeoIP lookup for country and region targeting
   - Record `ad_request` analytics event
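The User-Agent half of context enrichment boils down to classifying the request's device and OS. `DetectDevice` below is a hypothetical helper and deliberately naive; a production parser covers far more browsers, devices, and edge cases:

```go
package main

import (
	"fmt"
	"strings"
)

// DetectDevice is a naive stand-in for User-Agent parsing: it keys off
// a few well-known substrings to produce device and OS labels.
func DetectDevice(ua string) (device, osName string) {
	ua = strings.ToLower(ua)
	switch {
	case strings.Contains(ua, "iphone") || strings.Contains(ua, "ipad"):
		return "mobile", "ios"
	case strings.Contains(ua, "android"):
		return "mobile", "android"
	case strings.Contains(ua, "windows"):
		return "desktop", "windows"
	default:
		return "desktop", "unknown"
	}
}

func main() {
	d, o := DetectDevice("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)")
	fmt.Println(d, o)
}
```

The resulting labels are what the targeting filters later match against device/OS rules.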
3. **Candidate Selection**
   - Pluggable `selectors.Selector` (default: `RuleBasedSelector`) processes the request
   - Loads campaign data from the in-memory `AdDataStore`
4. **Single-Pass Filtering** ⚡
   - Targeting: Device, OS, browser, geo, custom key/values
   - Format: Placement size and creative format checks
   - Status: Line item active state validation
   - Rate Limiting: Token bucket checks for direct campaigns
   - Frequency Capping: Redis-backed user impression limits
   - Pacing: Dual-counter budget distribution checks
Why single-pass filtering is faster
Traditional ad servers apply filters sequentially, requiring multiple iterations through candidate lists. OpenAdServe's single-pass filter evaluates all criteria in one loop, delivering 3x faster filtering with better CPU cache utilization.
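The single-pass idea can be sketched as one loop that applies every check per candidate, so the list is traversed once no matter how many filters are active. All types below are illustrative stand-ins for the selector's real structures:

```go
package main

import "fmt"

// Request and Candidate are illustrative stand-ins, not the actual
// OpenAdServe selector types.
type Request struct {
	Device  string
	Country string
}

type Candidate struct {
	ID        string
	Active    bool
	Devices   map[string]bool // allowed device types
	Country   string          // empty means no geo restriction
	FreqCapOK bool            // result of the Redis frequency-cap check
	PacingOK  bool            // result of the dual-counter pacing check
}

// FilterSinglePass evaluates every eligibility criterion in one loop;
// the first failed check disqualifies a candidate immediately.
func FilterSinglePass(cands []Candidate, req Request) []Candidate {
	var out []Candidate
	for _, c := range cands {
		if !c.Active ||
			!c.Devices[req.Device] ||
			(c.Country != "" && c.Country != req.Country) ||
			!c.FreqCapOK ||
			!c.PacingOK {
			continue
		}
		out = append(out, c)
	}
	return out
}

func main() {
	cands := []Candidate{
		{ID: "a", Active: true, Devices: map[string]bool{"mobile": true}, FreqCapOK: true, PacingOK: true},
		{ID: "b", Active: false, Devices: map[string]bool{"mobile": true}, FreqCapOK: true, PacingOK: true},
		{ID: "c", Active: true, Devices: map[string]bool{"mobile": true}, Country: "US", FreqCapOK: true, PacingOK: true},
	}
	winners := FilterSinglePass(cands, Request{Device: "mobile", Country: "DE"})
	fmt.Println(len(winners)) // only "a" survives
}
```

Short-circuit evaluation means cheap checks (active state, device) run before the checks that touch Redis state.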
5. **CTR Optimization (optional)**
   - Query CTR predictor service for CPC line items
   - Apply eCPM boost based on predicted click probability
   - Normalize CPC and CPM bids for fair competition
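Bid normalization for CPC line items follows the standard expected-value conversion. `EffectiveCPM` is a hypothetical helper showing only the arithmetic; how OpenAdServe layers its boost on top is not shown here:

```go
package main

import "fmt"

// EffectiveCPM converts a bid to a common eCPM scale so CPC and CPM
// line items can be ranked against each other.
func EffectiveCPM(pricingModel string, bid, predictedCTR float64) float64 {
	if pricingModel == "cpc" {
		// expected revenue per 1000 impressions: a $0.50 CPC bid with a
		// 1% predicted CTR is worth $5 per thousand impressions
		return bid * predictedCTR * 1000
	}
	return bid // CPM bids are already expressed per thousand impressions
}

func main() {
	fmt.Println(EffectiveCPM("cpc", 0.50, 0.01)) // 5
	fmt.Println(EffectiveCPM("cpm", 2.0, 0))     // 2
}
```

This is why the CTR predictor matters for CPC campaigns: a better click-probability estimate directly changes where the line item ranks.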
6. **Ranking & Auction**
   - Group candidates by priority tier (1-10)
   - Sort by eCPM within each priority
   - Fetch programmatic bids concurrently if enabled
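The ranking step can be sketched as a single comparison-based sort: priority tier first, eCPM second, so the winner is simply the first element. This sketch assumes tier 1 outranks tier 10, which may be inverted in the actual implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// Ranked is an illustrative candidate carrying its priority tier and
// normalized eCPM; not an actual OpenAdServe type.
type Ranked struct {
	ID       string
	Priority int     // 1-10 tier (1 assumed highest here)
	ECPM     float64 // normalized bid
}

// Rank orders candidates by priority tier, breaking ties by eCPM.
func Rank(c []Ranked) {
	sort.Slice(c, func(i, j int) bool {
		if c[i].Priority != c[j].Priority {
			return c[i].Priority < c[j].Priority // higher tier first
		}
		return c[i].ECPM > c[j].ECPM // then highest eCPM
	})
}

func main() {
	c := []Ranked{
		{ID: "low-tier-rich", Priority: 5, ECPM: 20.0},
		{ID: "top-tier-poor", Priority: 1, ECPM: 1.5},
		{ID: "top-tier-rich", Priority: 1, ECPM: 3.0},
	}
	Rank(c)
	fmt.Println(c[0].ID) // top-tier-rich
}
```

Note that a high eCPM never beats a higher priority tier: direct-sold inventory in a better tier wins even against richer programmatic bids.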
7. **Winner Selection**
   - Highest priority, highest eCPM wins
   - Wrap in `OpenRTBResponse` format
   - Generate signed tokens for impression, click, and event URLs
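Signed tracking tokens are commonly built as an HMAC over the URL payload. The sketch below assumes HMAC-SHA256 and a `payload.signature` layout; OpenAdServe's actual token format may differ:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// SignToken appends a hex-encoded HMAC-SHA256 signature to the payload.
func SignToken(secret []byte, payload string) string {
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	return payload + "." + hex.EncodeToString(mac.Sum(nil))
}

// VerifyToken recomputes the signature and compares it in constant
// time, rejecting tampered or forged tokens.
func VerifyToken(secret []byte, token string) (payload string, ok bool) {
	i := strings.LastIndex(token, ".")
	if i < 0 {
		return "", false
	}
	payload, sig := token[:i], token[i+1:]
	mac := hmac.New(sha256.New, secret)
	mac.Write([]byte(payload))
	expected := hex.EncodeToString(mac.Sum(nil))
	return payload, hmac.Equal([]byte(sig), []byte(expected))
}

func main() {
	secret := []byte("server-secret")
	tok := SignToken(secret, "imp:lineitem-42:user-7")
	_, ok := VerifyToken(secret, tok)
	fmt.Println(ok) // true
	_, ok = VerifyToken(secret, tok+"x")
	fmt.Println(ok) // false
}
```

Signing at serve time is what lets the tracking endpoints later reject requests that did not originate from a real ad response.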
8. **Response & Tracking**
   - Write `ad_served` event to ClickHouse
   - Increment serve counter in Redis (pacing decision tracking)
   - Return ad response to client
   - Optional: Debug traces with the `debug=1` parameter
Impression and Click Tracking
Dual-Counter System:
The system maintains separate serve and impression counters to handle pixel tracking delays. The serve counter increments immediately for pacing decisions, while the impression counter updates when pixels fire for accurate billing.
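The dual-counter behavior can be sketched with an in-process stand-in for the two Redis keys. `Counters` and its methods are illustrative; the real system uses Redis `INCR` on per-line-item keys:

```go
package main

import (
	"fmt"
	"sync"
)

// Counters mimics the two Redis counters: serves drive pacing the
// moment an ad is returned, impressions drive billing once the pixel
// actually fires.
type Counters struct {
	mu     sync.Mutex
	serves map[string]int64
	imps   map[string]int64
}

func NewCounters() *Counters {
	return &Counters{serves: map[string]int64{}, imps: map[string]int64{}}
}

func (c *Counters) RecordServe(lineItem string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.serves[lineItem]++
}

func (c *Counters) RecordImpression(lineItem string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.imps[lineItem]++
}

// Pending reports serves with no confirmed impression yet, e.g. pixels
// still in flight or that never fired.
func (c *Counters) Pending(lineItem string) int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.serves[lineItem] - c.imps[lineItem]
}

func main() {
	c := NewCounters()
	for i := 0; i < 3; i++ {
		c.RecordServe("li-1") // pacing sees 3 serves immediately
	}
	c.RecordImpression("li-1") // billing has confirmed only 1 so far
	fmt.Println(c.Pending("li-1")) // 2
}
```

Keeping the counters separate is what lets pacing react instantly while billing stays accurate despite pixel delays.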
Endpoints:
- `GET /impression`
  - Verifies signed token for security
  - Records `impression` event in ClickHouse
  - Updates impression counter in Redis (billing accuracy)
  - Returns a 1×1 GIF tracking pixel
- `GET /click`
  - Verifies signed token
  - Records `click` event in ClickHouse
  - Updates spend for CPC line items
  - Redirects to advertiser landing page
Custom Events
Track any user interaction beyond impressions and clicks.
```
GET /event?type=video_complete&token={signed_token}
```
How it works:
- Verifies signed token for security
- Validates event type against an allowlist (`internal/api/event.go`)
- Records event to ClickHouse
- Increments Redis counters for pacing/billing
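The allowlist check amounts to a set lookup before anything is written to ClickHouse. The event names below are examples from this page, not the authoritative list in `internal/api/event.go`:

```go
package main

import "fmt"

// allowedEvents is an illustrative allowlist; the real server keeps
// its authoritative list in internal/api/event.go.
var allowedEvents = map[string]bool{
	"video_start":    true,
	"video_complete": true,
	"hover":          true,
	"signup":         true,
}

// ValidateEventType rejects any type not explicitly allowlisted, so
// arbitrary strings cannot pollute the analytics tables.
func ValidateEventType(t string) error {
	if !allowedEvents[t] {
		return fmt.Errorf("unknown event type %q", t)
	}
	return nil
}

func main() {
	fmt.Println(ValidateEventType("video_complete"))
	fmt.Println(ValidateEventType("drop_tables"))
}
```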
Common use cases:
- Video quartile tracking (start, 25%, 50%, 75%, complete)
- Engagement metrics (hover, scroll, expand)
- Conversion tracking (signup, purchase, download)
Learn more about custom events →
Ad Reporting
User-facing quality control for problematic ads.
```
POST /report
{
  "token": "{signed_token}",
  "reason": "inappropriate_content",
  "comments": "Misleading product claims"
}
```
This endpoint empowers publishers to maintain ad quality and protect user experience. Reports aggregate in PostgreSQL for moderation workflows while analytics events track trends.
Workflow:
- User clicks "Report Ad" in SDK
- Token verification ensures legitimate reports
- Reason validation against predefined categories
- Storage in PostgreSQL for moderation queue
- Analytics event recorded for trend analysis
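Decoding and validating a report body is straightforward with the standard library. The reason categories below are illustrative stand-ins for the server's predefined list, and token verification is omitted:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ReportRequest mirrors the documented /report body.
type ReportRequest struct {
	Token    string `json:"token"`
	Reason   string `json:"reason"`
	Comments string `json:"comments"`
}

// validReasons is a hypothetical category list for this sketch.
var validReasons = map[string]bool{
	"inappropriate_content": true,
	"misleading":            true,
	"malware":               true,
}

// ParseReport decodes the JSON body and validates the reason against
// the predefined categories before it reaches the moderation queue.
func ParseReport(body string) (ReportRequest, error) {
	var r ReportRequest
	if err := json.NewDecoder(strings.NewReader(body)).Decode(&r); err != nil {
		return r, err
	}
	if !validReasons[r.Reason] {
		return r, fmt.Errorf("invalid report reason %q", r.Reason)
	}
	return r, nil
}

func main() {
	r, err := ParseReport(`{"token":"t","reason":"inappropriate_content","comments":"Misleading product claims"}`)
	fmt.Println(r.Reason, err)
}
```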
Learn more about ad reporting →
Data Synchronization
Keeping campaign data fresh:
The single-instance architecture uses an in-memory store that syncs from PostgreSQL:
Manual Reload:
```
POST /reload
```
Immediately refreshes campaign data from PostgreSQL without downtime.
Automatic Reload:
```
RELOAD_INTERVAL=30s
```
Configures periodic background sync from PostgreSQL.
Campaign data lives in memory on a single server. This provides excellent performance but limits horizontal scaling. For distributed deployments, contact the maintainer.
High Level Diagram
Request Flow:
- Ad Request → Parse → GeoIP lookup → Selector (reads from AdDataStore) applies optimized single-pass filters → Generate signed URLs → Return ad
- Tracking → Verify token → Record event in ClickHouse → Update Redis counters
- Reporting → Verify token → Store report in PostgreSQL → Record analytics event
- Data Sync → Use the `/reload` endpoint or automatic reloads via `RELOAD_INTERVAL`
- Observability → Export metrics to Prometheus → Send traces to Tempo → All logs include trace IDs → Visualize in Grafana with trace-to-logs correlation
Related Topics
Core Systems
- Ad Decisioning - Detailed explanation of the ad selection algorithm and filtering process
- Pacing System - Dual-counter system for budget distribution and billing accuracy
Data Storage
- Analytics - ClickHouse event storage and query patterns for reporting
- Data Stores - PostgreSQL, Redis, and ClickHouse configuration and optimization
Integration
- API Reference - Complete endpoint documentation for all system interactions
- Integration Guide - JavaScript SDK and server-to-server integration patterns