Architecture Overview

Deep dive into OpenAdServe's technical architecture. This section explains how the system makes ad decisions, manages data, and handles high-traffic scenarios with optimized filtering and intelligent resource management.

Core Architecture Documents

Understanding the system's design and component interactions.

  • Ad Decisioning - Ad selection algorithm and ranking logic

    • Optimized single-pass filtering for 6.3x performance improvement
    • Priority-first, eCPM-optimized selection
    • Targeting, frequency capping, pacing, and rate limiting
    • Programmatic bid integration
    • CPC optimization with CTR estimation
  • Data Stores - Storage layer design and data flow

    • PostgreSQL: Source of truth for campaigns and configuration
    • Redis: Operational counters (frequency caps, pacing, rate limits)
    • ClickHouse: Analytics events and reporting
    • In-memory ad data store for sub-millisecond lookups
  • Pacing System - Budget and delivery management

    • ASAP, Even, and PID-controlled pacing algorithms
    • Daily budget and impression caps
    • Redis-backed counters with automatic reset
    • Graceful degradation when Redis is unavailable
  • Rate Limiting - QPS protection for direct line items

    • Token bucket algorithm per line item
    • Configurable capacity and refill rates
    • Prevents budget exhaustion from traffic spikes
    • Optional per-line-item configuration
  • Multi-Tenancy - Publisher isolation and security

    • API key authentication per publisher
    • Data segregation in PostgreSQL
    • Resource isolation strategies
    • Cross-publisher campaign support
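The priority-first, eCPM-optimized selection described above can be illustrated with a short Python sketch. The `LineItem` fields and `select_winner` helper are illustrative, not OpenAdServe's actual types: candidates are grouped by priority tier first, and eCPM only breaks ties within the top tier.

```python
from dataclasses import dataclass

@dataclass
class LineItem:
    name: str
    priority: int   # lower number = higher-priority tier
    ecpm: float     # effective CPM after CPM/CPC normalization

def select_winner(candidates):
    """Pick the highest-priority tier first, then the highest eCPM within it."""
    if not candidates:
        return None
    top_tier = min(li.priority for li in candidates)
    tier = [li for li in candidates if li.priority == top_tier]
    return max(tier, key=lambda li: li.ecpm)

items = [
    LineItem("house", priority=3, ecpm=9.0),     # high eCPM, but lowest tier
    LineItem("direct-a", priority=1, ecpm=2.5),
    LineItem("direct-b", priority=1, ecpm=4.0),  # wins: top tier, best eCPM
]
winner = select_winner(items)
```

Note that the house campaign loses despite the highest eCPM: priority tiers are strict, and eCPM only orders line items inside a tier.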

System Architecture Overview

Component Interaction

┌─────────────────────────────────────────────────────────────┐
│                       Client Request                        │
└─────────────────────┬───────────────────────────────────────┘
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                  API Layer (/ad endpoint)                   │
│ • Request validation                                        │
│ • GeoIP lookup                                              │
│ • Publisher authentication                                  │
└─────────────────────┬───────────────────────────────────────┘
                      ▼
┌─────────────────────────────────────────────────────────────┐
│             Ad Decisioning (RuleBasedSelector)              │
│ • Single-pass filter (targeting, size, format)              │
│ • Rate limiting check (Redis)                               │
│ • Frequency capping check (Redis)                           │
│ • Pacing control (Redis counters)                           │
│ • Programmatic bid integration                              │
│ • CTR optimization for CPC campaigns                        │
│ • Priority + eCPM ranking                                   │
└─────────────────────┬───────────────────────────────────────┘
                      ▼
┌─────────────────────────────────────────────────────────────┐
│                         Data Stores                         │
│                                                             │
│ ┌──────────────┐  ┌──────────────┐  ┌─────────────────┐     │
│ │  PostgreSQL  │  │    Redis     │  │   ClickHouse    │     │
│ │              │  │              │  │                 │     │
│ │  Campaigns   │  │  Frequency   │  │  Impressions    │     │
│ │  Line Items  │  │  Pacing      │  │  Clicks         │     │
│ │  Creatives   │  │  Rate Limits │  │  Custom Events  │     │
│ │  Placements  │  │              │  │  Ad Requests    │     │
│ └──────────────┘  └──────────────┘  └─────────────────┘     │
└─────────────────────────────────────────────────────────────┘

Request Flow

  1. Client Request → API validates credentials and parses OpenRTB-like JSON
  2. GeoIP Lookup → Enriches request with country data for targeting
  3. Single-Pass Filter → Optimized filtering applies all criteria in one loop
  4. Operational Checks → Redis queries for rate limits, frequency caps, pacing
  5. Programmatic Integration → Fetch competing bids for programmatic line items
  6. eCPM Calculation → Normalize CPM/CPC to effective CPM for ranking
  7. Priority Ranking → Group by priority, sort by eCPM within tier
  8. Winner Selection → Return highest-ranked eligible creative
  9. Event Tracking → Generate signed URLs for impression/click tracking
  10. Analytics Recording → ClickHouse async insert for real-time reporting
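Step 6 puts CPM and CPC line items on one scale so they can be ranked together. A minimal sketch of that normalization, using the standard conversion eCPM = CPC × estimated CTR × 1,000 (the function name and pricing-model strings are illustrative, not OpenAdServe's actual API):

```python
def ecpm(pricing_model, rate, estimated_ctr=None):
    """Normalize a line item's price to effective CPM (revenue per 1,000 impressions)."""
    if pricing_model == "cpm":
        return rate
    if pricing_model == "cpc":
        # Expected revenue per impression = CPC * CTR; scale to 1,000 impressions.
        return rate * estimated_ctr * 1000
    raise ValueError(f"unknown pricing model: {pricing_model}")

banner = ecpm("cpm", 2.50)          # CPM is already an eCPM
cpc_ad = ecpm("cpc", 0.40, 0.01)    # $0.40 CPC at 1% estimated CTR ≈ $4.00 eCPM
```

This is why CTR estimation matters for CPC campaigns: the estimated CTR directly scales the rank the line item competes with.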

Data Synchronization Strategy

Campaign Data (PostgreSQL → In-Memory)

  • Auto-reload: Configurable interval via RELOAD_INTERVAL (default: 30s)
  • Manual reload: /reload endpoint for immediate synchronization
  • Single-instance model: All campaign data in memory for fast lookups
  • No distributed cache: Horizontal scaling requires custom solutions
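The reload cycle can be pictured as a background loop that replaces the in-memory snapshot wholesale, so readers never see a half-updated state. This is a hedged sketch: `start_reload_loop` and `load_campaigns` are hypothetical names, and only the 30-second default mirrors the RELOAD_INTERVAL behavior described above.

```python
import threading
import time

def start_reload_loop(store, load_campaigns, interval_s=30.0):
    """Periodically refresh the in-memory campaign snapshot from PostgreSQL."""
    def loop():
        while True:
            snapshot = load_campaigns()    # full read of campaign configuration
            store["campaigns"] = snapshot  # single reference swap: no partial state
            time.sleep(interval_s)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

A `/reload`-style manual trigger would simply run the same load-and-swap step on demand instead of waiting for the next tick.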

Operational Counters (Redis Real-Time)

  • Frequency caps: Per-user impression counts with TTL
  • Pacing counters: Daily impression/spend tracking with midnight reset
  • Rate limiting: Token bucket state per line item
  • Graceful degradation: Serving continues if Redis is unavailable, with these features disabled
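A frequency-cap check built on Redis-style INCR/EXPIRE can be sketched as below. The key layout and the `FakeRedis` stand-in are illustrative; the fail-open branch mirrors the graceful-degradation behavior noted above (capping is skipped rather than blocking delivery).

```python
def check_frequency_cap(redis, user_id, line_item_id, cap, ttl_s=86400):
    """Return True if this user is still under the cap; fail open if Redis is down."""
    key = f"freq:{user_id}:{line_item_id}"  # key layout is illustrative
    try:
        count = redis.incr(key)             # atomic: safe under concurrent requests
        if count == 1:
            redis.expire(key, ttl_s)        # start the cap window on first impression
        return count <= cap
    except ConnectionError:
        return True                         # graceful degradation: capping disabled

class FakeRedis:
    """In-memory stand-in for the two Redis calls used above."""
    def __init__(self):
        self.data = {}
    def incr(self, key):
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]
    def expire(self, key, ttl_s):
        pass  # TTL handling elided in this sketch
```

Because INCR is atomic, two concurrent requests can never both observe the same pre-increment count, which is what makes the cap race-free.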

Analytics (ClickHouse Async Insert)

  • Event recording: Impressions, clicks, custom events, ad requests
  • Async inserts: Non-blocking writes with automatic batching
  • Query optimization: Indexed columns for registered dimensions
  • Retention policies: Configurable TTL for event data
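The non-blocking write path amounts to buffering events and flushing them in batches. A simplified sketch; `EventBatcher` stands in for ClickHouse's async-insert batching (in the real system the server itself batches), and the names are illustrative:

```python
class EventBatcher:
    """Collect analytics events and hand them off in batches via insert_fn."""
    def __init__(self, insert_fn, max_batch=1000):
        self.insert_fn = insert_fn  # e.g. a bulk INSERT into ClickHouse
        self.max_batch = max_batch
        self.buffer = []

    def record(self, event):
        self.buffer.append(event)   # O(1): the ad-serving path never waits on the DB
        if len(self.buffer) >= self.max_batch:
            self.flush()

    def flush(self):
        if self.buffer:
            self.insert_fn(self.buffer)
            self.buffer = []
```

Batching trades a small delay in analytics visibility for far fewer, larger inserts, which is the access pattern ClickHouse is optimized for.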

Performance Optimizations

Single-Pass Filtering

Traditional ad servers apply filters sequentially (targeting → size → frequency → pacing), requiring multiple iterations through the candidate set. OpenAdServe's single-pass filter evaluates all criteria in one loop, delivering 6.3x faster filtering.

Benefits:

  • Reduced CPU cycles per request
  • Lower memory allocations
  • Better cache locality
  • Predictable latency profile
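The single-pass idea reduces to this: evaluate every criterion for a candidate before moving to the next candidate, instead of one full sweep per filter stage. A minimal Python sketch with illustrative field names (OpenAdServe's real filter also covers the Redis-backed checks):

```python
def eligible(candidate, request):
    """All static criteria checked in one visit per candidate."""
    return (request["country"] in candidate["countries"]
            and request["size"] in candidate["sizes"]
            and candidate["format"] == request["format"])

def single_pass_filter(candidates, request):
    # One loop over the candidate set; contrast with one loop per filter stage.
    return [c for c in candidates if eligible(c, request)]

candidates = [
    {"countries": {"US", "DE"}, "sizes": {"300x250"}, "format": "banner"},
    {"countries": {"FR"}, "sizes": {"300x250"}, "format": "banner"},
]
request = {"country": "US", "size": "300x250", "format": "banner"}
matches = single_pass_filter(candidates, request)  # only the first candidate
```

Short-circuit evaluation also means a candidate rejected on the first criterion pays nothing for the remaining checks, which is where much of the speedup comes from.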

In-Memory Ad Data Store

Campaign configuration is loaded entirely into memory at startup and on each reload interval. This eliminates database queries during ad selection, enabling sub-millisecond response times.

Trade-offs:

  • Performance: Sub-millisecond ad selection
  • Simplicity: Single-instance deployment model
  • Scalability: Limited by single-server memory (suitable for most publishers)

Redis Operational Counters

Frequency caps, pacing, and rate limits require fast read/write operations across requests. Redis provides:

  • Low latency: Sub-millisecond counter updates
  • Atomic operations: INCR, DECR, EXPIRE for race-free updates
  • TTL support: Automatic cleanup for frequency caps and daily resets
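The token-bucket algorithm used for per-line-item rate limiting can be sketched as follows. This is an in-process illustration (OpenAdServe keeps the bucket state in Redis); the capacity and refill values are examples, not defaults.

```python
import time

class TokenBucket:
    """Allow bursts up to `capacity`, sustained rate of `refill_rate` requests/sec."""
    def __init__(self, capacity, refill_rate, now=time.monotonic):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.tokens = float(capacity)  # bucket starts full
        self.now = now                 # injectable clock for testing
        self.last = now()

    def allow(self):
        t = self.now()
        # Refill proportionally to elapsed time, never above capacity.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.refill_rate)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Once the bucket is empty, requests are rejected until refill catches up, which is exactly the behavior that protects a line item's budget from a traffic spike.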

Scalability Considerations

Vertical Scaling

Add CPU/RAM to a single instance to handle more campaigns and higher QPS:

  • Memory: ~100 MB per 1,000 active line items
  • CPU: Optimized filtering supports 10,000+ QPS on modern CPUs

Horizontal Scaling (Custom Solution Required)

The in-memory data store and Redis counters require distributed coordination:

  • Option 1: Sticky sessions to pin publishers to instances
  • Option 2: Distributed cache (e.g., Redis) for campaign data
  • Option 3: Custom sharding by publisher ID
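Option 3 usually means routing each request deterministically by publisher ID, so every instance only holds its own publishers' campaign data. A hedged sketch of such a routing function (hash choice and naming are assumptions, not part of OpenAdServe):

```python
import hashlib

def shard_for_publisher(publisher_id, num_instances):
    """Deterministically map a publisher to one ad-server instance."""
    digest = hashlib.sha256(publisher_id.encode("utf-8")).digest()
    # Stable across processes and restarts, unlike Python's built-in hash().
    return int.from_bytes(digest[:8], "big") % num_instances
```

A load balancer or reverse proxy would apply this mapping per request; note that changing `num_instances` remaps most publishers, which is why consistent hashing is often preferred for elastic fleets.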

Contact the project maintainer for scaling guidance beyond single-instance deployment.