Operations Overview

Guides for operating and managing OpenAdServe in development and production environments. This section covers local development workflows, campaign trafficking, and common operational tasks.

Operational Guides

  • Development - Local setup and development workflows

    • Docker Compose setup
    • Running tests
    • Building and running locally
    • GeoIP database setup
    • Pre-commit hooks and linting
    • Code quality tools (goimports, golangci-lint)
    • Common development commands
  • Trafficking - Campaign and line item management

    • Admin UI overview (development tool only)
    • Entity setup workflow (Publishers → Placements → Campaigns → Line Items → Creatives)
    • Creative formats (HTML, banner, native)
    • Targeting configuration and dimension validation
    • Budget and pacing settings
    • Programmatic line item setup

Common Operational Workflows

Initial Development Setup

Get OpenAdServe running locally in minutes:

# 1. Clone repository
git clone https://github.com/your-org/openadserve.git
cd openadserve

# 2. Start infrastructure with Docker Compose
docker compose up -d

# 3. Generate demo data
docker compose exec openadserve go run ./tools/fake_data

# 4. Test ad request
curl -X POST http://localhost:8787/ad \
  -H "X-API-Key: demo123" \
  -H "Content-Type: application/json" \
  -d '{
    "id": "req-123",
    "imp": [{"id": "imp-1", "tagid": "header"}],
    "user": {"id": "user-456"},
    "ext": {"publisher_id": 1}
  }'

Campaign Setup Workflow

Standard trafficking process for new campaigns:

  1. Create Publisher (if new)

    • Navigate to http://localhost:8787/static/admin.html
    • Add publisher name and domain
    • Note the generated API key
  2. Define Placements

    • Create placement IDs (e.g., "header", "sidebar")
    • Set dimensions (width × height)
    • Choose formats: html, banner, native
  3. Create Campaign

    • Select publisher
    • Set campaign name and flight dates
    • Configure budget
  4. Add Line Items

    • Set budget type: CPM, CPC, or Flat
    • Configure pricing (CPM/CPC value)
    • Set priority: 1 (high), 2 (medium), 3 (low)
    • Add targeting criteria:
      • Geography (countries)
      • Device types (mobile, desktop, tablet)
      • Registered dimensions (validated against schema)
      • Custom key-values (flexible targeting)
    • Configure pacing: ASAP, Even, or PID
    • Set daily caps (impressions/spend)
    • Enable rate limiting (optional)
  5. Upload Creatives

    • Select placement and line item
    • Choose creative format
    • Provide content:
      • HTML: Raw markup
      • Banner: JSON with image URL, click URL, dimensions
      • Native: JSON with title, body, image, click URL
    • Set advertiser domain

Testing and Validation

Verify campaign configuration before going live:

# 1. Simulate traffic
go run ./tools/traffic_simulator \
  -publisher-id=1 \
  -api-key=demo123 \
  -qps=10 \
  -duration=5m

# 2. Check campaign performance
go run ./tools/campaign_report \
  -campaign-id=1 \
  -days=1

# 3. Query specific events
go run tools/query_events/main.go -id=req-12345

# 4. Monitor metrics
curl http://localhost:8787/metrics | grep openadserve

Campaign Optimization

Monitor and adjust campaign delivery:

Check Delivery Pacing

# View pacing metrics in Grafana or query ClickHouse
SELECT
  toDate(timestamp) AS date,
  COUNT(*) AS impressions,
  SUM(cost) AS spend
FROM events
WHERE campaign_id = 123
  AND event_type = 'impression'
GROUP BY date
ORDER BY date

Adjust Targeting

  1. Review campaign reports to identify underdelivery
  2. Update line item targeting in admin UI
  3. Reload campaign data: curl -X POST http://localhost:8787/reload
  4. Monitor delivery improvement

Optimize CPC Campaigns

# Check CTR performance
SELECT
  line_item_id,
  countIf(event_type = 'impression') AS impressions,
  countIf(event_type = 'click') AS clicks,
  clicks / impressions * 100 AS ctr,
  avg(cost) AS avg_cpm
FROM events
WHERE campaign_id = 123
  AND timestamp >= now() - INTERVAL 7 DAY
GROUP BY line_item_id

Production Operations

Deploy Configuration Changes

# 1. Update environment variables
vim .env.production

# 2. Restart service (Docker Compose example)
docker compose down
docker compose up -d

# 3. Verify health
curl http://localhost:8787/health

Reload Campaign Data

Campaign changes made in PostgreSQL require a reload before they take effect:

# Manual reload
curl -X POST http://localhost:8787/reload

# Or use auto-reload
RELOAD_INTERVAL=30s # Set in environment

Monitor System Health

# Check Prometheus metrics
curl http://localhost:8787/metrics

# Key metrics:
# - openadserve_ad_requests_total
# - openadserve_ad_selection_duration_seconds
# - openadserve_impressions_total
# - openadserve_clicks_total

# View distributed traces in Grafana/Tempo
# Query by trace ID from logs

Database Maintenance

# PostgreSQL backup
pg_dump -h localhost -U postgres adserver > backup.sql

# ClickHouse optimization
# Optimize partitions periodically for query performance
docker compose exec clickhouse clickhouse-client -q \
  "OPTIMIZE TABLE events FINAL"

# Redis memory management
# Monitor memory usage
docker compose exec redis redis-cli INFO memory

Troubleshooting Common Issues

No Ads Served

Possible causes:

  1. Targeting too restrictive
  2. Budget exhausted
  3. Frequency cap reached
  4. Pacing limiting delivery
  5. Rate limit triggered

Debugging steps:

# Enable debug tracing
DEBUG_TRACE=true

# Check logs for filter rejections
docker compose logs openadserve | grep "filtered out"

# Query available line items
psql -h localhost -U postgres adserver -c \
  "SELECT id, name, active, budget_remaining FROM line_items WHERE active = true"

High Latency

Possible causes:

  1. Database connection pool exhaustion
  2. ClickHouse async insert backlog
  3. Programmatic bidder timeouts
  4. Redis connection issues

Debugging steps:

# Check connection pool metrics
curl http://localhost:8787/metrics | grep db_connections

# Review trace spans in Grafana
# Look for slow database queries or external HTTP calls

# Tune connection pools
DB_MAX_OPEN_CONNS=50
CH_MAX_OPEN_CONNS=200

Uneven Delivery (Even Pacing)

Possible causes:

  1. Traffic patterns vary significantly
  2. PID controller needs tuning
  3. Daily cap too restrictive

Adjustments:

# Switch to PID pacing for better control
# Update line item pacing to "PID"

# Tune PID parameters
PID_KP=2.0 # Increase for faster response
PID_KI=0.5 # Increase to eliminate steady-state error
PID_KD=0.2 # Increase to reduce overshoot
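For intuition, a PID pacing controller combines the three knobs above roughly as follows. This is the textbook update rule applied to delivery rates, not necessarily OpenAdServe's exact implementation:

```go
package main

import "fmt"

// PID is a minimal pacing controller sketch. Kp, Ki, Kd correspond to the
// PID_KP/PID_KI/PID_KD environment knobs above.
type PID struct {
	Kp, Ki, Kd float64
	integral   float64
	prevErr    float64
}

// Update returns a correction for a target vs. actual delivery rate
// (e.g. impressions per minute) over a dt-second interval. A positive
// output means "speed up delivery", a negative one means "slow down".
func (p *PID) Update(target, actual, dt float64) float64 {
	err := target - actual
	p.integral += err * dt          // accumulates steady-state error (Ki term)
	deriv := (err - p.prevErr) / dt // reacts to the error's rate of change (Kd term)
	p.prevErr = err
	return p.Kp*err + p.Ki*p.integral + p.Kd*deriv
}

func main() {
	pid := &PID{Kp: 2.0, Ki: 0.5, Kd: 0.2}
	actual := 50.0 // under-delivering against a target of 100
	for i := 0; i < 3; i++ {
		out := pid.Update(100, actual, 1)
		fmt.Printf("step %d: actual=%.1f correction=%+.1f\n", i, actual, out)
		actual += out * 0.1 // pretend a fraction of the correction takes effect
	}
}
```

Raising `Kp` makes each step react harder to the current gap, `Ki` works off the accumulated shortfall, and `Kd` damps the response as the gap closes.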

Dimension Validation Failures

Possible causes:

  1. Unregistered dimension in targeting
  2. Invalid enum value
  3. Number out of range

Resolution:

# Check dimension configuration
cat config/dimensions.yaml

# Regenerate ClickHouse schema if changed
go run tools/generate_clickhouse_schema/main.go \
  -config config/dimensions.yaml \
  -output migrations/dimensions.sql

# Apply schema updates
docker compose exec -T clickhouse clickhouse-client --multiquery < migrations/dimensions.sql

# Reload campaign data
curl -X POST http://localhost:8787/reload
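The three failure modes above map to checks like the following sketch. The `Dimension` struct shape is assumed for illustration, not the real `config/dimensions.yaml` schema:

```go
package main

import "fmt"

// Dimension sketches a registered targeting dimension: either an enum
// with a fixed set of values, or a number with optional range bounds.
type Dimension struct {
	Name string
	Enum []string // allowed values, when the dimension is an enum
	Min  *float64 // range bounds, when the dimension is numeric
	Max  *float64
}

// Validate reproduces the three failure modes listed above: unregistered
// dimension, invalid enum value, and number out of range.
func Validate(registered map[string]Dimension, name, value string, num *float64) error {
	dim, ok := registered[name]
	if !ok {
		return fmt.Errorf("dimension %q is not registered", name)
	}
	if len(dim.Enum) > 0 {
		for _, v := range dim.Enum {
			if v == value {
				return nil
			}
		}
		return fmt.Errorf("%q is not a valid value for %q", value, name)
	}
	if num != nil {
		if (dim.Min != nil && *num < *dim.Min) || (dim.Max != nil && *num > *dim.Max) {
			return fmt.Errorf("value %v for %q is out of range", *num, name)
		}
	}
	return nil
}

func main() {
	min, max := 0.0, 120.0
	dims := map[string]Dimension{
		"device_type": {Name: "device_type", Enum: []string{"mobile", "desktop", "tablet"}},
		"age":         {Name: "age", Min: &min, Max: &max},
	}
	fmt.Println(Validate(dims, "device_type", "mobile", nil)) // <nil>
	fmt.Println(Validate(dims, "device_type", "tv", nil))     // enum rejection
	fmt.Println(Validate(dims, "os", "", nil))                // unregistered dimension
}
```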

Development Best Practices

Pre-commit Hooks

Install automated code quality checks:

# One-time setup
pre-commit install

# Pre-commit runs automatically on commit:
# - goimports: Format and organize imports
# - go vet: Static analysis
# - golangci-lint: Comprehensive linting
# - go mod tidy: Dependency cleanup
# - go test: Run all tests

Testing Strategy

# Run unit tests
go test ./...

# Run with coverage
go test -cover ./...

# Run specific test suite
go test ./internal/logic/selectors/

# Benchmark critical paths
go test -bench=. ./internal/logic/filters/
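Test suites like `./internal/logic/selectors/` and `./internal/logic/filters/` are natural fits for Go's table-driven style. A self-contained sketch of that pattern, using a hypothetical geo filter (not the project's actual filter API):

```go
package main

import "fmt"

// geoFilter reports whether a request's country matches a line item's
// targeted countries; an empty target list means "no geo restriction".
func geoFilter(targeted []string, country string) bool {
	if len(targeted) == 0 {
		return true
	}
	for _, c := range targeted {
		if c == country {
			return true
		}
	}
	return false
}

func main() {
	cases := []struct {
		name     string
		targeted []string
		country  string
		want     bool
	}{
		{"no restriction", nil, "US", true},
		{"match", []string{"US", "DE"}, "DE", true},
		{"no match", []string{"US"}, "FR", false},
	}
	for _, tc := range cases {
		if got := geoFilter(tc.targeted, tc.country); got != tc.want {
			fmt.Printf("FAIL %s: got %v, want %v\n", tc.name, got, tc.want)
			return
		}
	}
	fmt.Println("all cases pass")
}
```

In a real `_test.go` file the loop body would call `t.Run(tc.name, ...)` and `t.Errorf` instead of printing.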

Code Quality

Follow linting rules to avoid pre-commit failures:

// Always check errors
if err := doThing(); err != nil {
    return fmt.Errorf("doThing failed: %w", err)
}

// Or explicitly ignore if truly unnecessary
_ = db.Close()

// Remove unused code or prefix with underscore
func _temporarilyUnused() {
    // ...
}

Operational Metrics

Key Performance Indicators

Monitor these metrics for healthy operation:

  • QPS: Requests per second to /ad endpoint
  • Fill Rate: ad_served / ad_request ratio
  • P50/P95/P99 Latency: Ad selection response times
  • Error Rate: 4xx/5xx responses per second
  • Database Connection Pool: Utilization percentage
  • Redis Memory: Used memory vs available

Alert Thresholds

Recommended alert conditions:

# Prometheus alerting rules
- alert: HighErrorRate
  expr: rate(openadserve_errors_total[5m]) > 10
  for: 5m

- alert: HighLatency
  expr: histogram_quantile(0.95, rate(openadserve_ad_selection_duration_seconds_bucket[5m])) > 0.1
  for: 5m

- alert: LowFillRate
  expr: rate(openadserve_ad_served_total[5m]) / rate(openadserve_ad_requests_total[5m]) < 0.8
  for: 10m