rlbidder

Reinforcement learning auto-bidding library for research and production.


Overview • Who Should Use This • Installation • Quickstart • Benchmarks • API • Citation


📖 Overview

rlbidder is a comprehensive toolkit for training and deploying reinforcement learning agents in online advertising auctions. Built for both industrial scale and research agility, it provides:

  • Complete offline RL pipeline: Rust-powered data processing (Polars) → SOTA algorithms (IQL, CQL, DT, GAVE) → parallel evaluation
  • Modern ML infrastructure: PyTorch Lightning multi-GPU training, experiment tracking, automated reproducibility
  • Production insights: Interactive dashboards for campaign monitoring, market analytics, and agent behavior analysis
  • Research rigor: Statistically robust benchmarking with RLiable metrics, tuned control baselines, and round-robin evaluation

Whether you're deploying bidding systems at scale or researching novel RL methods, rlbidder bridges the gap between academic innovation and production readiness.


🎯 Who Should Use rlbidder?

Researchers looking to experiment with SOTA offline RL algorithms (IQL, CQL, DT, GAVE, GAS) on realistic auction data with rigorous benchmarking.

AdTech Practitioners comparing RL agents against classic baselines (PID, BudgetPacer) before production deployment.


🚀 Key Features & What Makes rlbidder Different

rlbidder pushes beyond conventional RL libraries by integrating cutting-edge techniques from both RL research and modern LLM/transformer architectures. Here's what sets it apart:

Rust-Powered Data Pipeline

  • Standardized workflow: Scan Parquet → RL Dataset → Feature Engineering → DT Dataset, with reproducible artifacts at every stage
  • Polars Lazy API: Streaming data processing with a blazingly fast Rust engine that handles datasets larger than available memory
  • Scalable workflows: Process 100GB+ auction logs efficiently with lazy evaluation and zero-copy operations
  • Feature engineering: Drop-in scikit-learn-style transformers (Symlog, Winsorizer, ReturnScaledReward) for states, actions, and rewards (see the sketch after this list)
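
For intuition, here is a minimal scikit-learn-style sketch of the symlog transform (sign(x) · log(1 + |x|), popularized by DreamerV3); rlbidder's actual SymlogTransformer may differ in naming and options, so treat this as illustrative:

import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin

class SymlogSketch(BaseEstimator, TransformerMixin):
    """Illustrative stand-in for a symlog feature transform (not rlbidder's actual class)."""

    def fit(self, X, y=None):
        # Stateless transform: nothing to estimate.
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=np.float64)
        return np.sign(X) * np.log1p(np.abs(X))

    def inverse_transform(self, X):
        X = np.asarray(X, dtype=np.float64)
        return np.sign(X) * np.expm1(np.abs(X))

The fit/transform/inverse_transform interface is what lets such transformers slot into standard preprocessing pipelines alongside the winsorizer and reward scalers.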

State-of-the-Art RL Algorithms

  • Comprehensive baselines: Classic control (Heuristic, BudgetPacer, PID) and learning-based methods (BC, CQL, IQL, DT, GAVE, GAS)
  • HL-Gauss Distributional RL: Smooth Gaussian-based distributional Q-learning for improved uncertainty quantification, advancing beyond standard categorical approaches
  • Efficient ensemble critics: Leverage torch.vmap for vectorized ensemble operations, much faster than loop-based implementations (see the sketch after this list)
  • Numerically stable stochastic policies: DreamerV3-style SigmoidRangeStd and TorchRL-style BiasedSoftplus to avoid numerical instabilities from exp/log operations
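
The vectorized-ensemble trick follows the standard torch.func recipe, sketched below; how rlbidder's EnsembledQNetwork wraps it internally is an assumption here, and the tiny MLP is purely illustrative:

import torch
from torch import nn
from torch.func import stack_module_state, functional_call

def make_q(state_dim=8, hidden=64):
    return nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

# Stack the parameters of N independent critics into single tensors with a leading ensemble dim.
critics = [make_q() for _ in range(5)]
params, buffers = stack_module_state(critics)
base = make_q().to("meta")  # stateless template used only for its structure

def q_forward(p, b, x):
    return functional_call(base, (p, b), (x,))

x = torch.randn(32, 8)  # a batch of states
# vmap over the ensemble dimension of params/buffers; broadcast the same batch to every critic.
q_values = torch.vmap(q_forward, in_dims=(0, 0, None))(params, buffers, x)
print(q_values.shape)  # (5, 32, 1): one Q estimate per critic per state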

Modern Transformer Stack (LLM-Grade)

  • FlashAttention (SDPA): Uses the PyTorch scaled dot-product attention API for accelerated training (see the sketch after this list)
  • RoPE positional encoding: Rotary positional embeddings for improved sequence length generalization, adopted from modern LLMs
  • QK-Norm: Query-key normalization for enhanced training stability at scale
  • SwiGLU: Gated feed-forward layers (as used in modern LLMs) in place of standard MLP blocks
  • Efficient inference: DTInferenceBuffer with deque-based temporal buffering for online Decision Transformer deployment
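
For reference, the SDPA entry point is a single PyTorch call that dispatches to FlashAttention or memory-efficient kernels when hardware and dtypes allow; a minimal causal-attention example with illustrative shapes:

import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim)
q = torch.randn(2, 8, 128, 64)
k = torch.randn(2, 8, 128, 64)
v = torch.randn(2, 8, 128, 64)

# Causal masking is what Decision Transformer-style sequence models need.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])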

Simulated Online Evaluation & Visualization

  • Parallel evaluation: Multi-process evaluators with pre-loaded data per worker, much faster than sequential benchmarking
  • Robust testing: Round-robin agent rotation with multi-seed evaluation for statistically reliable comparisons
  • Tuned competitors: Classic control methods (BudgetPacer, PID) with optimized hyperparameters as baselines
  • Interactive dashboards: Production-ready Plotly visualizations with market structure metrics (HHI, Gini, volatility) and RLiable metrics (see the sketch after this list)
  • Industrial analytics: Campaign health monitoring, budget pacing diagnostics, auction dynamics, and score distribution analysis
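
The market-structure metrics above are the standard ones; as a reference point, here is a minimal NumPy sketch of HHI and Gini over per-agent spend (rlbidder.viz computes and plots these for you, so this is only for intuition):

import numpy as np

def hhi(spend: np.ndarray) -> float:
    # Herfindahl-Hirschman Index: sum of squared market shares, in [1/n, 1].
    shares = spend / spend.sum()
    return float(np.sum(shares ** 2))

def gini(spend: np.ndarray) -> float:
    # Gini coefficient of spend concentration: 0 = perfectly equal, -> 1 = one agent dominates.
    x = np.sort(np.asarray(spend, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * np.sum(cum) / cum[-1]) / n)

spend = np.array([120.0, 80.0, 40.0, 10.0])  # made-up per-agent spend in one period
print(hhi(spend), gini(spend))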

Modern ML Engineering Stack

  • Modular design: Enables both production readiness and rapid prototyping
  • PyTorch Lightning: Reduced boilerplate, automatic mixed precision, and gradient accumulation
  • Draccus configuration: Type-safe dataclass-to-CLI with hierarchical configs, dot-notation overrides, and zero boilerplate (see the sketch after this list)
  • Local experiment tracking: AIM for experiment management without external cloud dependencies
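
To make the configuration flow concrete, here is a minimal sketch of the dataclass-to-CLI pattern using draccus's wrap decorator; the config classes and defaults are illustrative, though the field names mirror the CLI flags used in the Quickstart below:

from dataclasses import dataclass, field
import draccus

@dataclass
class ModelConfig:
    lr: float = 3e-4
    num_q_models: int = 5

@dataclass
class TrainConfig:
    max_steps: int = 100_000
    enable_aim_logger: bool = True

@dataclass
class ExperimentConfig:
    model_cfg: ModelConfig = field(default_factory=ModelConfig)
    train_cfg: TrainConfig = field(default_factory=TrainConfig)

@draccus.wrap()
def main(cfg: ExperimentConfig):
    # Nested fields are overridable from the CLI with dot notation,
    # e.g. `python train.py --model_cfg.lr 1e-4 --train_cfg.enable_aim_logger=False`.
    print(cfg)

if __name__ == "__main__":
    main()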

Comparison with AuctionNet

| Feature | AuctionNet | rlbidder |
| --- | --- | --- |
| Data Engine | Pandas | Polars Lazy (Rust) ✨ |
| Configuration | argparse | Draccus (dataclass-to-CLI) ✨ |
| Distributional RL | ❌ | HL-Gauss ✨ |
| Ensemble Method | ❌ | torch.vmap ✨ |
| Transformer Attention | Standard | SDPA/FlashAttention ✨ |
| Positional Encoding | Learned | RoPE ✨ |
| Policy Stability | exp(log_std) | SigmoidRangeStd/BiasedSoftplus ✨ |
| Parallel Evaluation | ❌ | ProcessPool + Round-robin ✨ |
| Visualization | ❌ | Production Dashboards ✨ |

📊 Benchmarking Results

We evaluate all agents using rigorous statistical methods across multiple delivery periods with round-robin testing and multi-seed evaluation. The evaluation protocol follows RLiable best practices for statistically reliable algorithm comparison.
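
If you want to recompute the aggregate statistics yourself, the rliable library exposes them directly; a minimal sketch with placeholder scores (algorithm names and array shapes are made up, and the call pattern assumes rliable's documented API):

import numpy as np
from rliable import library as rly, metrics

# Per-algorithm score matrices of shape (num_runs, num_tasks),
# e.g. seeds x delivery periods; the values here are random placeholders.
score_dict = {
    "IQL": np.random.rand(5, 2),
    "DT": np.random.rand(5, 2),
}

# Interquartile mean (IQM) with stratified-bootstrap confidence intervals.
aggregate_func = lambda scores: np.array([metrics.aggregate_iqm(scores)])
point_estimates, interval_estimates = rly.get_interval_estimates(
    score_dict, aggregate_func, reps=2000
)
print(point_estimates, interval_estimates)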

Score Distribution Analysis
Violin plots showing performance distributions across agents and seeds.

Mean Performance Comparison
Aggregated performance metrics with confidence intervals.

RLiable Statistical Metrics
Performance profiles and aggregate metrics following RLiable best practices.


📈 Interactive Dashboards & Gallery

Beyond raw performance metrics, rlbidder helps you understand why agents behave the way they do. Production-grade interactive dashboards summarize policy behavior, campaign health, and auction dynamics for both research insights and production monitoring.

Auction market analysis
Market concentration, volatility, and competitiveness.

Campaign analysis (CQL)
Segment-level delivery quality and conversion outcomes.

Budget pacing (CQL)
Daily spend pacing and CPA stabilization diagnostics.

Auction metrics scatterplots
Spend, conversion, ROI, and win-rate trade-offs.


🚀 Getting Started

Installation

Prerequisites

  • Python 3.11 or newer
  • PyTorch 2.6 or newer (follow the PyTorch install guide)
  • GPU with 8GB+ VRAM recommended for training

Local Development

git clone https://github.com/zuoxingdong/rlbidder.git
cd rlbidder
pip install -e .

Quickstart

Follow the steps below to reproduce the full offline RL workflow on processed campaign data.

Step 1: Data Preparation

# Download sample competition data (periods 7-8 and trajectory 1)
bash scripts/download_raw_data.sh -p 7-8,traj1 -d data/raw

# Convert raw CSV to Parquet (faster I/O with Polars)
python scripts/convert_csv_to_parquet.py --raw_data_dir=data/raw

# Build evaluation-period parquet files
python scripts/build_eval_dataset.py --data_dir=data

# Create training transitions (trajectory format for offline RL)
python scripts/build_transition_dataset.py --data_dir=data --mode=trajectory

# Fit scalers for state, action, and reward normalization
python scripts/scale_transitions.py --data_dir=data --output_dir=scaled_transitions

# Generate Decision Transformer trajectories with return-to-go
python scripts/build_dt_dataset.py \
  --build.data_dir=data \
  --build.reward_type=reward_dense \
  --build.use_scaled_reward=true

What you'll have: Preprocessed datasets in data/processed/ and fitted scalers in data/scaled_transitions/ ready for training.
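
To sanity-check the artifacts, you can peek at them with the Polars Lazy API; the glob below is an assumption about the exact file layout, so adjust it to what Step 1 wrote on your machine:

import polars as pl

# Lazily scan the processed artifacts without loading everything into memory.
lf = pl.scan_parquet("data/processed/*.parquet")
print(lf.limit(5).collect())          # peek at a few rows
print(lf.select(pl.len()).collect())  # total row count without materializing the data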

Step 2: Train Agents

# Train IQL (Implicit Q-Learning) - value-based offline RL
python examples/train_iql.py \
  --model_cfg.lr_actor 3e-4 \
  --model_cfg.lr_critic 3e-4 \
  --model_cfg.num_q_models 5 \
  --model_cfg.bc_alpha 0.01 \
  --train_cfg.enable_aim_logger=False

# Train DT (Decision Transformer) - sequence modeling for RL
python examples/train_dt.py \
  --model_cfg.embedding_dim 512 \
  --model_cfg.num_layers 6 \
  --model_cfg.lr 1e-4 \
  --model_cfg.rtg_scale 98 \
  --model_cfg.target_rtg 2.0 \
  --train_cfg.enable_aim_logger=False

What you'll have: Trained model checkpoints in examples/checkpoints/ with scalers and hyperparameters.
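
Checkpoints are regular Lightning checkpoints, so they can be restored outside the training script for inspection or custom evaluation; a minimal sketch (import path taken from the Module Guide below, checkpoint path from Step 3):

from rlbidder.agents import IQLModel  # class name as listed in the Module Guide

# load_from_checkpoint is standard Lightning; it restores the hyperparameters saved at training time.
model = IQLModel.load_from_checkpoint("examples/checkpoints/iql/best.ckpt")
model.eval()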

💡 Configuration powered by draccus: All training scripts use type-safe dataclass configs with automatic CLI generation. Override any nested config with dot-notation (e.g., --model_cfg.lr 1e-4) or pass config files directly.

💡 Track experiments with Aim: All training scripts automatically log metrics, hyperparameters, and model artifacts to Aim (a local experiment tracker). To use Aim, first initialize your project with:

aim init

Then launch the Aim UI to visualize training progress:

aim up --port 43800

Open http://localhost:43800 in your browser to explore training curves, compare runs, and analyze hyperparameter configurations.

Step 3: Evaluate in Simulated Auctions

# Evaluate IQL agent with parallel multi-seed evaluation
python examples/evaluate_agents.py \
  --evaluation.data_dir=data \
  --evaluation.evaluator_type=OnlineCampaignEvaluator \
  --evaluation.delivery_period_indices=[7,8] \
  --evaluation.num_seeds=5 \
  --evaluation.num_workers=8 \
  --evaluation.output_dir=examples/eval \
  --agent.agent_class=IQLBiddingAgent \
  --agent.model_dir=examples/checkpoints/iql \
  --agent.checkpoint_file=best.ckpt

What you'll have: Evaluation reports, campaign summaries, and auction histories in examples/eval/ ready for visualization.

Next steps: Generate dashboards with examples/performance_visualization.ipynb or explore the evaluation results with Polars DataFrames.
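
For quick ad-hoc analysis without the notebook, the evaluation outputs can be loaded directly with Polars; the file glob and column names below are hypothetical, so adapt them to what evaluate_agents.py actually wrote:

import polars as pl

# Hypothetical file glob and column names, purely for illustration.
reports = pl.read_parquet("examples/eval/*.parquet")
summary = (
    reports
    .group_by("agent_name")
    .agg(pl.col("score").mean().alias("mean_score"))
    .sort("mean_score", descending=True)
)
print(summary)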


📦 Module Guide

Each module handles a specific aspect of the RL bidding pipeline:

| Module | Description | Key Classes/Functions |
| --- | --- | --- |
| 📚 rlbidder.agents | Offline RL agents and control baselines | IQLModel, CQLModel, DTModel, GAVEModel, BudgetPacerBiddingAgent |
| 🔧 rlbidder.data | Data processing, scalers, and datasets | OfflineDataModule, TrajDataset, SymlogTransformer, WinsorizerTransformer |
| 🏪 rlbidder.envs | Auction simulation and value sampling | OnlineAuctionEnv, ValueSampler, sample_conversions |
| 🎯 rlbidder.evaluation | Multi-agent evaluation and metrics | ParallelOnlineCampaignEvaluator, OnlineCampaignEvaluator |
| 🧠 rlbidder.models | Neural network building blocks | StochasticActor, EnsembledQNetwork, NormalHead, HLGaussLoss |
| 📊 rlbidder.viz | Interactive dashboards and analytics | create_campaign_dashboard, create_market_dashboard, plot_rliable_metrics |
| 🛠️ rlbidder.utils | Utilities and helpers | set_seed, log_distribution, regression_report |

πŸ—οΈ Architecture

The library follows a modular design with clear separation of concerns. Data flows from raw logs through preprocessing, training, and evaluation to final visualization:

flowchart TD
    subgraph Data["📦 Data Pipeline"]
        direction TB
        raw["Raw Campaign Data<br/><i>CSV/Parquet logs</i>"]
        scripts["Build Scripts<br/>convert β€’ build_eval<br/>build_transition β€’ scale"]
        artifacts["πŸ“ Preprocessed Artifacts<br/>processed/ β€’ scaled_transitions/<br/><i>Parquet + Scalers</i>"]
        
        raw -->|transform| scripts
        scripts -->|generate| artifacts
    end

    subgraph Core["⚙️ Core Library Modules"]
        direction TB
        data_mod["<b>rlbidder.data</b><br/>OfflineDataModule<br/>TrajDataset β€’ ReplayBuffer<br/>πŸ”§ <i>Handles batching & scaling</i>"]
        models["<b>rlbidder.models</b><br/>StochasticActor β€’ EnsembledQNetwork<br/>ValueNetwork β€’ Losses β€’ Optimizers<br/>🧠 <i>Agent building blocks</i>"]
        agents["<b>rlbidder.agents</b><br/>IQLModel β€’ CQLModel β€’ DTModel<br/>πŸ“š <i>LightningModule implementations</i>"]
        
        agents -->|composes| models
    end

    subgraph Training["🔥 Training Pipeline"]
        direction TB
        train["<b>examples/train_iql.py</b><br/>πŸŽ›οΈ Config + CLI<br/><i>Orchestration script</i>"]
        trainer["⚑ Lightning Trainer<br/>fit() β€’ validate()<br/><i>Multi-GPU support</i>"]
        ckpt["πŸ’Ύ Model Checkpoints<br/>best.ckpt β€’ last.ckpt<br/><i>+ scalers + hparams</i>"]
        
        train -->|instantiates| data_mod
        train -->|instantiates| agents
        train -->|launches| trainer
        trainer -->|saves| ckpt
    end

    subgraph Eval["🎯 Online Evaluation"]
        direction TB
        evaluator["<b>rlbidder.evaluation</b><br/>OnlineCampaignEvaluator<br/>ParallelEvaluator<br/>πŸ”„ <i>Multi-seed, round-robin</i>"]
        env["<b>rlbidder.envs</b><br/>Auction Simulator<br/>πŸͺ <i>Multi-agent market</i>"]
        results["πŸ“ˆ Evaluation Results<br/>Campaign Reports β€’ Agent Summaries<br/>Auction Histories<br/><i>Polars DataFrames</i>"]
        
        evaluator -->|simulates| env
        env -->|produces| results
    end

    subgraph Viz["📊 Visualization & Analysis"]
        direction TB
        viz["<b>rlbidder.viz</b><br/>Plotly Dashboards<br/>Market Metrics<br/>🎨 <i>Interactive HTML</i>"]
        plots["πŸ“‰ Production Dashboards<br/>Campaign Health β€’ Market Structure<br/>Budget Pacing β€’ Scatter Analysis"]
        
        viz -->|renders| plots
    end

    artifacts ==>|loads| data_mod
    artifacts -.->|eval data| evaluator
    ckpt ==>|load_from_checkpoint| evaluator
    results ==>|consumes| viz

    classDef dataStyle fill:#1565c0,stroke:#0d47a1,stroke-width:3px,color:#fff,font-weight:bold
    classDef coreStyle fill:#ef6c00,stroke:#e65100,stroke-width:3px,color:#fff,font-weight:bold
    classDef trainStyle fill:#6a1b9a,stroke:#4a148c,stroke-width:3px,color:#fff,font-weight:bold
    classDef evalStyle fill:#2e7d32,stroke:#1b5e20,stroke-width:3px,color:#fff,font-weight:bold
    classDef vizStyle fill:#c2185b,stroke:#880e4f,stroke-width:3px,color:#fff,font-weight:bold
    
    class Data,raw,scripts,artifacts dataStyle
    class Core,data_mod,models,agents coreStyle
    class Training,train,trainer,ckpt trainStyle
    class Eval,evaluator,env,results evalStyle
    class Viz,viz,plots vizStyle

Design Principles:

  • 🔌 Modular - Each component is independently usable and testable
  • ⚡ Scalable - Polars + Lightning enable massive datasets and efficient training
  • 🔄 Reproducible - Deterministic seeding, configuration management, and evaluation (see the sketch below)
  • 🚀 Production-ready - Type hints, error handling, logging, and monitoring built-in
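
As a concrete example of the reproducibility principle, a run typically starts by seeding every RNG once; a minimal sketch using the helper listed in the Module Guide (exact signature assumed):

from rlbidder.utils import set_seed  # helper listed in the Module Guide above

# Seed Python/NumPy/PyTorch RNGs once at the start of a run (signature assumed).
set_seed(42)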

🤝 Contributing

  • 🌟 Star the repo if you find it useful
  • 🔀 Fork and submit PRs for bug fixes or new features
  • 📝 Improve documentation and add examples
  • 🧪 Add tests for new functionality

🌟 Acknowledgments

rlbidder builds upon ideas from AuctionNet, DreamerV3, TorchRL, and the RLiable evaluation methodology, among others.


πŸ“ Citation

If you use rlbidder in your work, please cite it using the BibTeX entry below.

@misc{zuo2025rlbidder,
  author = {Zuo, Xingdong},
  title = {RLBidder: Reinforcement learning auto-bidding library for research and production},
  year = {2025},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/zuoxingdong/rlbidder}}
}

License

MIT License. See LICENSE.