Webhook Throughput Quick Start Guide¶
10K+ Webhooks/Sec with Redis Queue & Worker Pool¶
- Target: 10,000+ webhooks/second
- Architecture: Redis queue + async worker pool
- Latency: <5ms average, <25ms P99
Quick Setup (5 Minutes)¶
1. Install Redis¶
# macOS
brew install redis
redis-server
# Ubuntu/Debian
sudo apt-get install redis-server
sudo systemctl start redis
# Docker
docker run -d -p 6379:6379 redis:7-alpine
2. Add Dependencies¶
[dependencies]
heliosdb-webhooks = "0.6"
tokio = { version = "1.40", features = ["full"] }
redis = { version = "0.27", features = ["tokio-comp"] }
async-trait = "0.1"
anyhow = "1"
serde_json = "1"
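With the dependencies in place, it can be worth confirming that your app can reach Redis before wiring up the queue. A minimal connectivity check (it assumes the `tokio`, `redis`, and `anyhow` crates listed above; nothing here is specific to heliosdb-webhooks):

```rust
use redis::Client;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Same URL the webhook queue will use later
    let client = Client::open("redis://127.0.0.1:6379")?;
    let mut conn = client.get_multiplexed_async_connection().await?;

    // PING returns "PONG" if Redis is up and reachable
    let pong: String = redis::cmd("PING").query_async(&mut conn).await?;
    println!("Redis responded: {pong}");
    Ok(())
}
```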
3. Basic Usage¶
use heliosdb_webhooks::{
    QueueConfig, QueuedWebhook, WebhookQueue, WorkerPool, WorkerPoolConfig,
    WebhookProcessor, WebhookRequest, WebhookResponse,
};
use std::sync::Arc;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
// 1. Create Redis queue
let queue_config = QueueConfig {
redis_url: "redis://127.0.0.1:6379".to_string(),
max_workers: 100,
batch_size: 100,
enable_priority: true,
..Default::default()
};
let queue = Arc::new(WebhookQueue::new(queue_config).await?);
// 2. Create webhook processor
    let processor = Arc::new(MyWebhookProcessor);
// 3. Create worker pool
let pool_config = WorkerPoolConfig {
num_workers: 100,
batch_size: 100,
enable_batch: true,
..Default::default()
};
let pool = WorkerPool::new(queue.clone(), processor, pool_config);
// 4. Start processing
pool.start().await?;
    // 5. Enqueue webhooks (the second argument is the priority;
    //    create_webhook_request() stands in for building your WebhookRequest)
let request = create_webhook_request();
queue.enqueue(request, 10).await?;
// 6. Monitor metrics
let stats = pool.stats().await;
println!("Throughput: {} req/s", stats.current_throughput);
Ok(())
}
// Implement your custom processor
struct MyWebhookProcessor;
#[async_trait::async_trait]
impl WebhookProcessor for MyWebhookProcessor {
    async fn process(&self, webhook: QueuedWebhook) -> anyhow::Result<WebhookResponse> {
// Your processing logic here
Ok(WebhookResponse {
status_code: 200,
headers: Default::default(),
body: serde_json::json!({"status": "ok"}),
processing_time_ms: 1,
})
}
}
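As written, `main` returns right after printing stats, which drops the pool and queue. In a long-running service you typically keep the process alive until a shutdown signal arrives. A sketch using tokio's Ctrl+C handler (how the pool behaves when it is dropped is library-specific, so treat this as a shape rather than a shutdown protocol):

```rust
use std::sync::Arc;
use heliosdb_webhooks::{QueueConfig, WebhookQueue, WorkerPool, WorkerPoolConfig};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let queue = Arc::new(WebhookQueue::new(QueueConfig::default()).await?);
    let processor = Arc::new(MyWebhookProcessor);
    let pool = WorkerPool::new(queue.clone(), processor, WorkerPoolConfig::default());
    pool.start().await?;

    // Keep the workers draining the queue until Ctrl+C / SIGINT
    tokio::signal::ctrl_c().await?;
    println!("Shutdown signal received, exiting");
    Ok(())
}
```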
Performance Tuning¶
Configuration Guide¶
| Workers | Batch Size | Throughput | Use Case |
|---|---|---|---|
| 10 | 1 | ~1K/s | Development |
| 50 | 50 | ~5K/s | Small production |
| 100 | 100 | ~10K/s | Production (recommended) |
| 200 | 500 | ~50K/s | High throughput |
| 500 | 1000 | ~150K/s | Extreme scale |
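The worker counts above are starting points, not hard rules: webhook delivery is mostly I/O-bound, so async workers can heavily oversubscribe the CPU. One way to derive an initial `num_workers` from the machine and then adjust against the table (the 16-workers-per-core ratio is an assumed starting point, not a measured optimum):

```rust
use std::thread::available_parallelism;
use heliosdb_webhooks::WorkerPoolConfig;

fn initial_pool_config() -> WorkerPoolConfig {
    // Oversubscribe cores for I/O-bound delivery; tune the ratio under load.
    let cores = available_parallelism().map(|n| n.get()).unwrap_or(4);
    let workers = (cores * 16).clamp(10, 500);

    WorkerPoolConfig {
        num_workers: workers,
        batch_size: 100,
        enable_batch: true,
        ..Default::default()
    }
}
```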
Redis Optimization¶
# redis.conf
maxmemory 4gb
maxmemory-policy allkeys-lru
save "" # Disable RDB for performance
appendonly no # Disable AOF for max throughput
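If you cannot edit redis.conf (for example, a stock container image), the same settings can be applied to a running instance with CONFIG SET. A sketch using the `redis` crate from the dependencies above:

```rust
use redis::Client;

// Mirrors the redis.conf tuning above at runtime.
async fn tune_redis(url: &str) -> anyhow::Result<()> {
    let client = Client::open(url)?;
    let mut conn = client.get_multiplexed_async_connection().await?;

    // Memory cap and eviction policy
    let _: () = redis::cmd("CONFIG").arg("SET").arg("maxmemory").arg("4gb")
        .query_async(&mut conn).await?;
    let _: () = redis::cmd("CONFIG").arg("SET").arg("maxmemory-policy").arg("allkeys-lru")
        .query_async(&mut conn).await?;

    // Disable persistence (accepts losing queued data if Redis restarts)
    let _: () = redis::cmd("CONFIG").arg("SET").arg("save").arg("")
        .query_async(&mut conn).await?;
    let _: () = redis::cmd("CONFIG").arg("SET").arg("appendonly").arg("no")
        .query_async(&mut conn).await?;
    Ok(())
}
```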
Worker Pool Tuning¶
use std::time::Duration;

let pool_config = WorkerPoolConfig {
num_workers: 100, // Scale based on CPU cores
batch_size: 100, // Higher = more throughput, higher latency
enable_batch: true, // ALWAYS enable for production
poll_interval: Duration::from_micros(100), // Lower = lower latency
auto_scale: true, // Enable for variable loads
min_workers: 50,
max_workers: 200,
..Default::default()
};
Batch Operations¶
Batch Enqueue (1000x faster)¶
// Single enqueue: ~100 ops/sec
for i in 0..1000 {
queue.enqueue(request.clone(), 10).await?;
}
// Batch enqueue: ~100,000 webhooks/sec
let batch: Vec<_> = (0..1000)
.map(|_| (request.clone(), 10))
.collect();
queue.enqueue_batch(batch).await?;
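For volumes larger than a single batch, a common pattern is to chunk the requests and call `enqueue_batch` per chunk. A sketch that continues the Basic Usage setup (`build_pending_requests()` is a hypothetical stand-in for wherever your requests come from; it assumes `WebhookRequest` is `Clone`, as the examples above already do):

```rust
// Enqueue 10,000 webhooks in chunks of 1,000, all at priority 10.
let requests: Vec<WebhookRequest> = build_pending_requests(); // hypothetical helper
for chunk in requests.chunks(1_000) {
    let batch: Vec<_> = chunk.iter().cloned().map(|req| (req, 10)).collect();
    queue.enqueue_batch(batch).await?;
}
```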
Batch Processing¶
#[async_trait::async_trait]
impl WebhookProcessor for MyProcessor {
    // Process a whole batch in one call (more efficient than per-webhook)
    async fn process_batch(&self, webhooks: Vec<QueuedWebhook>)
        -> anyhow::Result<Vec<WebhookResponse>>
    {
        // Single database transaction for all webhooks
        // (database_batch_insert is your own persistence helper)
        let responses = database_batch_insert(webhooks).await?;
        Ok(responses)
    }
}
Monitoring & Metrics¶
Real-Time Metrics¶
// Worker pool stats
let stats = pool.stats().await;
println!("Active Workers: {}", stats.active_workers);
println!("Throughput: {:.0} req/s", stats.current_throughput);
println!("Avg Latency: {:.2}ms", stats.avg_latency_ms);
println!("Success Rate: {:.2}%", stats.success_rate());
// Queue metrics
let metrics = queue.metrics().await?;
println!("Queue Size: {}", metrics.queue_size);
println!("Processing: {}", metrics.processing_size);
println!("DLQ Size: {}", metrics.dlq_size);
println!("Total Completed: {}", metrics.total_completed);
Alerting Thresholds¶
// Alert if queue is backing up
if metrics.queue_size > 5000 {
alert("Queue depth high - scale up workers");
}
// Alert if DLQ is growing
if metrics.dlq_size > 100 {
alert("High failure rate - investigate");
}
// Alert if latency is high
if stats.avg_latency_ms > 10.0 {
alert("High latency - check Redis/processing");
}
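These checks are most useful when they run continuously rather than once. A sketch of a background task that samples the stats on an interval, using only the `stats()`/`metrics()` accessors shown above (`alert()` stands in for your notification hook, and the pool/queue handles are assumed to be shareable across tasks via `Arc`):

```rust
use std::{sync::Arc, time::Duration};
use heliosdb_webhooks::{WebhookQueue, WorkerPool};

// Spawn alongside pool.start(); samples metrics every 10 seconds.
fn spawn_monitor(pool: Arc<WorkerPool>, queue: Arc<WebhookQueue>) {
    tokio::spawn(async move {
        let mut ticker = tokio::time::interval(Duration::from_secs(10));
        loop {
            ticker.tick().await;

            let stats = pool.stats().await;
            if stats.avg_latency_ms > 10.0 {
                alert("High latency - check Redis/processing");
            }

            if let Ok(metrics) = queue.metrics().await {
                if metrics.queue_size > 5000 {
                    alert("Queue depth high - scale up workers");
                }
                if metrics.dlq_size > 100 {
                    alert("High failure rate - investigate");
                }
            }
        }
    });
}
```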
Production Deployment¶
High Availability Setup¶
# docker-compose.yml
version: '3.8'
services:
  redis-master:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    command: redis-server --maxmemory 4gb --maxmemory-policy allkeys-lru
    volumes:
      - redis-data:/data

  redis-replica:
    image: redis:7-alpine
    ports:
      - "6380:6379"
    command: redis-server --replicaof redis-master 6379

  webhook-workers:
    image: heliosdb-webhooks:latest
    deploy:
      replicas: 3
    environment:
      - REDIS_URL=redis://redis-master:6379
      - NUM_WORKERS=100
      - BATCH_SIZE=100

volumes:
  redis-data:
Kubernetes Deployment¶
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-workers
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webhook-workers
  template:
    metadata:
      labels:
        app: webhook-workers
    spec:
      containers:
        - name: webhook-worker
          image: heliosdb-webhooks:latest
          env:
            - name: REDIS_URL
              value: "redis://redis-service:6379"
            - name: NUM_WORKERS
              value: "100"
            - name: BATCH_SIZE
              value: "100"
          resources:
            requests:
              memory: "2Gi"
              cpu: "2000m"
            limits:
              memory: "4Gi"
              cpu: "4000m"
Benchmarking¶
Run Benchmarks¶
# Install cargo-criterion (optional; plain cargo bench also works)
cargo install cargo-criterion
# Run webhook benchmarks
cd heliosdb-webhooks
cargo bench --bench throughput_benchmark
# Run specific benchmark
cargo bench --bench throughput_benchmark -- sustained_throughput
Expected Results¶
single_webhook/enqueue
                        time:   [8.2 ms 8.5 ms 8.9 ms]
                        thrpt:  [112 elem/s 117 elem/s 121 elem/s]

batch_operations/batch_enqueue/1000
                        time:   [9.8 ms 10.2 ms 10.7 ms]
                        thrpt:  [93.4K elem/s 98.0K elem/s 102K elem/s]

sustained_throughput/10k_per_sec_target
                        time:   [58.1 s 60.0 s 62.1 s]
                        thrpt:  [10.2K req/s 10.5K req/s 10.8K req/s]
Troubleshooting¶
Issue: Low Throughput (<1K/s)¶
Solution (config sketch after the list):
1. Enable batch mode: enable_batch: true
2. Increase batch size: batch_size: 100
3. Increase workers: num_workers: 100
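Applied to the worker pool configuration from the tuning section, the three changes look like this:

```rust
let pool_config = WorkerPoolConfig {
    enable_batch: true, // 1. batch mode on
    batch_size: 100,    // 2. larger batches per poll
    num_workers: 100,   // 3. more concurrent workers
    ..Default::default()
};
```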
Issue: High Latency (>50ms)¶
Solution:
1. Reduce poll interval: poll_interval: Duration::from_micros(50)
2. Check Redis latency: redis-cli --latency
3. Use localhost Redis (avoid network latency)
Issue: DLQ Growing¶
Solution (sketch after the list):
1. Check webhook processing logic for errors
2. Increase retry count: max_retries: 5
3. Add error logging in processor
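A sketch combining points 2 and 3, assuming the retry count lives on `QueueConfig` as `max_retries` (as referenced above) and with `deliver()` as a hypothetical stand-in for your actual processing logic:

```rust
// 2. Give transient failures more retries before they land in the DLQ.
let queue_config = QueueConfig {
    max_retries: 5,
    ..Default::default()
};

// 3. Log failures inside the processor before returning the error.
#[async_trait::async_trait]
impl WebhookProcessor for MyWebhookProcessor {
    async fn process(&self, webhook: QueuedWebhook) -> anyhow::Result<WebhookResponse> {
        match deliver(&webhook).await {
            Ok(response) => Ok(response),
            Err(err) => {
                eprintln!("webhook delivery failed: {err}");
                Err(err)
            }
        }
    }
}
```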
Issue: Memory Usage High¶
Solution:
1. Reduce batch size: batch_size: 50
2. Configure Redis maxmemory
3. Reduce worker count if CPU is idle
Next Steps¶
- Implement Custom Processor: Extend the WebhookProcessor trait
- Add Monitoring: Integrate with Prometheus/Grafana
- Configure Alerts: Set up alerting for queue depth, DLQ, latency
- Load Test: Run sustained load tests to find limits
- Scale Horizontally: Add more worker instances
Last Updated: November 14, 2025 · Version: v0.6.0 · Status: Production Ready ✓