Configuration
Configure Raceway for your deployment with a TOML configuration file.
Configuration File
Create raceway.toml in your project root:
[server]
host = "0.0.0.0"
port = 8080
auth_enabled = false
cors_enabled = true

[storage]
backend = "memory" # or "postgres" or "supabase"

See raceway.toml.example in the repository for a complete annotated configuration file.
Server Configuration
Network Binding
[server]
host = "127.0.0.1" # localhost only (default, secure)
port = 8080

Production:
[server]
host = "0.0.0.0" # all interfaces
port = 8080

TLS/HTTPS Support
Native TLS support is not yet implemented. Use a reverse proxy (nginx, Caddy, Traefik) for HTTPS termination. See the Security Guide for details.
Want to help? This is a priority issue for contributors.
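Until then, a common pattern is to terminate TLS at the proxy and forward plain HTTP to Raceway bound to localhost. The sketch below is one possible nginx server block, not a verified Raceway-specific recipe; the domain, certificate paths, and upstream port are placeholders for your environment:

server {
    listen 443 ssl;
    server_name raceway.example.com;

    ssl_certificate     /etc/ssl/certs/raceway.pem;
    ssl_certificate_key /etc/ssl/private/raceway.key;

    location / {
        # Forward to Raceway listening on localhost only
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}

With a proxy in front, keep host = "127.0.0.1" so Raceway is only reachable through the proxy.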
Storage Configuration
In-Memory (Default)
[storage]
backend = "memory"Pros:
- Fast
- No setup
- No dependencies
Cons:
- No persistence
- Limited by RAM
Use for: Development, testing
PostgreSQL
[storage]
backend = "postgres"
[storage.postgres]
connection_string = "postgresql://user:pass@localhost/raceway"
max_connections = 10
min_connections = 2
connection_timeout_seconds = 30
auto_migrate = true

Pros:
- Persistent
- Scalable
- Queryable
Cons:
- Requires database
- Slightly slower
Use for: Production
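Raceway connects with whatever credentials you put in connection_string; creating the database and role is up to you. A minimal provisioning sketch, assuming a self-managed PostgreSQL instance and placeholder names matching the example above:

# Run as a PostgreSQL superuser; role name and password are placeholders
psql -c "CREATE ROLE raceway_user WITH LOGIN PASSWORD 'change-me';"
psql -c "CREATE DATABASE raceway OWNER raceway_user;"

With auto_migrate = true, migrations run on startup, so the role Raceway connects as must be able to create tables in that database (owning the database, as above, covers this).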
Supabase
[storage]
backend = "supabase"
[storage.postgres]
connection_string = "postgresql://postgres.xxx:pass@aws-0-us-east-1.pooler.supabase.com:5432/postgres"
max_connections = 10

Supabase uses the same PostgreSQL configuration as the postgres backend.
Authentication
Enable API Key Authentication
[server]
auth_enabled = true
api_keys = ["your-secret-key-here"]SDK configuration:
TypeScript:

const client = new RacewayClient({
  serverUrl: 'http://localhost:8080',
  apiKey: 'your-secret-key-here'
});

Python:

client = RacewayClient(
    server_url='http://localhost:8080',
    api_key='your-secret-key-here'
)

HTTP requests:
curl -H "Authorization: Bearer your-secret-key-here" \
http://localhost:8080/api/tracesMultiple API Keys
[server]
auth_enabled = true
api_keys = [
  "key-for-service-a",
  "key-for-service-b",
  "key-for-admin"
]

All keys have equal permissions. Use different keys per service for easier revocation.
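To rotate a key without downtime, one straightforward approach is to run old and new keys side by side while clients are updated. This is only a config sketch with placeholder key names; confirm how your deployment picks up configuration changes (for example, by restarting the server):

[server]
auth_enabled = true
api_keys = [
  "key-for-service-a",      # old key, remove once service A is migrated
  "key-for-service-a-new",  # replacement key for service A
  "key-for-service-b"
]

Once every client for service A uses the new key, delete the old entry from the list.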
CORS Configuration
[server]
cors_enabled = true
cors_origins = [
  "http://localhost:3000",
  "https://app.example.com"
]

Development (allow all):
[server]
cors_enabled = true
cors_origins = ["*"]Disable CORS:
[server]
cors_enabled = falseRate Limiting
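A quick way to verify the origin list from the command line is a preflight request. This assumes the CORS middleware answers OPTIONS preflights in the usual way; the origin below is a placeholder:

curl -i -X OPTIONS http://localhost:8080/api/traces \
  -H "Origin: https://app.example.com" \
  -H "Access-Control-Request-Method: GET"

If the origin is allowed, the response should include an Access-Control-Allow-Origin header echoing it back.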
Rate Limiting

[server]
rate_limit_enabled = true
rate_limit_rpm = 1000 # requests per minute

Applies globally to all endpoints. Clients exceeding the limit receive 429 Too Many Requests.
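Clients should treat 429 as a signal to back off and retry. Below is a minimal sketch using plain fetch; it is not part of the Raceway SDKs and assumes only that the server returns HTTP 429 when the per-minute budget is exhausted:

// Hypothetical helper: retry a request a few times when the server answers 429.
async function postWithRetry(url: string, body: unknown, apiKey: string, attempts = 3): Promise<Response> {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}`,
      },
      body: JSON.stringify(body),
    });
    if (res.status !== 429) return res;
    // Simple linear backoff; adjust to your rate limit window
    if (i < attempts - 1) await new Promise((resolve) => setTimeout(resolve, 1000 * (i + 1)));
  }
  throw new Error('Rate limited: 429 received on every attempt');
}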
Event Processing
Raceway uses a batched event processing pipeline for optimal database performance. Events from SDKs are buffered in memory, then flushed to storage in batches.
Configuration
[engine]
buffer_size = 10000 # Event queue capacity
batch_size = 100 # Events per batch
flush_interval_ms = 100 # Batch flush interval (milliseconds)

How It Works

- SDKs send events → Events arrive at the /events endpoint
- Buffer in memory → Events queue in a channel (up to buffer_size)
- Batch collection → Engine collects up to batch_size events
- Flush trigger → Batch flushes when batch_size events have been collected, OR flush_interval_ms milliseconds have elapsed
- Bulk database insert → All events in the batch are written in a single transaction
Performance impact: Batch processing reduces database operations by ~100-200x compared to individual inserts.
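As a back-of-the-envelope check: at 1,000 events/sec with the default batch_size = 100, the engine issues roughly 10 bulk inserts per second instead of 1,000 individual ones, which is where the ~100x figure comes from; larger batches push the ratio higher.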
Tuning Guidelines
buffer_size (Default: 10000)
The in-memory event queue capacity.
Increase when:
- High event volume (>1000 events/sec)
- Burst traffic patterns
- Database temporarily slow/unavailable
Decrease when:
- Memory constrained
- Low event volume
- Want faster shutdown (fewer buffered events to flush)
Memory impact: ~1-2 KB per event. 10000 events ≈ 10-20 MB.
batch_size (Default: 100)
Number of events per database transaction.
Increase (200-500) when:
- High sustained throughput
- Database can handle large transactions
- Optimizing for maximum ingestion rate
- Using powerful database (4+ cores, SSD)
Decrease (10-50) when:
- Want lower latency (events visible sooner)
- Database connection limited
- Small/embedded database
- Memory constrained
Trade-off: Larger batches = higher throughput but slightly higher latency.
flush_interval_ms (Default: 100)
Maximum time to wait before flushing a partial batch.
Decrease (50-100ms) when:
- Want near real-time visibility
- Low event volume (batches rarely fill)
- Debugging/development
Increase (200-1000ms) when:
- Optimizing for throughput over latency
- Very high event volume
- Reducing database load
Trade-off: Lower interval = more real-time but more frequent database writes.
Performance Scenarios
High Volume Production (1000+ events/sec)
[engine]
buffer_size = 50000
batch_size = 500
flush_interval_ms = 200
[storage.postgres]
max_connections = 20

- Large buffer handles bursts
- Large batches maximize throughput
- Higher flush interval reduces DB load
- More connections for concurrent operations
Low Latency Development
[engine]
buffer_size = 1000
batch_size = 10
flush_interval_ms = 50

- Small buffer (events visible quickly)
- Small batches (low latency)
- Fast flush (near real-time)
Memory Constrained
[engine]
buffer_size = 1000
batch_size = 50
flush_interval_ms = 100
[storage.postgres]
max_connections = 5

- Minimal buffer size
- Moderate batch size
- Fewer database connections
Monitoring and Tuning
Signs buffer is too small:
- Logs show "event buffer full" warnings
- SDKs report failed event submissions
- High event loss under load
Signs batch is too large:
- Database transaction timeouts
- High memory usage
- Long flush times in logs
Signs flush interval is too high:
- Events appear in UI with noticeable delay
- Traces incomplete until flush occurs
Optimal settings:
- Buffer rarely fills (check logs)
- Batches typically full (efficient DB usage)
- Events visible within acceptable latency
Analysis Settings
Race Detection
[race_detection]
enabled = true

Analyzes conflicting concurrent accesses to shared state.
Anomaly Detection
[anomaly_detection]
enabled = true

Detects performance anomalies and outliers.
Distributed Tracing
[distributed_tracing]
enabled = true

Merges traces across service boundaries using W3C Trace Context.
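The context that ties spans together travels in the standard W3C traceparent header; a request between your services carries something like the following (the IDs are placeholder values taken from the W3C specification):

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01

The format is version-traceid-parentid-flags. As long as each service forwards this header (how each Raceway SDK propagates it is outside the scope of this page), spans emitted on both sides share a trace ID and can be merged into one distributed trace.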
Logging
[logging]
level = "info" # trace, debug, info, warn, error
include_modules = false # Include Rust module names in logs

Log levels:

- trace: Very verbose, includes all internal operations
- debug: Detailed information for debugging
- info: General informational messages (default)
- warn: Warning messages
- error: Error messages only
Development Settings
[development]
cors_allow_all = false

Development-only toggles. Do not use in production.
Example Configurations
Development
[server]
host = "127.0.0.1"
port = 8080
auth_enabled = false
cors_enabled = true
cors_origins = ["*"]
[storage]
backend = "memory"
[logging]
level = "debug"Production
[server]
host = "0.0.0.0"
port = 8080
auth_enabled = true
api_keys = ["${RACEWAY_API_KEY}"] # From environment
cors_enabled = true
cors_origins = ["https://app.company.com"]
rate_limit_enabled = true
rate_limit_rpm = 10000
[storage]
backend = "postgres"
[storage.postgres]
connection_string = "postgresql://raceway:${DB_PASSWORD}@db:5432/raceway"
max_connections = 20
auto_migrate = true
[engine]
buffer_size = 50000
batch_size = 500
[race_detection]
enabled = true
[anomaly_detection]
enabled = true
[distributed_tracing]
enabled = true
[logging]
level = "info"
include_modules = false

Configuration Reference
[server]
| Field | Type | Default | Description |
|---|---|---|---|
| host | string | "127.0.0.1" | Network interface to bind |
| port | u16 | 8080 | TCP port to listen on |
| verbose | bool | false | Enable verbose output |
| cors_enabled | bool | true | Enable CORS middleware |
| cors_origins | array | ["*"] | Allowed CORS origins |
| rate_limit_enabled | bool | false | Enable rate limiting |
| rate_limit_rpm | u32 | 1000 | Requests per minute limit |
| auth_enabled | bool | false | Require API key authentication |
| api_keys | array | [] | Valid API keys |
[storage]
| Field | Type | Default | Description |
|---|---|---|---|
| backend | string | "memory" | Storage backend: memory, postgres, supabase |
[storage.postgres]
| Field | Type | Default | Description |
|---|---|---|---|
| connection_string | string | none | PostgreSQL connection URL |
| max_connections | u32 | 10 | Maximum pool size |
| min_connections | u32 | 2 | Minimum pool size |
| connection_timeout_seconds | u32 | 30 | Connection timeout |
| auto_migrate | bool | true | Auto-run migrations on startup |
[engine]
| Field | Type | Default | Description |
|---|---|---|---|
| buffer_size | usize | 10000 | Event buffer capacity |
| batch_size | usize | 100 | Events per batch |
| flush_interval_ms | u64 | 100 | Batch flush interval |
[race_detection]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable race detection |
[anomaly_detection]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | true | Enable anomaly detection |
[distributed_tracing]
| Field | Type | Default | Description |
|---|---|---|---|
| enabled | bool | false | Enable distributed tracing |
[logging]
| Field | Type | Default | Description |
|---|---|---|---|
| level | string | "info" | Log level |
| include_modules | bool | false | Include module names |
[development]
| Field | Type | Default | Description |
|---|---|---|---|
| cors_allow_all | bool | false | Development CORS override |
Next Steps
- Storage Options - Choose storage backend
- Security - Secure your deployment
- Getting Started - Initial setup
