Configuration

Configure Raceway for your deployment needs with TOML configuration files.

Configuration File

Create raceway.toml in your project root:

toml
[server]
host = "0.0.0.0"
port = 8080
auth_enabled = false
cors_enabled = true

[storage]
backend = "memory"  # or "postgres" or "supabase"

See raceway.toml.example in the repository for a complete annotated configuration file.

Server Configuration

Network Binding

toml
[server]
host = "127.0.0.1"  # localhost only (default, secure)
port = 8080

Production:

toml
[server]
host = "0.0.0.0"    # all interfaces
port = 8080

TLS/HTTPS Support

Native TLS support is not yet implemented. Use a reverse proxy (nginx, Caddy, Traefik) for HTTPS termination. See the Security Guide for details.

Want to help? This is a priority issue for contributors.
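In the meantime, a minimal nginx sketch for terminating HTTPS in front of Raceway might look like the following. The hostname and certificate paths are placeholders, and it assumes Raceway is bound to 127.0.0.1:8080:

```nginx
server {
    listen 443 ssl;
    server_name raceway.example.com;          # placeholder hostname

    ssl_certificate     /etc/ssl/certs/raceway.pem;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/raceway.key;

    location / {
        # Forward to the locally bound Raceway server
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```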

Storage Configuration

In-Memory (Default)

toml
[storage]
backend = "memory"

Pros:

  • Fast
  • No setup
  • No dependencies

Cons:

  • No persistence
  • Limited by RAM

Use for: Development, testing

PostgreSQL

toml
[storage]
backend = "postgres"

[storage.postgres]
connection_string = "postgresql://user:pass@localhost/raceway"
max_connections = 10
min_connections = 2
connection_timeout_seconds = 30
auto_migrate = true

Pros:

  • Persistent
  • Scalable
  • Queryable

Cons:

  • Requires database
  • Slightly slower

Use for: Production

Supabase

toml
[storage]
backend = "supabase"

[storage.postgres]
connection_string = "postgresql://postgres.xxx:pass@aws-0-us-east-1.pooler.supabase.com:5432/postgres"
max_connections = 10

Supabase uses the same PostgreSQL configuration as the postgres backend.

Authentication

Enable API Key Authentication

toml
[server]
auth_enabled = true
api_keys = ["your-secret-key-here"]

SDK configuration:

typescript
const client = new RacewayClient({
  serverUrl: 'http://localhost:8080',
  apiKey: 'your-secret-key-here'
});

python
client = RacewayClient(
    server_url='http://localhost:8080',
    api_key='your-secret-key-here'
)

HTTP requests:

bash
curl -H "Authorization: Bearer your-secret-key-here" \
  http://localhost:8080/api/traces

Multiple API Keys

toml
[server]
auth_enabled = true
api_keys = [
  "key-for-service-a",
  "key-for-service-b",
  "key-for-admin"
]

All keys have equal permissions. Use different keys per service for easier revocation.

CORS Configuration

toml
[server]
cors_enabled = true
cors_origins = [
  "http://localhost:3000",
  "https://app.example.com"
]

Development (allow all):

toml
[server]
cors_enabled = true
cors_origins = ["*"]

Disable CORS:

toml
[server]
cors_enabled = false

Rate Limiting

toml
[server]
rate_limit_enabled = true
rate_limit_rpm = 1000  # requests per minute

Applies globally to all endpoints. Clients exceeding the limit receive 429 Too Many Requests.
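Clients should treat 429 as retryable. A sketch of exponential backoff in Python, where `send` is a placeholder for any function that performs the request and returns an HTTP status code (this is not part of the Raceway SDKs, which may handle retries differently):

```python
import time

def with_retry(send, max_attempts=5, base_delay=0.5):
    """Call send() and retry on 429 with exponential backoff.

    send: placeholder callable returning an HTTP status code.
    Returns the final status code (429 if all attempts are limited).
    """
    for attempt in range(max_attempts):
        status = send()
        if status != 429:
            return status
        # Back off: 0.5s, 1s, 2s, ... before the next attempt
        time.sleep(base_delay * (2 ** attempt))
    return 429
```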

Event Processing

Raceway uses a batched event processing pipeline for optimal database performance. Events from SDKs are buffered in memory, then flushed to storage in batches.

Configuration

toml
[engine]
buffer_size = 10000        # Event queue capacity
batch_size = 100           # Events per batch
flush_interval_ms = 100    # Batch flush interval (milliseconds)

How It Works

  1. SDKs send events → Events arrive at /events endpoint
  2. Buffer in memory → Events queue in a channel (up to buffer_size)
  3. Batch collection → Engine collects up to batch_size events
  4. Flush trigger → Batch flushes when:
    • batch_size events collected, OR
    • flush_interval_ms milliseconds elapsed
  5. Bulk database insert → All events in batch written in a single transaction

Performance impact: Batch processing reduces database operations by ~100-200x compared to individual inserts.
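The steps above can be sketched in Python. This is illustrative only: the real pipeline runs inside the Rust server, and `flush` here stands in for the bulk database insert:

```python
import time
from queue import Queue, Empty

STOP = object()  # sentinel that tells the flusher to drain and exit

def run_flusher(events, flush, batch_size=100, flush_interval_ms=100):
    """Collect events from the queue and call flush(batch) whenever
    batch_size events accumulate OR flush_interval_ms elapses."""
    batch = []
    deadline = time.monotonic() + flush_interval_ms / 1000
    while True:
        try:
            item = events.get(timeout=max(0.0, deadline - time.monotonic()))
        except Empty:
            item = None  # interval elapsed with no new event
        if item is STOP:
            break
        if item is not None:
            batch.append(item)
        if len(batch) >= batch_size or time.monotonic() >= deadline:
            if batch:
                flush(batch)  # one bulk insert per batch
            batch = []
            deadline = time.monotonic() + flush_interval_ms / 1000
    if batch:
        flush(batch)  # drain remaining events on shutdown
```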

Tuning Guidelines

buffer_size (Default: 10000)

The in-memory event queue capacity.

Increase when:

  • High event volume (>1000 events/sec)
  • Burst traffic patterns
  • Database temporarily slow/unavailable

Decrease when:

  • Memory constrained
  • Low event volume
  • Want faster shutdown (fewer buffered events to flush)

Memory impact: ~1-2 KB per event. 10000 events ≈ 10-20 MB.
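As a quick sanity check, the estimate can be computed directly. The 1.5 KB default below is an assumed midpoint of the 1-2 KB figure:

```python
def buffer_memory_mb(buffer_size, kb_per_event=1.5):
    """Rough worst-case memory for a full event buffer, in MB.

    kb_per_event: assumed average event size (midpoint of 1-2 KB).
    """
    return buffer_size * kb_per_event / 1024
```

With the default `buffer_size = 10000`, this lands in the stated 10-20 MB range; a 50000-event buffer would be roughly 75 MB at the same average size.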

batch_size (Default: 100)

Number of events per database transaction.

Increase (200-500) when:

  • High sustained throughput
  • Database can handle large transactions
  • Optimizing for maximum ingestion rate
  • Using powerful database (4+ cores, SSD)

Decrease (10-50) when:

  • Want lower latency (events visible sooner)
  • Database connection limited
  • Small/embedded database
  • Memory constrained

Trade-off: Larger batches = higher throughput but slightly higher latency.

flush_interval_ms (Default: 100)

Maximum time to wait before flushing a partial batch.

Decrease (50-100ms) when:

  • Want near real-time visibility
  • Low event volume (batches rarely fill)
  • Debugging/development

Increase (200-1000ms) when:

  • Optimizing for throughput over latency
  • Very high event volume
  • Reducing database load

Trade-off: Lower interval = more real-time but more frequent database writes.

Performance Scenarios

High Volume Production (1000+ events/sec)

toml
[engine]
buffer_size = 50000
batch_size = 500
flush_interval_ms = 200

[storage.postgres]
max_connections = 20

  • Large buffer handles bursts
  • Large batches maximize throughput
  • Higher flush interval reduces DB load
  • More connections for concurrent operations

Low Latency Development

toml
[engine]
buffer_size = 1000
batch_size = 10
flush_interval_ms = 50

  • Small buffer (events visible quickly)
  • Small batches (low latency)
  • Fast flush (near real-time)

Memory Constrained

toml
[engine]
buffer_size = 1000
batch_size = 50
flush_interval_ms = 100

[storage.postgres]
max_connections = 5

  • Minimal buffer size
  • Moderate batch size
  • Fewer database connections

Monitoring and Tuning

Signs buffer is too small:

  • Logs show "event buffer full" warnings
  • SDKs report failed event submissions
  • High event loss under load

Signs batch is too large:

  • Database transaction timeouts
  • High memory usage
  • Long flush times in logs

Signs flush interval is too high:

  • Events appear in UI with noticeable delay
  • Traces incomplete until flush occurs

Optimal settings:

  • Buffer rarely fills (check logs)
  • Batches typically full (efficient DB usage)
  • Events visible within acceptable latency

Analysis Settings

Race Detection

toml
[race_detection]
enabled = true

Analyzes conflicting concurrent accesses to shared state.

Anomaly Detection

toml
[anomaly_detection]
enabled = true

Detects performance anomalies and outliers.

Distributed Tracing

toml
[distributed_tracing]
enabled = true

Merges traces across service boundaries using W3C Trace Context.
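The W3C Trace Context wire format is easy to illustrate. This Python sketch builds and parses a `traceparent` header per the spec; it shows the format only, not how Raceway's SDKs propagate it:

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C traceparent value: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 16 hex chars
    flags = "01" if sampled else "00"             # bit 0 = sampled
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(value):
    """Parse a traceparent header into its components."""
    m = re.fullmatch(
        r"([0-9a-f]{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", value
    )
    if not m:
        raise ValueError("malformed traceparent")
    _version, trace_id, span_id, flags = m.groups()
    return {
        "trace_id": trace_id,
        "span_id": span_id,
        "sampled": int(flags, 16) & 1 == 1,
    }
```

Services that forward this header with each outbound request let the server stitch their spans into a single cross-service trace.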

Logging

toml
[logging]
level = "info"          # trace, debug, info, warn, error
include_modules = false # Include Rust module names in logs

Log levels:

  • trace: Very verbose, includes all internal operations
  • debug: Detailed information for debugging
  • info: General informational messages (default)
  • warn: Warning messages
  • error: Error messages only

Development Settings

toml
[development]
cors_allow_all = false

Development-only toggles. Do not use in production.

Example Configurations

Development

toml
[server]
host = "127.0.0.1"
port = 8080
auth_enabled = false
cors_enabled = true
cors_origins = ["*"]

[storage]
backend = "memory"

[logging]
level = "debug"

Production

toml
[server]
host = "0.0.0.0"
port = 8080
auth_enabled = true
api_keys = ["${RACEWAY_API_KEY}"]  # From environment
cors_enabled = true
cors_origins = ["https://app.company.com"]
rate_limit_enabled = true
rate_limit_rpm = 10000

[storage]
backend = "postgres"

[storage.postgres]
connection_string = "postgresql://raceway:${DB_PASSWORD}@db:5432/raceway"
max_connections = 20
auto_migrate = true

[engine]
buffer_size = 50000
batch_size = 500

[race_detection]
enabled = true

[anomaly_detection]
enabled = true

[distributed_tracing]
enabled = true

[logging]
level = "info"
include_modules = false

Configuration Reference

[server]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| host | string | "127.0.0.1" | Network interface to bind |
| port | u16 | 8080 | TCP port to listen on |
| verbose | bool | false | Enable verbose output |
| cors_enabled | bool | true | Enable CORS middleware |
| cors_origins | array | ["*"] | Allowed CORS origins |
| rate_limit_enabled | bool | false | Enable rate limiting |
| rate_limit_rpm | u32 | 1000 | Requests per minute limit |
| auth_enabled | bool | false | Require API key authentication |
| api_keys | array | [] | Valid API keys |

[storage]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| backend | string | "memory" | Storage backend: memory, postgres, supabase |

[storage.postgres]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| connection_string | string | none | PostgreSQL connection URL |
| max_connections | u32 | 10 | Maximum pool size |
| min_connections | u32 | 2 | Minimum pool size |
| connection_timeout_seconds | u32 | 30 | Connection timeout |
| auto_migrate | bool | true | Auto-run migrations on startup |

[engine]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| buffer_size | usize | 10000 | Event buffer capacity |
| batch_size | usize | 100 | Events per batch |
| flush_interval_ms | u64 | 100 | Batch flush interval |

[race_detection]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| enabled | bool | true | Enable race detection |

[anomaly_detection]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| enabled | bool | true | Enable anomaly detection |

[distributed_tracing]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| enabled | bool | false | Enable distributed tracing |

[logging]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| level | string | "info" | Log level |
| include_modules | bool | false | Include module names |

[development]

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| cors_allow_all | bool | false | Development CORS override |

Released under the MIT License.