System Overview
Lasso RPC is an Elixir/OTP application that provides blockchain RPC provider orchestration with intelligent routing, circuit breaker protection, and real-time observability.
Core Design Principles
Geo-Distributed Independence
- Each Lasso node operates autonomously with complete, isolated supervision trees
- Routing decisions based on local latency measurements only
- No cluster coordination in the request hot path
- Single nodes work standalone without clustering
- Independent routing configurations per profile
- Isolated metrics and circuit breakers per (profile, chain) pair
- Shared provider infrastructure for efficiency
- Unified pipeline routes across HTTP and WebSocket
- Real-time performance-based transport selection
- Method-specific benchmarking per transport
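Performance-based transport selection can be pictured with per-method benchmarks: given a recent average latency for each (method, transport) pair, the faster measured transport wins. A minimal sketch; the module and data shape are illustrative, not the actual Lasso API:

```elixir
defmodule TransportSelectSketch do
  @moduledoc "Illustrative only: picking a transport from per-method benchmarks."

  # `bench` maps {method, transport} to a recent average latency in ms.
  # A transport with no measurement yet loses to any measured one
  # (numbers sort before the :infinity atom in Erlang term order).
  def pick_transport(bench, method) do
    Enum.min_by([:http, :ws], fn t -> Map.get(bench, {method, t}, :infinity) end)
  end
end
```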
Geo-Distributed Proxy Design
Lasso is designed for global deployments where each node serves traffic from its region while optionally sharing observability data.
Regional Latency Awareness
Provider performance varies significantly by geography. Lasso’s passive benchmarking reveals which providers are fastest from each region:
- Applications connect to their nearest Lasso node
- Each node independently measures provider latency
- Routing optimizes for regional performance
Observability-First Clustering
When enabled, clustering aggregates metrics for operational visibility without impacting routing:
- Unified dashboard view across regions
- Per-region provider performance comparison
- Topology monitoring (node health)
- No routing impact - clustering is purely observational
OTP Supervision Architecture
The supervision tree provides fault tolerance through hierarchical process supervision:
Key Components
Provider Instance Management
Lasso.Providers.Catalog
- Pure module (not a GenServer) for O(1) provider lookups
- Maps profiles to shared provider instances
- Builds ETS catalog from ConfigStore, swaps via persistent_term atomically
- Instance deduplication: same URL + chain = same instance across profiles
- Per-instance supervisor for shared infrastructure
- Started under InstanceDynamicSupervisor
- Children: CircuitBreaker (HTTP), CircuitBreaker (WS), WSConnection, InstanceSubscriptionManager
- Shared across all profiles using the same upstream
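Instance deduplication can be sketched as deriving the instance identity from the upstream URL plus chain, so two profiles pointing at the same upstream resolve to the same instance. The module name and hashing choice below are illustrative assumptions, not the actual Lasso implementation:

```elixir
defmodule InstanceKeySketch do
  @moduledoc "Illustrative only: dedup key for provider instances."

  # Same URL + chain always hashes to the same id, so profiles using the
  # same upstream share one instance (and its circuit breakers and WS connection).
  def instance_id(url, chain) do
    :crypto.hash(:sha256, [to_string(chain), "|", url])
    |> Base.encode16(case: :lower)
  end
end
```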
Health Monitoring
Lasso.Providers.ProbeCoordinator
- Per-chain health probe coordinator (one per unique chain)
- 200ms tick cycle with exponential backoff on failures
- Probes one instance per tick to avoid thundering herd
- Writes health status to the :lasso_instance_state ETS table
| Consecutive Failures | Backoff |
|---|---|
| 0-1 | 0 (probe on next tick) |
| 2 | 2 seconds |
| 3 | 4 seconds |
| 4 | 8 seconds |
| 5 | 16 seconds |
| 6+ | 30 seconds (capped) |
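The schedule above is powers of two capped at 30 seconds. A sketch of how the coordinator might compute it (the function name is an assumption):

```elixir
defmodule BackoffSketch do
  @cap_ms 30_000

  @doc "Milliseconds to wait before re-probing after `failures` consecutive failures."
  # 0-1 failures: probe on the next tick; from 2 on, double each time, capped.
  def backoff_ms(failures) when failures <= 1, do: 0

  def backoff_ms(failures) do
    min(round(:math.pow(2, failures - 1)) * 1_000, @cap_ms)
  end
end
```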
Provider Selection
Lasso.Providers.CandidateListing
- Pure ETS reads (no GenServer) for minimal latency
- 7-stage filter pipeline (see Provider Selection)
- Returns candidates with availability, circuit state, and rate limit status
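The shape of the returned candidates suggests a filter over rows read from ETS. A minimal sketch of one such stage, keeping only usable providers; this is not the actual 7-stage pipeline, and the field names are assumptions:

```elixir
defmodule CandidateFilterSketch do
  @moduledoc "Illustrative only: filtering candidate rows read from ETS."

  # Keep only providers that are available, whose circuit is closed,
  # and that are not currently rate limited.
  def viable(candidates) do
    Enum.filter(candidates, fn c ->
      c.available? and c.circuit_state == :closed and not c.rate_limited?
    end)
  end
end
```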
Profile-Scoped Supervision
ProfileChainSupervisor
- Top-level dynamic supervisor for (profile, chain) pairs
- Enables independent lifecycle per configuration
- Hot-add/remove chains without restarts
- Per-(profile, chain) supervisor providing policy isolation
- Children:
- TransportRegistry: HTTP/WS channel discovery
- ClientSubscriptionRegistry: WebSocket fan-out to clients
- UpstreamSubscriptionPool: Multiplexes client subscriptions
- StreamSupervisor: Per-subscription continuity management
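The child layout above can be sketched as a per-(profile, chain) child list; each pair starts its own copies of these components, which is what gives profiles policy isolation. The function and the child-spec shapes are illustrative assumptions:

```elixir
defmodule ProfileChainSketch do
  @moduledoc "Illustrative only: child layout for one {profile, chain} pair."

  # Every {profile, chain} pair gets independent instances of each component,
  # keyed by the pair, so no state is shared between profiles at this level.
  def children(profile, chain) do
    key = {profile, chain}

    [
      {TransportRegistry, key},
      {ClientSubscriptionRegistry, key},
      {UpstreamSubscriptionPool, key},
      {StreamSupervisor, key}
    ]
  end
end
```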
Profile System
Multi-tenancy via profiles enables isolated routing configurations.
URL Routing
Profile Isolation
Each (profile, chain) pair runs in an isolated supervision tree:
- Independent circuit breakers
- Isolated metrics and benchmarking
- Separate rate limits
- Dedicated WebSocket subscriptions
Shared across profiles for efficiency:
- Provider instances (URL + chain hash)
- Circuit breakers (shared across profiles)
- WebSocket connections (shared across profiles)
- Block height tracking (shared across profiles)
Block Height Monitoring
Lasso tracks blockchain state using a dual-strategy approach:
HTTP Polling (Always Running)
- Bounded observation delay (probe_interval_ms)
- Reliable foundation for lag calculation
- Continues during WebSocket failures
WebSocket Subscription (Optional)
- Sub-second block notifications when healthy
- Degrades gracefully to HTTP on failure
- Provider-specific via subscribe_new_heads: true
Block heights are tracked per (chain, instance_id):
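A minimal sketch of that tracking: keep the newest observed height per {chain, instance_id}, with the time it was seen, so lag can later be computed against the consensus head. The module and data shape are assumptions for illustration:

```elixir
defmodule HeightTrackerSketch do
  @moduledoc "Illustrative only: latest observed height per {chain, instance_id}."

  # Record an observation, keeping only the highest height seen for each key;
  # a stale (lower) report never overwrites a newer one.
  def record(heights, chain, instance_id, height, now_ms) do
    Map.update(heights, {chain, instance_id}, {height, now_ms}, fn {old, _} = prev ->
      if height > old, do: {height, now_ms}, else: prev
    end)
  end
end
```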
Optimistic Lag Calculation
Compensates for observation delay on fast chains.
WebSocket Subscription Management
Intelligent multiplexing with automatic failover.
Multiplexing
100 clients subscribing to eth_subscribe("newHeads") share a single upstream subscription.
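The bookkeeping behind this can be sketched as a map from topic to subscriber set: only the first subscriber triggers an upstream eth_subscribe, and later clients piggyback on the existing subscription. Module and return shapes here are illustrative assumptions:

```elixir
defmodule MultiplexSketch do
  @moduledoc "Illustrative only: many clients sharing one upstream subscription."

  # `subs` maps a topic (e.g. "newHeads") to the set of subscribed client pids.
  # Returns :new_upstream when an upstream subscription must be created,
  # :reuse when the existing one can be shared.
  def subscribe(subs, topic, client) do
    case Map.fetch(subs, topic) do
      {:ok, clients} -> {:reuse, Map.put(subs, topic, MapSet.put(clients, client))}
      :error -> {:new_upstream, Map.put(subs, topic, MapSet.new([client]))}
    end
  end
end
```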
Failover with Gap-Filling
On provider failure mid-stream:
1. StreamCoordinator detects the failure
2. Computes the gap: last_seen_block to current head
3. GapFiller backfills missed blocks via HTTP
4. Injects backfilled events into the stream
5. Subscribes to a new provider
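The gap computation in step 2 reduces to a block range: everything after the last block delivered to clients, up to the new provider's current head. A small sketch (module and function names are assumptions):

```elixir
defmodule GapFillSketch do
  @moduledoc "Illustrative only: block range to backfill after a failover."

  # Blocks the clients missed during the failover; an empty list means
  # the stream is already caught up and nothing needs backfilling.
  def missing_blocks(last_seen_block, current_head) when current_head > last_seen_block do
    Enum.to_list((last_seen_block + 1)..current_head)
  end

  def missing_blocks(_last_seen_block, _current_head), do: []
end
```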
Cluster Topology & Aggregation
When BEAM clustering is enabled, nodes form a topology-aware cluster:
Node Lifecycle States
| State | Description |
|---|---|
| :connected | Erlang distribution established |
| :discovering | Region identification in progress |
| :responding | Passes health checks, region known |
| :unresponsive | Connected but failing health checks |
| :disconnected | Previously connected, now offline |
Dashboard Event Streaming
EventStream aggregates real-time events for dashboard LiveViews:
- Subscribes to: topology changes, routing decisions, circuit events, block sync
- Batches events (50ms intervals, max 100 per batch)
- Computes per-provider metrics grouped by region
- Broadcasts to LiveView subscribers
Aggregation settings:
- Cache TTL: 15 seconds
- RPC timeout: 5 seconds
- Aggregation: Weighted averages by call volume
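Weighting averages by call volume means a busy node contributes proportionally more to the cluster-wide figure than a quiet one. A sketch of that aggregation (module, function, and sample shape are assumptions):

```elixir
defmodule AggregationSketch do
  @moduledoc "Illustrative only: weighting per-node latencies by call volume."

  # Each sample is {avg_latency_ms, call_count} reported by one node.
  # Returns nil when no calls were made anywhere, avoiding division by zero.
  def weighted_avg_latency(samples) do
    {weighted_sum, total_calls} =
      Enum.reduce(samples, {0, 0}, fn {latency, calls}, {sum, n} ->
        {sum + latency * calls, n + calls}
      end)

    if total_calls == 0, do: nil, else: weighted_sum / total_calls
  end
end
```

For example, a node averaging 100ms over 900 calls and one averaging 200ms over 100 calls aggregate to 110ms, not the unweighted 150ms.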
Performance Characteristics
Overhead
| Operation | Latency | Notes |
|---|---|---|
| Context creation | <1ms | Single struct allocation |
| Provider selection | 2-5ms | ETS lookups + scoring |
| Benchmarking update | <1ms | Async ETS write |
| Circuit breaker check | <0.1ms | GenServer call |
| Request observability | <5ms | Async logger |
| Total overhead | ~10ms | End-to-end added latency |
Scalability
- Concurrent requests: 10,000+ simultaneous (BEAM lightweight processes)
- Subscriptions per upstream: 1,000+ clients per upstream subscription
- Memory per request: <1KB (RequestContext + temporary state)
- ETS table scans: <1ms P99 (consensus height calculation)
Configuration
Profiles are loaded from config/profiles/*.yml:
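The exact schema depends on the release; the fragment below is a hypothetical profile showing the kinds of settings discussed in this overview (every key name and value here is an assumption, not the documented schema):

```yaml
# config/profiles/default.yml (illustrative only; field names are assumptions)
chains:
  ethereum:
    routing_strategy: fastest
    providers:
      - url: https://rpc.example.com
        subscribe_new_heads: true
```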
Next Steps
Routing Strategies
Learn about :fastest, :load_balanced, and :latency_weighted routing
Provider Selection
Understand the 7-stage filter pipeline
Circuit Breakers
Explore fault tolerance and automatic recovery
Profiles
Configure multi-tenant routing policies