Why Lasso
Choosing a single RPC provider has real UX and reliability consequences. Performance varies by region, method, and hour, and API inconsistencies make a “one URL” setup brittle. Lasso makes the RPC layer programmable and resilient:
- Geo-distributed proxy where RPC requests are routed to the closest Lasso node
- Each node independently measures real latencies and health
- Route each call to the best provider for that region, chain, method, and transport
- Get redundancy without brittle application code
- Scale throughput by adding providers instead of replatforming
See it Live: Lasso.sh
These docs cover the core Lasso engine (OSS). Lasso.sh adds full key management and agent-driven auth & management APIs. See the agent docs at lasso.sh/llms.txt.
Quickstart
Get Lasso running in under 5 minutes
Installation
Local setup, Docker, and prerequisites
Configuration
Profiles, chains, providers, and routing strategies
API Reference
Complete HTTP and WebSocket endpoint documentation
Key features
Multi-provider, multi-chain routing
Proxy JSON-RPC requests across multiple providers with automatic failover. Support for HTTP + WebSocket on Ethereum, Base, Arbitrum, and any EVM chain.
Routing strategies
Choose how requests are routed to providers:
- fastest - Route to the lowest-latency provider for each method
- load-balanced - Distribute requests across healthy providers
- latency-weighted - Weighted random selection by latency scores
- Provider override - Route directly to a specific provider
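To illustrate the latency-weighted strategy, here is a minimal weighted random selection sketch in Python. The provider names and latencies are made up for the example, and Lasso itself is implemented in Elixir; this only demonstrates the idea of giving faster providers proportionally more traffic:

```python
import random

def pick_provider(latency_ms: dict) -> str:
    """Weighted random selection: lower measured latency => higher weight.

    `latency_ms` maps provider name -> latency for the current
    (chain, method, transport). Names here are hypothetical.
    """
    # Invert latency so faster providers receive proportionally more requests.
    weights = {p: 1.0 / ms for p, ms in latency_ms.items()}
    r = random.uniform(0, sum(weights.values()))
    for provider, w in weights.items():
        r -= w
        if r <= 0:
            return provider
    return provider  # guard against floating-point rounding at the boundary

latencies = {"provider_a": 40.0, "provider_b": 80.0, "provider_c": 20.0}
```

With these example numbers, `provider_c` (20 ms) receives roughly four times the traffic of `provider_b` (80 ms), while slower providers still see enough requests to keep their latency measurements fresh.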
Method-aware benchmarking
Latency is tracked per provider × method × transport. Different providers excel at different workloads, so Lasso learns which provider is fastest for eth_getLogs vs eth_call vs eth_blockNumber.
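The per provider × method × transport bookkeeping can be sketched as an exponentially weighted moving average keyed on that triple. This is an illustrative Python model, not Lasso's actual Elixir/ETS implementation, and the smoothing factor is an assumption:

```python
class LatencyTracker:
    """Sketch of per (provider, method, transport) latency scoring."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # weight given to the newest sample (assumed value)
        self.scores = {}     # (provider, method, transport) -> smoothed ms

    def record(self, provider: str, method: str, transport: str, ms: float):
        """Fold a new latency sample into the smoothed score for this triple."""
        key = (provider, method, transport)
        prev = self.scores.get(key)
        self.scores[key] = ms if prev is None else (
            self.alpha * ms + (1 - self.alpha) * prev
        )

    def fastest(self, method: str, transport: str):
        """Return the provider with the lowest score for this method/transport."""
        candidates = {
            p: ms for (p, m, t), ms in self.scores.items()
            if m == method and t == transport
        }
        return min(candidates, key=candidates.get) if candidates else None
```

Because scores are keyed on the full triple, one provider can win eth_getLogs over HTTP while another wins eth_call, which is exactly the method-aware behavior described above.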
Resilience built-in
- Circuit breakers prevent cascade failures
- Automatic retries and transport-aware failover
- Health probing excludes unhealthy providers from routing
- WebSocket gap-filling backfills missed events via HTTP on upstream failure
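A circuit breaker like the one listed above can be sketched in a few lines. This is a generic pattern illustration with assumed thresholds, not Lasso's actual breaker: after a run of consecutive failures the provider is excluded from routing for a cooldown period, then allowed a trial request (half-open):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker sketch (thresholds are assumptions)."""

    def __init__(self, max_failures: int = 5, cooldown: float = 30.0):
        self.max_failures = max_failures  # consecutive failures before opening
        self.cooldown = cooldown          # seconds before a half-open probe
        self.failures = 0
        self.opened_at = None             # None means the circuit is closed

    def allow(self, now: float = None) -> bool:
        """May this provider receive a request right now?"""
        now = time.monotonic() if now is None else now
        if self.opened_at is None:
            return True
        # Half-open: permit one probe once the cooldown has elapsed.
        return now - self.opened_at >= self.cooldown

    def record_success(self):
        self.failures = 0
        self.opened_at = None             # close the circuit again

    def record_failure(self, now: float = None):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic() if now is None else now
```

Keeping the breaker per provider is what prevents cascade failures: one misbehaving upstream is cut out of the rotation while healthy providers keep serving traffic.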
WebSocket subscriptions
Multiplexing with optional gap-filling: multiple newHeads subscribers share a single upstream subscription. On provider failure, Lasso backfills missed blocks via HTTP and seamlessly switches providers.
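The gap-computation step of that failover can be sketched as follows. This is an illustrative function, not Lasso's code; it only determines which block numbers need an HTTP backfill (fetching each one, e.g. via eth_getBlockByNumber, is left out):

```python
def missed_blocks(last_seen: int, first_new_head: int) -> range:
    """Block numbers to backfill via HTTP after a subscription failover.

    `last_seen` is the last head delivered by the failed upstream;
    `first_new_head` is the first head from the replacement provider,
    which the new subscription delivers on its own. Everything strictly
    between the two must be fetched over HTTP so subscribers see a
    gapless stream.
    """
    return range(last_seen + 1, first_new_head)
```

If the replacement provider's first head immediately follows the last delivered one, the range is empty and no backfill is needed.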
Profiles for multi-tenancy
Isolated configs, state, and metrics for dev/staging/prod, multi-tenant setups, or experiments.
LiveView dashboard
Real-time observability with provider status, routing decisions, latency metrics, and cluster-wide aggregation:
- Provider health and circuit breaker state
- Per-method latency distributions
- Request/error rates
- Regional drill-down for geo-distributed deployments
http://localhost:4000/dashboard
Cluster aggregation (optional)
For geo-distributed deployments, Lasso nodes form a cluster that aggregates observability data across all regions:
- Unified visibility into provider performance and health
- Regional drill-down to compare provider latency by geography
- No routing latency impact: clustering is purely for observability
Clustering is optional. A single node works great standalone.
Built with Elixir/OTP
Lasso runs on the BEAM (Erlang VM) for:
- Massive concurrency - 10,000+ simultaneous requests via lightweight processes
- Fault isolation - OTP supervision trees keep failures contained
- Distributed by design - Native clustering and remote messaging
- Fast in-memory state - ETS provides efficient shared state for routing decisions
Next steps
Quickstart
Get your first RPC request proxied in 5 minutes
Architecture
Understand how Lasso works under the hood