Documentation Index
Fetch the complete documentation index at: https://docs.lasso.sh/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Lasso tracks RPC performance metrics passively by recording every request’s latency and result. These metrics power intelligent routing decisions, provider leaderboards, and cluster-wide dashboards.
BenchmarkStore
BenchmarkStore is a GenServer that maintains per-chain ETS tables tracking RPC call performance. Storage model:
- RPC table (bag): Raw call data with dual timestamps (monotonic + system)
- Score table (set): Aggregated metrics per `{provider_id, method, :rpc}` key
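The bag/set split above can be mimicked with plain dictionaries. This is a hedged sketch, not Lasso's actual record layout: the field names and `record_call` helper are illustrative, but it shows why raw calls live in a bag (many rows per key) while aggregates live in a set (one row per key, overwritten in place):

```python
from collections import defaultdict

# Bag semantics: many rows per key; every raw RPC call is appended.
rpc_table = defaultdict(list)
# Set semantics: exactly one row per key; aggregates are upserted.
score_table = {}

def record_call(provider_id, method, latency_ms, result):
    # Bag insert: keep another observation for this provider/method.
    rpc_table[(provider_id, method)].append(
        {"latency_ms": latency_ms, "result": result}
    )
    # Set upsert: recompute the single aggregate row for this key.
    calls = rpc_table[(provider_id, method)]
    score_table[(provider_id, method, "rpc")] = {
        "calls": len(calls),
        "avg_latency_ms": sum(c["latency_ms"] for c in calls) / len(calls),
    }

record_call("alchemy", "eth_blockNumber", 120, "success")
record_call("alchemy", "eth_blockNumber", 80, "success")
```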
Recording RPC Calls
Every RPC request records its performance with one of these results:
- `:success` - Request completed successfully
- `:error` - RPC error response
- `:timeout` - Request timeout
- `:network_error` - Connection failure
- `:rate_limit` - Rate limit error
Dual Timestamp Design
All metrics use both monotonic and system timestamps:
- Monotonic time: Accurate intervals immune to clock adjustments
- System time: Wall-clock correlation for display and exports
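The dual-clock pattern is language-agnostic. A minimal Python sketch (the `timed_call` helper and entry fields are illustrative, not Lasso's API) shows the division of labor: the monotonic clock measures the interval, the system clock records when it happened:

```python
import time

def timed_call(fn):
    """Record a call with both clocks: monotonic for the latency
    interval, system time for wall-clock correlation."""
    started_mono = time.monotonic()
    started_wall = time.time()
    result = fn()
    # Monotonic difference is immune to NTP adjustments and DST shifts.
    latency_ms = (time.monotonic() - started_mono) * 1000
    return {"result": result, "latency_ms": latency_ms, "recorded_at": started_wall}

entry = timed_call(lambda: "ok")
```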
ETS Table Structure
RPC metrics table (`:rpc_metrics_{profile}_{chain}`):
Score table (`:provider_scores_{profile}_{chain}`):
Score Calculation
Provider scores combine success rate, latency, and call volume:
- success_rate: 0.0 to 1.0 (e.g., 0.99 = 99% success)
- latency_factor: 1000 / (1000 + latency), favors lower latency
- confidence_factor: log10(calls), reduces variance from low-volume providers
| Provider | Success | Latency | Calls | Score |
|---|---|---|---|---|
| llamarpc | 99% | 100ms | 1000 | 2.97 |
| alchemy | 99% | 150ms | 1000 | 2.91 |
| infura | 95% | 120ms | 100 | 1.93 |
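The three factors can be sketched directly from their definitions. This is a Python illustration (Lasso itself is Elixir), and the product combination is an assumption: the docs define each factor but not exactly how they are combined, so the absolute scores here differ from the table even though the resulting ranking matches it:

```python
import math

def provider_score(success_rate, latency_ms, calls):
    """Composite score sketch: factors per the definitions above;
    combining them by multiplication is an assumption."""
    latency_factor = 1000 / (1000 + latency_ms)   # favors lower latency
    confidence_factor = math.log10(calls)          # damps low-volume providers
    return success_rate * latency_factor * confidence_factor

providers = [
    ("llamarpc", 0.99, 100, 1000),
    ("alchemy", 0.99, 150, 1000),
    ("infura", 0.95, 120, 100),
]
# Sort descending by score to produce a leaderboard-style ranking.
leaderboard = sorted(providers, key=lambda p: provider_score(*p[1:]), reverse=True)
```

Note how infura's high call count penalty comes from confidence_factor: log10(100) = 2 versus log10(1000) = 3 for the others, which outweighs its latency edge over alchemy.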
Provider Leaderboard
Get provider rankings sorted by performance:
Percentile Calculation
Latency percentiles are computed from `recent_latencies` (last 100 samples):
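A nearest-rank percentile over a bounded sample window can be sketched as follows (an illustration only; Lasso's exact rank/interpolation choice is not shown in this page):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: p in [0, 100] over a sorted copy."""
    if not samples:
        return None
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

recent_latencies = list(range(1, 101))   # stand-in for the last 100 samples, 1..100 ms
p50 = percentile(recent_latencies, 50)   # -> 50
p99 = percentile(recent_latencies, 99)   # -> 99
```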
Method-Specific Performance
Get performance metrics for a specific RPC method:
Bulk Method Performance
Get all method performance data for a chain:
Cluster Aggregation
In clustered deployments, MetricsStore aggregates metrics from all nodes.
Weighted Averages
Metrics are weighted by call volume to prevent skew:
Per-Node Breakdown
Cluster metrics include per-node latency comparison:
Minimum Call Threshold
Providers need ≥10 calls to be included in aggregated metrics:
- Prevents skew from nodes that just started
- Ensures statistical significance
- Cold-start indicator when threshold not met
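The aggregation scheme above (volume-weighted averages gated by a minimum-call threshold) can be sketched as a small helper. The `aggregate_latency` name and entry fields are illustrative, and for simplicity the threshold is applied to the combined total rather than per node:

```python
def aggregate_latency(node_metrics, min_calls=10):
    """Volume-weighted average latency across nodes; returns None
    below the minimum-call threshold (cold start)."""
    total_calls = sum(m["calls"] for m in node_metrics)
    if total_calls < min_calls:
        return None  # not enough data for a statistically meaningful aggregate
    weighted = sum(m["avg_latency_ms"] * m["calls"] for m in node_metrics)
    return weighted / total_calls

nodes = [
    {"node": "node_a", "avg_latency_ms": 100, "calls": 900},
    {"node": "node_b", "avg_latency_ms": 200, "calls": 100},
]
# 900 calls at 100ms dominate 100 calls at 200ms: 110ms, not the naive mean of 150ms.
```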
Telemetry Metrics
Lasso emits telemetry events for all operational metrics.
Metric Definitions
Defined in `Lasso.Telemetry.metrics/0` for LiveDashboard:
Event Emission
Events are emitted at key points in the request lifecycle:
Custom Telemetry Handlers
Attach custom handlers to route metrics to external systems:
Data Retention
Automatic Cleanup
BenchmarkStore cleans up old data periodically:
- Periodic: Every hour (for all chains)
- Size-based: When table exceeds `@max_entries_per_chain`
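The two-pronged retention policy can be sketched as follows. This is a hedged model, not BenchmarkStore's implementation: the constants stand in for `@max_entries_per_chain` and the documented 24-hour default retention, and the entry shape is illustrative:

```python
import time

MAX_ENTRIES_PER_CHAIN = 10_000   # illustrative stand-in for @max_entries_per_chain
RETENTION_SECONDS = 24 * 3600    # 24-hour retention (the documented default)

def cleanup(entries, now=None, max_entries=MAX_ENTRIES_PER_CHAIN):
    """Drop entries older than the retention window, then enforce the
    size cap by keeping only the newest entries."""
    now = now if now is not None else time.time()
    fresh = [e for e in entries if now - e["recorded_at"] <= RETENTION_SECONDS]
    fresh.sort(key=lambda e: e["recorded_at"], reverse=True)  # newest first
    return fresh[:max_entries]
```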
Manual Cleanup
Trigger cleanup manually:
Persistence (Optional)
Hourly snapshots can be saved for long-term analysis:
Performance Characteristics
Memory Usage
Query Performance
| Operation | Complexity | Latency |
|---|---|---|
| Record call | O(1) | <1ms |
| Get leaderboard | O(providers) | <5ms |
| Get method perf | O(1) | <1ms |
| Get all methods | O(providers × methods) | <10ms |
| Cleanup | O(entries) | <100ms |
Summary
Lasso’s metrics system provides:
- Passive benchmarking via BenchmarkStore (no active probing)
- Method-specific metrics with percentiles (p50, p90, p95, p99)
- Provider leaderboards sorted by composite score
- Cluster aggregation with weighted averages and per-node breakdown
- Telemetry integration for custom monitoring
- Automatic cleanup with configurable retention (24 hours default)
- Low overhead (<1ms per request, <1KB per entry)