New Relic's Infinite Tracing Processor is an implementation of the OpenTelemetry Collector `tailsamplingprocessor`. In addition to upstream features, it supports scalable and durable distributed processing by using a distributed cache for shared state storage. This documentation explains how to configure it.
Supported caches
The processor supports any Redis-compatible cache implementation. It has been tested and validated with Redis and Valkey in both single-instance and cluster configurations.
For production deployments, we recommend using cluster mode (sharded) to ensure high availability and scalability.
To enable distributed caching, add the distributed_cache configuration to your tail_sampling processor section:
```yaml
tail_sampling:
  distributed_cache:
    connection:
      address: redis://localhost:6379/0
      password: 'local'
    trace_window_expiration: 30s # Default: how long to wait after the last span before evaluating
    processor_name: "itc" # Name of the processor
    data_compression: lz4 # Optional: compression format (none, snappy, zstd, lz4); lz4 recommended
```
Important
Configuration behavior: When distributed_cache is configured, the processor automatically uses the distributed cache for state management. If distributed_cache is omitted entirely, the collector will use in-memory processing instead.
The address parameter must specify a valid Redis-compatible server address using the standard format:
```
redis[s]://[[username][:password]@][host][:port][/db-number]
```
Alternatively, you can embed credentials directly in the address parameter:
```yaml
tail_sampling:
  distributed_cache:
    connection:
      address: redis://:yourpassword@localhost:6379/0
```
The processor is implemented in Go and uses the go-redis client library.
Configuration parameters
The distributed_cache section supports the following parameters:
Connection settings
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connection.address` | string | required | Redis connection string (format: `redis://host:port/db`). For cluster mode, use comma-separated addresses (e.g., `redis://node1:6379,redis://node2:6379`) |
| `connection.password` | string | `""` | Redis password for authentication |
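For example, a minimal sketch of a cluster-mode connection using comma-separated addresses as described above (the node hostnames and password are placeholders for your own cluster endpoints):

```yaml
tail_sampling:
  distributed_cache:
    connection:
      # Comma-separated addresses enable cluster mode; hostnames are illustrative
      address: redis://redis-node1:6379,redis://redis-node2:6379,redis://redis-node3:6379
      password: "your-redis-password"
```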
Data compression
| Parameter | Type | Default | Description |
|---|---|---|---|
| `data_compression` | string | `none` | Compression algorithm for trace data. Options: `none`, `snappy`, `zstd`, `lz4` |
Tip
Compression tradeoffs:
- `none`: No CPU overhead, highest network and Redis memory usage
- `snappy`: Fast compression/decompression, good compression ratio
- `zstd`: Best compression ratio, more CPU usage
- `lz4`: Very fast, moderate compression ratio

Compression is mainly aimed at reducing network traffic, which is the main bottleneck for the processor when connecting to Redis.
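As a minimal sketch, switching compression is a one-line change to the `distributed_cache` block shown earlier (the choice of `snappy` here is illustrative):

```yaml
tail_sampling:
  distributed_cache:
    # snappy trades a little collector CPU for noticeably less Redis network traffic
    data_compression: snappy
```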
Trace management
| Parameter | Type | Default | Description |
|---|---|---|---|
| `trace_window_expiration` | duration | `30s` | How long to wait for spans before evaluating a trace |
| `traces_ttl` | duration | `5m` | Time-to-live for trace data in Redis |
| `cache_ttl` | duration | `30m` | Time-to-live for sampling decisions |
| `processor_name` | string | `""` | Processor name used for Redis keys and metrics (useful for multi-tenant deployments) |
TTL guidelines:
- `traces_ttl` should be long enough to handle retries and late spans
- `cache_ttl` should be much longer than `traces_ttl` to handle late-arriving spans
- Longer `cache_ttl` reduces duplicate evaluations but increases Redis memory usage
Partitioning
| Parameter | Type | Default | Description |
|---|---|---|---|
| `partitions` | int | `6` | Number of partitions for load distribution across Redis |
| `partition_workers` | int | `6` | Number of concurrent evaluation workers |
Partitioning benefits:
- Distributes load across multiple Redis key ranges
- Enables parallel evaluation across multiple workers
- Improves throughput in multi-collector deployments
Tip
Partition scaling: A partition is a logical shard of trace data in Redis that enables horizontal scaling. Traces are assigned to partitions using a hashing algorithm on the trace ID.
Important: `partitions` should ideally be 3x the number of Redis nodes needed for your average workload. `partition_workers` should typically be less than or equal to the number of partitions.
Ingestion settings
| Parameter | Type | Default | Description |
|---|---|---|---|
| `ingestion_workers` | int | `6` | Number of goroutines processing traces from the shared ingestion channel |
| `ingestion_buffer_size` | int | `10000` | Capacity of the shared ingestion channel for buffering incoming traces |
| `ingestion_channel_timeout` | duration | `500ms` | Maximum time to wait when sending traces to the ingestion channel. If exceeded, traces are dropped |
| `ingestion_response_timeout` | duration | `10s` | Maximum time to wait for a worker to process and respond. Prevents indefinite blocking if workers are stuck |
| `hashing_strategy` | string | `rendezvous` | Hashing algorithm for partition selection. Options: `rendezvous` (recommended, 3x faster) or `consistent` |
Ingestion architecture:
The processor uses a shared channel with configurable workers for trace ingestion:
- Incoming traces are sent to a shared buffered channel
- Multiple workers pull from the channel and route traces to appropriate partitions
- Workers hash trace IDs using the configured hashing strategy to determine partition assignment
Configuration guidelines:
- Buffer Size: Should absorb traffic bursts.
- Workers: Number of concurrent goroutines processing traces.
- Channel Timeout: How long to wait if the buffer is full. A short timeout (500ms) fails fast on saturation
- Response Timeout: Protects against stuck workers. The 10s default is appropriate for normal Redis operations
- Hashing Strategy: Algorithm for determining trace partition assignment
  - `rendezvous` (default): Provides superior load distribution for 2-99 partitions. Best choice for typical deployments.
  - `consistent`: Maintains performance when using 100+ partitions, where rendezvous becomes slow. Trades slightly less optimal load distribution for better performance at scale.
  - Both strategies ensure the same trace always maps to the same partition (deterministic)
- Choose rendezvous for better load distribution (up to 99 partitions), consistent for performance at scale (100+)
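A hedged ingestion-tuning sketch for bursty traffic, using only the parameters from the table above (the values are illustrative, not prescriptive):

```yaml
tail_sampling:
  distributed_cache:
    ingestion_workers: 12            # more goroutines draining the shared channel
    ingestion_buffer_size: 40000     # larger buffer to absorb bursts
    ingestion_channel_timeout: 500ms # fail fast when saturated rather than block the pipeline
    ingestion_response_timeout: 10s  # guard against stuck workers
    hashing_strategy: rendezvous     # default; best distribution below ~100 partitions
```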
Evaluation settings
| Parameter | Type | Default | Description |
|---|---|---|---|
| `evaluation_interval` | duration | `1s` | How often to check for traces ready for evaluation |
| `max_traces_per_batch` | int | `1000` | Maximum number of traces to evaluate per batch |
| `rate_limiter` | bool | `false` | Enable blocking rate limiter for concurrent trace processing |
| `num_traces` | int | `50000` | When `rate_limiter` is enabled, `num_traces` sets the maximum number of traces processed concurrently |
Rate limiter:
The rate_limiter option controls backpressure behavior when the concurrent trace limit (num_traces) is reached:
- `false` (default): No rate limiting. The processor accepts traces without blocking, relying on Redis for storage. This is the recommended setting for most Redis deployments.
- `true`: Enables a blocking rate limiter that applies backpressure when `num_traces` concurrent traces are being processed. New traces will block until a slot becomes available.
When to enable:
- To prevent overwhelming Redis network, CPU, and/or memory
- To prevent overwhelming downstream consumers with sudden traffic bursts
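A minimal sketch of enabling backpressure (following the complete example later on this page, which places `num_traces` at the `tail_sampling` level; the cap value is illustrative):

```yaml
tail_sampling:
  num_traces: 50000 # with the limiter on, caps concurrently processed traces
  distributed_cache:
    rate_limiter: true # new traces block until a processing slot frees up
```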
Retry and recovery
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_retries` | int | `2` | Maximum retry attempts for failed trace evaluations |
| `in_flight_timeout` | duration | Same as `trace_window_expiration` | Timeout for in-flight batch processing before it is considered orphaned |
| `recover_interval` | duration | `5s` | How often to check for orphaned batches |
Important
Orphan recovery: Orphaned batches occur when a collector crashes mid-evaluation. The orphan recovery process re-queues these traces for evaluation by another collector instance.
Policy configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `policies` | array | required | Sampling policy definitions |
Policies follow the same rules as the open source tail sampling processor.
Redis client timeouts and connection pool
All settings are optional and have defaults aligned with the 10s ingestion_response_timeout.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connection.dial_timeout` | duration | `5s` | Timeout for establishing new connections to Redis |
| `connection.read_timeout` | duration | `3s` | Timeout for socket reads. Commands fail with a timeout error if exceeded |
| `connection.write_timeout` | duration | `3s` | Timeout for socket writes. Commands fail with a timeout error if exceeded |
| `connection.pool_timeout` | duration | `4s` | Time to wait for a connection from the pool if all connections are busy |
| `connection.pool_size` | int | 10 × cores | Base number of socket connections |
| `connection.min_idle_conns` | int | `0` | Minimum number of idle connections, useful when establishing new connections is slow. Idle connections are not closed by default |
| `connection.max_idle_conns` | int | `0` | Maximum number of connections allocated by the pool at a given time. `0` means no limit |
| `connection.conn_max_idle_time` | duration | `30m` | Maximum amount of time a connection may be idle. Should be less than the server's timeout |
| `connection.conn_max_lifetime` | duration | `0m` | Maximum amount of time a connection may be reused |
| `connection.max_retries` | int | `3` | Maximum number of command retries before giving up |
| `connection.min_retry_backoff` | duration | `8ms` | Minimum backoff between retries |
| `connection.max_retry_backoff` | duration | `512ms` | Maximum backoff between retries (exponential backoff capped at this value) |
Tuning guidelines:
- High-latency Redis (cross-region, VPN): Increase timeouts to 2-3x defaults and reduce `max_retries` to 2
- Very fast Redis (same host/rack): Can reduce timeouts further (e.g., 250ms) for faster failure detection
- High throughput: Increase `pool_size` to 30-50 to avoid connection pool exhaustion
- Unreliable network: Increase `max_retries` to 5-7 and adjust backoff settings
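For instance, a hedged sketch for a cross-region Redis, following the guidance above (the hostname and exact values are illustrative):

```yaml
tail_sampling:
  distributed_cache:
    connection:
      address: redis://remote-redis:6379/0 # placeholder hostname
      dial_timeout: 10s  # ~2x default for slower connection establishment
      read_timeout: 6s   # ~2x default to tolerate WAN latency
      write_timeout: 6s
      max_retries: 2     # fewer retries so failures surface quickly
```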
Cluster replica options
The connection.replica section controls cluster replica routing.
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connection.replica.read_only_replicas` | bool | `true` | Enable routing read commands to replica nodes. Default is true for improved scalability |
| `connection.replica.route_by_latency` | bool | `false` | Route commands to the closest node based on latency (automatically enables `read_only_replicas`) |
| `connection.replica.route_randomly` | bool | `false` | Route commands to a random node (automatically enables `read_only_replicas`) |
Tip
Replica read benefits: When running with a Redis cluster that has replica nodes, enabling replica reads distributes read load across both primary and replica nodes, significantly improving read throughput and reducing load on primary nodes.
Important considerations:
- Cluster-only: These options only work with Redis cluster deployments with replicas per shard
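A minimal sketch enabling latency-based replica routing, assuming your cluster has replicas per shard:

```yaml
tail_sampling:
  distributed_cache:
    connection:
      replica:
        read_only_replicas: true # default; reads may be served by replicas
        route_by_latency: true   # prefer the closest node; implies read_only_replicas
```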
Complete configuration example
```yaml
processors:
  tail_sampling:
    num_traces: 5_000_000
    distributed_cache:
      # Connection
      connection:
        address: "redis://redis-cluster:6379/0"
        password: "your-redis-password"

        # Connection pool settings (optional - tune for your environment)
        pool_size: 30
        read_timeout: 2s
        write_timeout: 2s
        pool_timeout: 5s
        max_retries: 5

        # Replica read options (cluster mode only)
        replica:
          read_only_replicas: true # Default: enabled for improved scalability
          route_by_latency: true # Route to closest node (recommended)

      # Compression
      data_compression: snappy

      # Trace Management
      trace_window_expiration: 30s
      traces_ttl: 2m # 120s (allow extra time for retries)
      cache_ttl: 1h # 3600s (keep decisions longer)
      processor_name: "prod-cluster-1"

      # Retry and Recovery
      max_retries: 3
      in_flight_timeout: 45s
      recover_interval: 10s

      # Evaluation
      evaluation_interval: 1s
      max_traces_per_batch: 10000
      rate_limiter: false # Recommended for Redis mode

      # Partitioning
      partitions: 8
      partition_workers: 8
      partition_buffer_max_traces: 1000

      # Ingestion
      ingestion_workers: 12 # 1.5 workers per partition
      ingestion_buffer_size: 40000 # 40k trace buffer
      ingestion_channel_timeout: 500ms
      ingestion_response_timeout: 10s
      hashing_strategy: rendezvous # default, best for less than 100 partitions

    # Sampling policies
    policies:
      - name: errors
        type: status_code
        status_code: {status_codes: [ERROR]}
      - name: slow-traces
        type: latency
        latency: {threshold_ms: 1000}
      - name: sample-10-percent
        type: probabilistic
        probabilistic: {sampling_percentage: 10}
```
Trace evaluation
This section covers the parameters that control when traces are evaluated and how long data persists in Redis.
Evaluation timing and frequency
How evaluation works:
- Every `evaluation_interval`, workers check for traces that have been idle for at least `trace_window_expiration`
- Up to `max_traces_per_batch` traces are pulled from Redis per evaluation cycle
- `partition_workers` evaluate batches concurrently across partitions
Tuning guidance:
- Faster decisions: Decrease `evaluation_interval` (e.g., 500ms) for lower latency, but this increases Redis load
- Higher throughput: Increase `max_traces_per_batch` (e.g., 5000-10000) to process more traces per cycle
- More parallelism: Increase `partition_workers` to match available CPU cores
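A hedged sketch combining these three knobs for a latency-sensitive deployment (values are illustrative, drawn from the ranges above):

```yaml
tail_sampling:
  distributed_cache:
    evaluation_interval: 500ms # check for ready traces twice per second (more Redis load)
    max_traces_per_batch: 5000 # process more traces per evaluation cycle
    partition_workers: 8       # match available CPU cores
```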
TTL and expiration
How TTL works in distributed mode
When using distributed_cache, the processor implements a multi-stage TTL system that differs from the in-memory processor:
Trace lifecycle stages:
1. Collection phase: Spans arrive and are stored in Redis
2. Evaluation phase: After `trace_window_expiration`, the trace is ready for a sampling decision
3. Retention phase: Trace data persists for `traces_ttl` to handle retries and late spans
4. Cache phase: Sampling decisions persist for `cache_ttl` to prevent duplicate evaluations

Important
Key difference from in-memory mode: The `trace_window_expiration` parameter replaces `decision_wait` and implements a sliding window approach:
- Each time new spans arrive for a trace, the evaluation timer resets
- Traces with ongoing activity stay active longer than traces that have stopped receiving spans
- This dynamic behavior better handles real-world span arrival patterns
Why cascading TTLs matter:
The TTL hierarchy ensures data availability throughout the trace lifecycle:
```
trace_window_expiration (30s)
  ↓ [trace ready for evaluation]
in_flight_timeout (30s default)
  ↓ [evaluation completes or times out]
traces_ttl (5m)
  ↓ [trace data deleted from Redis]
cache_ttl (30m)
  ↓ [decision expires, late spans re-evaluated]
```
- `trace_window_expiration` (shortest) controls when evaluation begins
- `in_flight_timeout` controls when an evaluation is taking too long and must be retried
- `traces_ttl` (medium) provides buffer for retries and orphan recovery
- `cache_ttl` (longest) handles late-arriving spans hours after evaluation
Properly configured TTLs prevent data loss, duplicate evaluations, and incomplete traces while optimizing Redis memory usage.
Tip
Configuration principle: Each TTL should be significantly longer than the one before it (typically 5-10x). This creates safety buffers that account for processing delays, retries, and late-arriving data.
1. Trace collection window: trace_window_expiration
Default: 30s | Config: distributed_cache.trace_window_expiration
- Purpose: Controls when a trace is ready for sampling evaluation
- Behavior: Sliding window that resets each time new spans arrive for a trace
- Example: If a trace receives spans at t=0s, t=15s, and t=28s, evaluation begins at t=58s (28s + 30s window)
Tuning guidance:
- Shorter values (15-20s): Faster sampling decisions, but risk of incomplete traces if spans arrive slowly
- Longer values (45-60s): More complete traces, but higher latency and memory usage
- Typical range: 20-45 seconds depending on your span arrival patterns
2. Batch processing timeout: in_flight_timeout
Default: Same as trace_window_expiration | Config: distributed_cache.in_flight_timeout
- Purpose: Maximum time a batch can be in processing before being considered orphaned
- Behavior: Prevents data loss if a collector crashes during evaluation
- Orphan recovery: Batches exceeding this timeout are automatically re-queued for evaluation by another collector
Tuning guidance:
- Should be ≥ `trace_window_expiration`: Ensures enough time for normal evaluation
- Increase if: Your evaluation policies are computationally expensive (complex OTTL, regex)
- Monitor: `otelcol_processor_tail_sampling_sampling_decision_timer_latency` to ensure evaluations complete within this window

Tip
Relationship with `trace_window_expiration`: Setting `in_flight_timeout` equal to `trace_window_expiration` works well for most deployments. Only increase it if you observe frequent orphaned batch recoveries due to slow policy evaluation.
3. Trace data retention: traces_ttl
Default: 5m | Config: distributed_cache.traces_ttl
- Purpose: How long trace span data persists in Redis after initial storage
- Behavior: Provides buffer time for retries, late spans, and orphan recovery
- Critical constraint: Must be significantly longer than `trace_window_expiration` + `in_flight_timeout`
Recommended formula:
```
traces_ttl ≥ (trace_window_expiration + in_flight_timeout + max_retries × evaluation_interval) × 2
```
Example with defaults:
```
traces_ttl ≥ (30s + 30s + 2 retries × 1s) × 2 = 124s, comfortably covered by the 5m default ✅
```
Tuning guidance:
- Memory-constrained: Use a shorter TTL (2-3m) but risk losing data for very late spans
- Late span tolerance: Use a longer TTL (10-15m) to handle delayed span arrivals
- Standard production: 5-10 minutes provides a good balance
Important
Too short = data loss: If `traces_ttl` is too short, traces may be deleted before evaluation completes, especially during retries or orphan recovery. This results in partial or missing traces.
4. Decision cache retention: cache_ttl
Default: 30m | Config: distributed_cache.cache_ttl
- Purpose: How long sampling decisions (sampled/not-sampled) are cached
- Behavior: Prevents duplicate evaluation when late spans arrive after trace has been evaluated
- Critical constraint: Must be much longer than `traces_ttl`
Recommended formula:
```
cache_ttl ≥ traces_ttl × 6
```
Why much longer?
- Late-arriving spans can arrive minutes or hours after the trace completed
- Decision cache prevents re-evaluating traces when very late spans arrive
- Without cached decision, late spans would be evaluated as incomplete traces (incorrect sampling decision)
Tuning guidance:
- Standard production: 30m-2h balances memory usage and late span handling
- High late-span rate: 2-4h ensures decisions persist for very delayed data
- Memory-constrained: 15-30m minimum, but expect more duplicate evaluations
Memory impact:
```
Each decision: ~50 bytes per trace ID
At 10,000 spans/sec with 20 spans/trace → 500 traces/sec
30-minute cache: ~900,000 decisions × 50 bytes = ~45 MB
2-hour cache:    ~3.6M decisions  × 50 bytes = ~180 MB
```
Tip
Monitor cache effectiveness: Track the `otelcol_processor_tail_sampling_early_releases_from_cache_decision` metric. High values indicate the cache is preventing duplicate evaluations effectively.
TTL configuration examples
Low-latency, memory-constrained:
```yaml
distributed_cache:
  trace_window_expiration: 20s
  in_flight_timeout: 20s
  traces_ttl: 2m
  cache_ttl: 15m
  evaluation_interval: 500ms
  max_traces_per_batch: 2000
```
High-throughput, late-span tolerant:
```yaml
distributed_cache:
  trace_window_expiration: 45s
  in_flight_timeout: 60s
  traces_ttl: 10m
  cache_ttl: 2h
  evaluation_interval: 1s
  max_traces_per_batch: 10000
```
Balanced production (recommended):
```yaml
distributed_cache:
  trace_window_expiration: 30s
  in_flight_timeout: 45s # Extra buffer for complex policies
  traces_ttl: 5m
  cache_ttl: 30m
  evaluation_interval: 1s
  max_traces_per_batch: 5000
```
Retry and recovery
Orphan recovery:
Orphaned batches occur when a collector crashes mid-evaluation. The orphan recovery process runs every recover_interval and:
- Identifies batches that have exceeded `in_flight_timeout`
- Re-queues these traces for evaluation by another collector instance
- Ensures no traces are lost due to collector failures
Tuning guidance:
- Increase `max_retries` (3-5) if experiencing transient Redis errors
- Decrease `recover_interval` (2-3s) for faster recovery in high-availability environments
- Monitor recovery metrics to identify if collectors are crashing frequently
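A minimal sketch for a high-availability-oriented recovery setup, using the parameters above (values illustrative):

```yaml
tail_sampling:
  distributed_cache:
    max_retries: 4         # tolerate transient Redis errors
    in_flight_timeout: 45s # batches past this are considered orphaned
    recover_interval: 3s   # re-queue orphaned batches quickly
```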
Partitioning and scaling
What is a partition?
A partition is a logical shard of trace data in Redis that enables parallel processing and horizontal scaling. Think of partitions as separate queues where traces are distributed based on their trace ID.
Key concepts:
- Each partition maintains its own pending traces queue in Redis
- Traces are assigned to partitions using a configurable hashing strategy (rendezvous or consistent) on the trace ID
- Each partition can be processed independently and concurrently
- Partitions enable both vertical scaling (more CPU cores) and horizontal scaling (more collector instances)
Caution
Important: Changing the number of partitions when there's a cluster already running will cause fragmented traces, since traces might be routed to another partition after the change.
How partitioning works
```
          Incoming Traces
                |
                v
┌─────────────────────────────┐
│      Hashing Strategy       │  trace_id → rendezvous or consistent hash
│   (rendezvous by default)   │
└─────────────────────────────┘
      |
      ├──────────┬──────────┬──────────┐
      v          v          v          v
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│Partition│ │Partition│ │Partition│ │Partition│
│    0    │ │    1    │ │    2    │ │    3    │
│ (Redis) │ │ (Redis) │ │ (Redis) │ │ (Redis) │
└─────────┘ └─────────┘ └─────────┘ └─────────┘
     |           |           |           |
     v           v           v           v
┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐
│ Worker  │ │ Worker  │ │ Worker  │ │ Worker  │
│    0    │ │    1    │ │    2    │ │    3    │
│(Goroutine)│(Goroutine)│(Goroutine)│(Goroutine)
└─────────┘ └─────────┘ └─────────┘ └─────────┘
     |           |           |           |
     └───────────┴───────────┴───────────┘
                 |
                 v
          Sampled Traces
```
Flow:
- Ingestion: Trace ID is hashed using the configured hashing strategy to determine partition assignment
- Storage: Trace data stored in Redis under partition-specific keys
- Evaluation: Worker assigned to that partition pulls and evaluates traces
- Concurrency: All partition workers run in parallel, processing different traces simultaneously
Hashing strategy
The processor supports two hashing algorithms for partition selection. The choice depends on the number of partitions:
| Strategy | Load Distribution | Performance | Best For |
|---|---|---|---|
| `rendezvous` (default) | Superior load balancing | Fast for up to 99 partitions | Standard deployments (2-99 partitions) - best load distribution for typical production workloads |
| `consistent` | Good distribution | Maintains performance with 100+ partitions | Very large scale (100+ partitions) - preserves performance when rendezvous becomes slow |
Important
Key characteristics: Both strategies are deterministic: the same trace always maps to the same partition. Rendezvous provides better load distribution but requires more CPU with a high number of partitions.
Choosing the right strategy:
- Rendezvous (default): Use for deployments with up to 100 partitions. Provides superior load distribution for the vast majority of production workloads.
- Consistent: Use when scaling to 100+ partitions, where rendezvous becomes CPU intensive.
Caution
Important: Changing the hashing algorithm when there's a cluster already running will cause fragmented traces, since traces might be routed to another partition after the change.
Partition configuration parameters
Use partitions to control how many logical shards you have and partition_workers to set how many workers process them:
```yaml
distributed_cache:
  partitions: 8        # Number of logical shards in Redis
  partition_workers: 8 # Number of workers processing partitions
```
Worker behavior:
- 8 partitions + 8 workers: Each worker processes one partition every `evaluation_interval` ✅ Balanced
- 8 partitions + 16 workers: Each partition evaluated twice per interval (redundant, wastes resources)
- 8 partitions + 4 workers: Only half the partitions evaluated per interval (slower, but less Redis load)
Tip
Tuning tip: Setting fewer workers per instance (partition_workers < partitions) reduces stress on Redis and the collector, useful when running many collector instances.
Partition sizing guidelines
| Scenario | Partitions | Partition Workers | Reasoning |
|---|---|---|---|
| Development | 2-4 | 2-4 | Minimal overhead, easy debugging |
| Standard Production (15k spans/sec) | 4-12 | 4-12 | Balanced |
| High Volume (more than 100k spans/sec) | 12-48 | 12-48 | Maximize throughput |
Important
Important sizing rules:
- `partitions` should be at least 2x-3x the number of Redis nodes needed for your average workload
- `partition_workers` should typically be ≤ `partitions`
- Changing partition count loses existing data - traces cannot be located after partition count changes
Partition configuration examples
Single collector (4-core machine):
```yaml
distributed_cache:
  partitions: 4
  partition_workers: 4
  partition_buffer_max_traces: 5000
```
Multi-collector (3 instances, 8-core each):
```yaml
distributed_cache:
  partitions: 12       # 3x more than single collector
  partition_workers: 6 # Each collector processes 6 partitions
  partition_buffer_max_traces: 10000
```
High-volume (10+ collectors):
```yaml
distributed_cache:
  partitions: 24
  partition_workers: 4 # Fewer per collector to share load
  partition_buffer_max_traces: 20000
```
Sizing and performance
Caution
Critical bottlenecks: Redis performance for tail sampling is primarily constrained by Network and CPU, not memory. Focus your sizing and optimization efforts on:
- Network throughput and latency between collectors and Redis
- CPU capacity: Redis CPU consumption
- Memory capacity: Typically sufficient if CPU and network are properly sized
Example: assume the following parameters:
- Spans per second: 10,000 spans/sec throughput
- Average span size: 900 bytes
1. Network requirements
Bandwidth calculations:
For 10,000 spans/sec at 900 bytes per span:
- Ingestion traffic (collectors → Redis): `10,000 × 900 bytes = 9 MB/sec = ~72 Mbps`
- Evaluation traffic (Redis → collectors): `~9 MB/sec = ~72 Mbps` (reading traces for evaluation)
- Total bidirectional: `~18 MB/sec = ~144 Mbps`

With 25% compression (snappy/lz4):
- Compressed traffic: `~108 Mbps` bidirectional
Network guidelines:
- Monitor Redis network usage: A typical Redis instance can handle up to ~1 GB/s; make sure to monitor network usage
- Use compression: It reduces network traffic in exchange for CPU usage on the collectors
- Co-located (same datacenter/VPC): 1 Gbps network interfaces are sufficient for most workloads
- Cross-region: Expect 10-50ms latency; increase timeouts and use compression to reduce bandwidth
- Connection pooling: Increase `pool_size` for higher throughput
- Use replicas: If the cluster has read replicas, they will be used by default, reducing network and CPU usage on primary nodes
2. CPU requirements
CPU guidelines:
- Single Redis instance: Minimum 4 vCPUs
- Redis cluster: 3+ nodes with read replicas, 4 vCPUs each, for high throughput
- Use replicas: If the cluster has read replicas, they will be used by default, reducing network and CPU usage on primary nodes
Tip
Monitoring CPU: Watch for CPU saturation (more than 80% utilization) as the first indicator of scaling needs. If CPU-bound, add cluster nodes or move to larger instances.
3. Memory requirements
While memory is less constrained than CPU and network, proper sizing prevents evictions and ensures data availability.
Memory estimation formula
```
Total Memory = (Trace Data) + (Decision Caches) + (Overhead)
```
Trace data storage
Trace data is stored in Redis for the full traces_ttl period to support late-arriving spans and trace recovery:
- Per-span storage: `~900 bytes` (marshaled protobuf)
- Storage duration: Controlled by `traces_ttl` (default: 5m; the example below assumes 1 hour)
- Active collection window: Controlled by `trace_window_expiration` (default: 30s)

Formula:
```
Memory ≈ spans_per_second × traces_ttl × 900 bytes
```
Important
Active window vs. full retention: Traces are collected during a `~30-second` active window (`trace_window_expiration`), but persist in Redis for the full `traces_ttl` period (1 hour in the example below). This allows the processor to handle late-arriving spans and recover orphaned traces. Your Redis sizing must account for the full retention period, not just the active window.
Example calculation: At 10,000 spans/second with a 1-hour `traces_ttl`:
```
10,000 spans/sec × 3600 sec × 900 bytes = 32.4 GB
```
With lz4 compression (we have observed a 25% reduction):
```
32.4 GB × 0.75 = 24.3 GB
```
Note: This calculation represents the primary memory consumer. Actual Redis memory may be slightly higher due to decision caches and internal data structures.
Decision cache storage
When using distributed_cache, the decision caches are stored in Redis without explicit size limits. Instead, Redis uses its native LRU eviction policy (configured via maxmemory-policy) to manage memory. Each trace ID requires approximately 50 bytes of storage:
- Sampled cache: Managed by Redis LRU eviction
- Non-sampled cache: Managed by Redis LRU eviction
- Typical overhead per trace ID: `~50 bytes`

Tip
Memory management: Configure Redis with `maxmemory-policy allkeys-lru` to allow automatic eviction of old decision cache entries when memory limits are reached. The decision cache keys use TTL-based expiration (controlled by `cache_ttl`) rather than fixed size limits.
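For reference, a hedged redis.conf sketch matching that recommendation (`maxmemory` and `maxmemory-policy` are standard Redis directives; the 24gb cap is illustrative, sized from the example calculation above):

```
# redis.conf (illustrative values)
maxmemory 24gb                 # cap sized from the trace-data estimate above
maxmemory-policy allkeys-lru   # evict least-recently-used keys when the cap is hit
```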
Batch processing overhead
- Current batch queue: Minimal (trace IDs + scores in sorted set)
- In-flight batches: `max_traces_per_batch × average_spans_per_trace × 900 bytes`
Example calculation: 500 traces per batch with 20 spans per trace on average:
```
500 × 20 × 900 bytes = 9 MB per batch
```
Batch size impacts memory usage during evaluation. In-flight batch memory is temporary and released after processing completes.
Default configuration architecture
The default configuration values are designed for a reference deployment supporting 1 million spans per minute (~16,000 spans/sec):
Collector deployment:
- 3 collector instances
- 4 vCPUs per instance
- 8 GB RAM per instance
Redis cluster:
- 3 Redis instances (AWS cache.r6g.xlarge: 4 vCPUs, 25.01 GiB memory each)
- Configured as a cluster for high availability and load distribution
- Co-located with collectors for low-latency access
This reference architecture provides a starting point for production deployments. Adjust based on your actual throughput and latency requirements.
Metrics reference
The tail sampling processor emits the following metrics in Redis-distributed mode to help you monitor performance and diagnose issues.
Available metrics
| Metric Name | Dimensions | Description | Use Case |
|---|---|---|---|
| `otelcol_processor_tail_sampling_batches` | partition, processor | Number of batch operations | Monitor batch processing rate across partitions |
| `otelcol_processor_tail_sampling_sampling_decision_timer_latency` | partition, processor | Sampling decision timer latency (ms) | Track overall evaluation performance per partition |
| `otelcol_processor_tail_sampling_sampling_policy_evaluation_error` | partition, processor | Policy evaluation error count | Detect policy configuration issues |
| `otelcol_processor_tail_sampling_count_traces_sampled` | policy, decision, partition, processor | Count of traces sampled/not sampled per policy | Track per-policy sampling decisions |
| `otelcol_processor_tail_sampling_count_spans_sampled` | policy, decision, partition, processor | Count of spans sampled/not sampled per policy | Span-level sampling statistics |
| `otelcol_processor_tail_sampling_global_count_traces_sampled` | decision, partition, processor | Global count of traces sampled by at least one policy | Overall sampling rate monitoring |
| `otelcol_processor_tail_sampling_early_releases_from_cache_decision` | sampled | Spans immediately released due to cache hit | Decision cache effectiveness |
| `otelcol_processor_tail_sampling_new_trace_id_received` | partition, processor | Count of new traces received | Trace ingestion rate per partition |
| `otelcol_processor_tail_sampling_new_span_received` | partition, processor | Count of new spans received | Span ingestion rate per partition |
| `otelcol_processor_tail_sampling_traces_dropped` | partition, processor | Traces dropped due to saving errors | Error detection and troubleshooting |
| `otelcol_processor_tail_sampling_spans_dropped` | partition, processor | Spans dropped due to saving errors | Error detection and troubleshooting |
| `otelcol_processor_tail_sampling_count_traces_deleted` | deleted, partition, processor | Count of traces deleted from storage | Cleanup monitoring |
Dimension details
- `policy`: Name of the sampling policy that made the decision
- `sampled`: Whether the decision was to sample (true/false)
- `decision`: The sampling decision type (`sampled`, `not_sampled`, `dropped`)
- `deleted`: Whether deletion was successful (true/false)
- `partition`: Partition identifier (hex-encoded hash, e.g., `{a1b2c3d4...}`) - ensures Redis Cluster hash tag compatibility
- `processor`: Processor instance identifier (from `distributed_cache.processor_name` config)
Tip
Partition identifiers: Partition values are deterministic SHA256 hashes of the partition index combined with the processor name. Check collector logs at startup to see the mapping of partition indices to hash values.
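As a hedged sketch, a Prometheus-style alerting rule built on the dropped-traces metric documented above (the rule group name, threshold, and duration are assumptions to adapt; verify the exact exported metric name, since some exporters append a `_total` suffix to counters):

```yaml
groups:
  - name: tail-sampling-alerts # illustrative rule group
    rules:
      - alert: TailSamplingTracesDropped
        # metric name as documented above; confirm the suffix your exporter emits
        expr: rate(otelcol_processor_tail_sampling_traces_dropped[5m]) > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Tail sampling is dropping traces (check Redis saving errors)"
```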
Redis-compatible cache requirements
The processor uses the cache as distributed storage for the following trace data:
- Trace and span attributes
- Active trace data
- Sampling decision cache
The processor executes Lua scripts to interact with the Redis cache atomically. Lua script support is typically enabled by default in Redis-compatible caches. No additional configuration is required unless you have explicitly disabled this feature.
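To confirm your cache accepts Lua scripts, a quick smoke test with redis-cli (`EVAL` is a standard Redis command; the URI is a placeholder for your own endpoint):

```
$ redis-cli -u redis://localhost:6379/0 EVAL "return 1" 0
(integer) 1
```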