
Filter processor

The filter processor drops telemetry records or specific attributes based on OTTL (OpenTelemetry Transformation Language) boolean expressions. Use it to remove test data, debug logs, health checks, or any low-value telemetry before it leaves your network.

When to use the filter processor

Use the filter processor when you need to:

  • Drop PII or test environment data: Remove data that shouldn't leave your network
  • Remove debug-level logs from production: Filter by severity to reduce noise
  • Filter out health check requests: Drop repetitive, low-value monitoring traffic
  • Drop metrics with specific prefixes or patterns: Remove unnecessary metric streams
  • Remove low-value telemetry based on attributes: Filter by service name, environment, or custom tags

How the filter processor works

The filter processor evaluates OTTL boolean expressions against each telemetry record. When a condition evaluates to true, the record is dropped.

This is the opposite of most query languages, where WHERE status = 'ERROR' means "keep errors." In the filter processor, a condition such as status == "ERROR" means "drop errors."
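To make the inversion concrete, here is a minimal Python sketch of the drop-on-true semantics (illustrative only, not the processor's actual implementation):

```python
# Illustrative sketch: filter-processor semantics drop a record when the
# condition evaluates to true (the inverse of SQL's WHERE, which keeps
# matching rows).
def apply_filter(records, condition):
    return [r for r in records if not condition(r)]

logs = [
    {"severity_text": "ERROR", "body": "payment failed"},
    {"severity_text": "INFO", "body": "request handled"},
]

# The condition matches the ERROR record, so that record is dropped
# and only the INFO record survives.
kept = apply_filter(logs, lambda r: r["severity_text"] == "ERROR")
```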

Configuration

Add a filter processor to your pipeline:

filter/Logs:
  description: Apply drop rules and data processing for logs
  config:
    error_mode: ignore
    rules:
      - name: drop-info-logs
        description: Drop all records with severity text INFO
        conditions:
          - log.severity_text == "INFO"
        context: log

Config fields:

  • rules: Array of filtering rules evaluated in order.
    • name: Rule identifier.
    • context: The type of data to evaluate. Supported values: log, span, span_event, metric, datapoint.
    • conditions: A list of OTTL boolean expressions.

Multiple conditions: When you provide multiple expressions in the array, they are evaluated with OR logic. If any condition is true, the record is dropped.
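A minimal sketch of that OR combination (illustrative Python with hypothetical helper names, not product code):

```python
# Sketch of rule evaluation: conditions listed in a rule are OR'd,
# so a record is dropped as soon as any one of them is true.
def should_drop(record, conditions):
    return any(cond(record) for cond in conditions)

conditions = [
    lambda r: r.get("environment") == "test",
    lambda r: r.get("severity_text") == "DEBUG",
]

# Dropped: the first condition matches even though the second does not.
a = should_drop({"environment": "test", "severity_text": "ERROR"}, conditions)
# Kept: neither condition matches.
b = should_drop({"environment": "prod", "severity_text": "WARN"}, conditions)
```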

OTTL boolean operators

Comparison operators

  • == - Equal to
  • != - Not equal to
  • < - Less than
  • <= - Less than or equal to
  • > - Greater than
  • >= - Greater than or equal to

Logical operators

  • and - Both conditions must be true
  • or - Either condition must be true
  • not - Negates a condition

Pattern matching

  • IsMatch - Regex pattern matching

rules:
  - name: match-health-logs
    conditions:
      - 'IsMatch(body, ".*health.*") or IsMatch(attributes["http.url"], ".*\\/api\\/v1\\/health.*")'
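IsMatch applies an unanchored regular-expression test, roughly analogous to Python's re.search (a loose analogy, assuming Go-style regexp matching semantics):

```python
import re

# Rough analogue of OTTL's IsMatch: true when the pattern matches
# anywhere in the value. Missing (None) attributes simply fail the test.
def is_match(value, pattern):
    return value is not None and re.search(pattern, value) is not None

hit = is_match("GET /api/v1/health HTTP/1.1", r".*/api/v1/health.*")
miss = is_match("GET /api/v1/orders HTTP/1.1", r".*/api/v1/health.*")
```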

Complete examples

Example 1: Drop test environment data

Remove all telemetry from test and development environments:

config:
  rules:
    - name: drop-test-environment
      description: Drop logs from test environment
      conditions:
        - 'resource.attributes["environment"] == "test"'
      context: log
    - name: drop-dev-environment
      description: Drop logs from dev environment
      conditions:
        - 'resource.attributes["environment"] == "dev"'
      context: log

Example 2: Drop debug logs in production

Keep only meaningful log levels in production:

config:
  rules:
    - name: drop-debug-logs
      description: Drop all DEBUG severity logs
      conditions:
        - 'severity_text == "DEBUG"'
      context: log
    - name: drop-low-severity-logs
      description: Drop logs below INFO severity (TRACE and DEBUG)
      conditions:
        - "severity_number < 9"
      context: log

Severity number reference:

  • TRACE = 1-4
  • DEBUG = 5-8
  • INFO = 9-12
  • WARN = 13-16
  • ERROR = 17-20
  • FATAL = 21-24
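To sanity-check a threshold like severity_number < 9, the table above can be written as a small lookup (illustrative Python; the boundaries come from the OpenTelemetry log severity model):

```python
# Map an OTLP severity_number to its level name, per the table above.
def severity_name(n):
    for name, lower_bound in [("FATAL", 21), ("ERROR", 17), ("WARN", 13),
                              ("INFO", 9), ("DEBUG", 5), ("TRACE", 1)]:
        if n >= lower_bound:
            return name
    return "UNSPECIFIED"

# "severity_number < 9" therefore drops only TRACE and DEBUG records:
# 8 is the top of the DEBUG range, while 9 is the bottom of INFO.
boundary_below = severity_name(8)  # DEBUG (dropped)
boundary_kept = severity_name(9)   # INFO  (kept)
```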

Example 3: Drop health check spans

Remove health check traffic that adds no diagnostic value:

config:
  rules:
    - name: drop-health-endpoint
      description: Drop spans from /health endpoint
      conditions:
        - 'attributes["http.path"] == "/health"'
      context: span
    - name: drop-health-check-spans
      description: Drop spans named health_check
      conditions:
        - 'name == "health_check"'
      context: span

Example 4: Drop by service name

Filter out specific services or service patterns:

config:
  rules:
    - name: drop-legacy-api
      description: Drop logs from legacy API v1 service
      conditions:
        - 'resource.attributes["service.name"] == "legacy-api-v1"'
      context: log
    - name: drop-canary-services
      description: Drop logs from canary deployment services
      conditions:
        - 'IsMatch(resource.attributes["service.name"], ".*-canary")'
      context: log

Example 5: Drop metrics with specific prefixes

Remove unnecessary metric streams:

config:
  rules:
    - name: drop-internal-metrics
      description: Drop metrics with internal prefix
      conditions:
        - 'IsMatch(name, "^internal\\.")'
      context: metric
    - name: drop-debug-datapoints
      description: Drop datapoints marked as debug type
      conditions:
        - 'attributes["metric.type"] == "debug"'
      context: datapoint

Example 6: Combined conditions with AND

Drop only when multiple conditions are true:

Because the conditions in a rule's array are combined with OR, AND logic must be written as a single expression:

config:
  rules:
    - name: drop-debug-logs-from-test
      description: Drop DEBUG logs from background-worker service in test environment
      conditions:
        - 'severity_text == "DEBUG" and resource.attributes["service.name"] == "background-worker" and resource.attributes["environment"] == "test"'
      context: log

Example 7: Keep errors, drop everything else

Invert the logic to keep only valuable data:

config:
  rules:
    - name: drop-non-error-logs
      description: Drop everything below ERROR severity level
      conditions:
        - "severity_number < 17"
      context: log

Or use NOT logic:

filter/Logs:
  description: "Drop non-errors"
  config:
    error_mode: ignore
    rules:
      - name: drop-non-error-logs
        description: Drop logs that are not ERROR or FATAL
        conditions:
          - 'not (severity_text == "ERROR" or severity_text == "FATAL")'
        context: log

Example 8: Pattern matching in log body

Drop logs containing specific patterns:

config:
  rules:
    - name: drop-health-check-logs
      description: Drop logs with health check in body
      conditions:
        - 'IsMatch(body, ".*health check.*")'
      context: log

Example 9: Drop high-volume, low-value spans

Remove spans that occur frequently but provide little value:

Since listed conditions are OR'd, express the AND logic as one expression:

config:
  rules:
    - name: drop-fast-cache-hits
      description: Drop cache hit operations faster than 1ms
      conditions:
        - 'attributes["db.operation"] == "get" and attributes["cache.hit"] == true and end_time_unix_nano - start_time_unix_nano < 1000000'
      context: span
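Span timestamps are in nanoseconds, so "faster than 1 ms" means a duration below 1,000,000 ns. The same check, sketched in Python with a hypothetical span dict (not an SDK object):

```python
NS_PER_MS = 1_000_000  # span timestamps are unix nanoseconds

# Mirror the rule above: a sub-millisecond cache "get" is droppable.
def is_fast_cache_hit(span):
    duration_ns = span["end_time_unix_nano"] - span["start_time_unix_nano"]
    attrs = span["attributes"]
    return (attrs.get("db.operation") == "get"
            and attrs.get("cache.hit") is True
            and duration_ns < NS_PER_MS)

span = {
    "start_time_unix_nano": 1_700_000_000_000_000_000,
    "end_time_unix_nano":   1_700_000_000_000_250_000,  # 0.25 ms later
    "attributes": {"db.operation": "get", "cache.hit": True},
}
```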

Example 10: Drop based on HTTP status

Filter successful requests, keep errors:

config:
  rules:
    - name: drop-successful-requests
      description: Drop HTTP requests with status code less than 400
      conditions:
        - 'attributes["http.status_code"] < 400'
      context: span

Example 11: Multiple conditions with OR

Drop if any condition matches:

config:
  rules:
    - name: drop-test-health-debug
      description: Drop logs from test environment, health checks, or debug severity
      conditions:
        - 'resource.attributes["environment"] == "test"'
        - 'IsMatch(body, ".*health.*")'
        - 'severity_text == "DEBUG"'
      context: log

Drop data vs drop attributes

The filter processor only drops entire records, as shown above; it cannot remove individual attributes from records it keeps.

To drop attributes while keeping the record, use the transform processor's delete_key() function instead.

Wrong approach (this won't work):

filter/Logs:
  config:
    rules:
      - name: wrong-attribute-drop
        description: Identify and drop logs containing specific sensitive attributes
        conditions:
          - 'delete attributes["sensitive_field"]' # Filter conditions must be boolean, not actions
        context: log

Correct approach (use transform processor instead):

transform/Logs:
  description: "Remove sensitive attribute"
  config:
    rules:
      - name: remove-sensitive-field
        description: "Redacts the 'sensitive_field' attribute from log records to ensure privacy compliance."
        statements:
          - 'delete_key(attributes, "sensitive_field")'
  output:
    - nrexporter/newrelic

Performance considerations

  • Order matters: Place filter processors early in your pipeline to drop unwanted data before expensive processing
  • Combine conditions: Use and/or logic in a single expression rather than chaining multiple filter processors
  • Regex performance: IsMatch is more expensive than exact equality checks. Use == when possible.

Example of efficient ordering:

steps:
  # ... receive steps ...
  # ... probabilistic sampler steps ...
  filter/Logs:
    description: Apply drop rules and data processing for logs
    output:
      - transform/Logs
    config:
      error_mode: ignore
      rules:
        - name: drop-info-logs
          description: Drop all records with severity text INFO
          conditions:
            - log.severity_text == "INFO"
          context: log
  filter/Metrics:
    description: Apply drop rules and data processing for metrics
    output:
      - transform/Metrics
    config:
      error_mode: ignore
      rules:
        - name: drop-internal-metrics
          description: Drop metrics with internal prefix
          conditions:
            - IsMatch(name, "^internal\\.")
          context: metric
        - name: drop-debug-datapoints
          description: Drop datapoints marked as debug type
          conditions:
            - attributes["metric.type"] == "debug"
          context: datapoint
  filter/Traces:
    description: Apply drop rules and data processing for traces
    output:
      - transform/Traces
    config:
      error_mode: ignore
      rules:
        - name: drop-health-endpoint
          description: Drop spans from /health endpoint
          conditions:
            - attributes["http.path"] == "/health"
          context: span
        - name: drop-debug-events
          description: Drop span events named debug_event
          conditions:
            - name == "debug_event"
          context: span_event
  # ... transformer steps ...

OTTL boolean expression reference

For complete OTTL syntax and additional operators, see the OpenTelemetry Transformation Language (OTTL) documentation in the opentelemetry-collector-contrib repository.

Copyright © 2026 New Relic Inc.
