
Update New Relic OTLP attribute processing

We’re adjusting our OTLP ingest validation logic to have more lenient attribute processing.

The New Relic OTLP endpoint performs a variety of validation on attributes. There are attribute limits constraining things like the length of keys and values, as well as additional constraints on value types for edge cases that the protobuf message definitions can express but that are rarely encountered in practice, such as heterogeneous arrays (an array mixing value types, like strings and integers), nested arrays (an array of arrays), and more.

Currently, the New Relic OTLP endpoint is strict with validation. In some cases, we prune the problematic attribute silently, but for most validation issues, we drop the entire record when a single attribute is invalid.

This strict validation is one of the most common pain points customers encounter when sending OTLP data to New Relic. Luckily, there’s an easy fix:

We’re adopting a lenient attribute processing posture. When an attribute is invalid, we’ll prune or modify that attribute and retain the record. In severe cases, we may still drop records when there is no intuitive way to retain it. Whenever we prune or modify an attribute, or drop a record, we’ll create an NrIntegrationError to help track down and fix the problem at the source.

We’ll be rolling out this change on June 2, 2025.

How will this affect me?

We believe this will be a welcome relief in almost all cases. Our current strict validation frequently yields missing data, which can be painful to track down and diagnose, especially in environments with a large number of deployments managed by many individual teams. With this change, the New Relic OTLP endpoint will better embody our philosophy of “store what you send”.

However, given New Relic’s usage-based pricing model, this change means that records that were previously dropped will now be stored and contribute to data usage for your account.

If you’re following our OTLP endpoint documentation on attribute limits, and your data’s attributes conform to the OpenTelemetry standard attribute definition, there is nothing to do. Conforming to these constraints means that no data is currently being dropped, and therefore no additional data will be stored.

To check if any data in your account is currently being dropped due to attribute validation, run the following NRQL query:

FROM NrIntegrationError SELECT * WHERE message like 'One or more OTLP data point(s) was dropped%'

If this query returns results, you may see a change in usage and therefore billing following this change. The exact amount depends on the frequency with which records violate the limits. Specifically, the following cases currently result in dropped records and will be adjusted as described below:

  • Attribute name exceeds the 255 character length limit. The attribute name will be truncated and retained.
  • Attribute string value exceeds the 4095 character length limit. The attribute value will be truncated and retained.
  • Attribute byte array value exceeds the 128k byte array limit. The attribute will be pruned and the record retained.
  • Array length exceeds the 64 entry limit. The attribute will be pruned and the record retained.

NOTE: The most common validation issue we see is attribute string values exceeding the 4095 character limit. Changing the validation from dropping records with long attribute values to truncating long attributes and retaining the record may cause a notable uptick in data usage.
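
To get a feel for how often this is happening in your account today, and therefore how much additional data you may retain after the change, you can count these errors over time. The query below is a sketch that relies only on the message text shown above; adjust the time window to suit:

FROM NrIntegrationError SELECT count(*) WHERE message LIKE 'One or more OTLP data point(s) was dropped%' TIMESERIES 1 day SINCE 1 week ago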

See Mitigation below for advice on how to offset any additional data usage you may incur.

Mitigation

Part of OpenTelemetry’s core value proposition is in providing tools to take control of your telemetry data pipeline. As such, there are a variety of tools available to help mitigate a change in data usage. For a full discussion, see Manage OpenTelemetry data ingest volume. A few strategies are particularly relevant here:

Truncate long attributes

Leveraging the collector transform processor and the truncate_all editor, truncate all attributes to some known limit, as demonstrated here. This is what the New Relic OTLP endpoint will be doing after this change. However, you can leverage this technique to offset a change in usage by setting a lower limit than New Relic platform limits. For example, you could set a limit of 1000 if you think you’ll only need the first 1000 characters for your observability use case.

transform:
  trace_statements:
    - truncate_all(span.attributes, 4095)
    - truncate_all(resource.attributes, 4095)
  log_statements:
    - truncate_all(log.attributes, 4095)
    - truncate_all(resource.attributes, 4095)
  metric_statements:
    - truncate_all(datapoint.attributes, 4095)
    - truncate_all(resource.attributes, 4095)
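
As a rough sketch of where this processor fits in a full collector configuration, and of the lower 1000-character limit mentioned above, the example below wires it into a traces pipeline. The otlp receiver, otlphttp exporter, endpoint, and api-key header are assumptions to adapt to your own setup, and the same processor can be added to logs and metrics pipelines:

receivers:
  otlp:
    protocols:
      http:

processors:
  transform:
    trace_statements:
      - truncate_all(span.attributes, 1000)
      - truncate_all(resource.attributes, 1000)

exporters:
  otlphttp:
    # Endpoint and header shown for illustration; see New Relic's OTLP docs for your region.
    endpoint: https://otlp.nr-data.net
    headers:
      api-key: ${env:NEW_RELIC_LICENSE_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [transform]
      exporters: [otlphttp]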

Drop offending attributes

Leveraging the collector transform processor and the delete_key editor, delete attributes which are not valuable:

transform:
  trace_statements:
    - delete_key(span.attributes, "attribute.key.to.drop")
  log_statements:
    - delete_key(log.attributes, "attribute.key.to.drop")
  metric_statements:
    - delete_key(datapoint.attributes, "attribute.key.to.drop")

You may choose to drop attribute keys whose values are particularly long and therefore outsized contributors to usage, or attributes which are short but simply not typically useful (see the sketch after this list). For reference, the following list summarizes the 10 most common attributes that violate the attribute value length limit:

  • exception.stacktrace
  • other - a catch-all for custom attributes not defined in the OpenTelemetry semantic conventions
  • db.statement
  • process.command_line
  • graphql.document
  • db.operation
  • db.query.text
  • http.url
  • exception.message
  • http.target
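
For example, if you decide that process.command_line and graphql.document are not needed for your observability use case (an assumption made purely for illustration), you could drop just those keys from spans:

transform:
  trace_statements:
    - delete_key(span.attributes, "process.command_line")
    - delete_key(span.attributes, "graphql.document")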

Send fewer records with sampling

Offset the additional data by adjusting your sampling rate using any of the strategies discussed here.
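
For example, here is a minimal sketch using the collector’s probabilistic sampler processor, assuming head-based sampling is acceptable for your data; the 25 percent rate is purely illustrative:

probabilistic_sampler:
  sampling_percentage: 25

As with the transform examples above, the processor then needs to be added to the relevant pipelines in your collector’s service configuration.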
