
NerdGraph tutorial: Stream your data to an AWS Kinesis Firehose, Azure Event Hub, or GCP Pub/Sub

With the streaming export feature available through Data Plus, you can send your data to an AWS Kinesis Firehose, Azure Event Hub, or GCP Pub/Sub by creating custom NRQL rules that specify which data should be exported. This guide explains how to create, update, and view streaming export rules using NerdGraph; you can make these calls from the NerdGraph explorer. You can also compress the data before exporting with the Export Compression feature.
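
For example, a create-rule call can be scripted against the NerdGraph API. The following is a minimal sketch in Python using the requests library; the account IDs, key, and AWS values are placeholders, and the exact mutation and parameter names should be confirmed in the NerdGraph explorer before you rely on them.

```python
# Minimal sketch: create a streaming export rule through NerdGraph.
# Placeholders throughout; confirm the mutation shape in the NerdGraph explorer.
import requests

NERDGRAPH_URL = "https://api.newrelic.com/graphql"  # EU accounts use api.eu.newrelic.com
USER_API_KEY = "YOUR_USER_KEY"

mutation = """
mutation {
  streamingExportCreateRule(
    accountId: YOUR_NEW_RELIC_ACCOUNT_ID
    ruleParameters: {
      name: "node-status-export"
      description: "Export all NodeStatus events"
      nrql: "SELECT * FROM NodeStatus"
      payloadCompression: DISABLED
    }
    awsParameters: {
      awsAccountId: "YOUR_AWS_ACCOUNT_ID"
      deliveryStreamName: "YOUR_FIREHOSE_STREAM_NAME"
      region: "YOUR_AWS_REGION"
      role: "YOUR_IAM_ROLE_NAME"
    }
  ) {
    id
    status
  }
}
"""

response = requests.post(
    NERDGRAPH_URL,
    headers={"API-Key": USER_API_KEY, "Content-Type": "application/json"},
    json={"query": mutation},
)
print(response.json())
```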

Here are some examples of how you can use the streaming export feature:

  • Populate a data lake
  • Enhance AI/ML training
  • Ensure long-term retention for compliance, legal, or security reasons

You can enable or disable streaming export rules at any time. However, be aware that streaming export only processes currently ingested data. If you disable and later re-enable the feature, any data ingested while it was off will not be exported. To export past data, you should use the Historical data export feature.

Requirements and limits

Limits on streamed data: The amount of data you can stream per month is limited by your total ingested data per month. If your streaming data amount exceeds your ingested data amount, we may suspend your access to and use of streaming export.

Permissions-related requirements:

You must have an AWS Kinesis Firehose, Azure Event Hub, or GCP Pub/Sub set up to receive New Relic data. If you haven't already done this, you can follow our steps below for AWS, Azure, or GCP.

NRQL requirements:

  • Queries must be flat, with no aggregation. For example, SELECT * or SELECT column1, column2 forms are supported.
  • Any condition is allowed in the WHERE clause, except subqueries.
  • The query cannot include a FACET clause, COMPARE WITH, or LOOKUP.
  • Nested queries are not supported.
  • Only data types stored in NRDB are supported; metric timeslice data is not.

Prerequisites

Set up an AWS Kinesis Firehose

To set up streaming data export to AWS, you must first set up Amazon Kinesis Firehose. Follow these steps:

Create a Firehose for streaming export

Create a dedicated Firehose to stream your New Relic data to:

  1. Go to Amazon Kinesis Data Firehose.
  2. Create a delivery stream.
  3. Name the stream. You'll use this name later when registering the rule.
  4. Use Direct PUT or other sources and specify a destination compatible with New Relic's JSON event format (for example, S3, Redshift, or OpenSearch).
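
If you prefer to script the stream instead of using the console, here is a boto3 sketch under the same assumptions: a Direct PUT source with an S3 destination. The stream name, bucket, and delivery role ARN are placeholders, and the role referenced here is the one Firehose uses to write to S3, not the role you create for New Relic below.

```python
# Sketch: Direct PUT Firehose delivery stream landing data in S3 (placeholder names/ARNs).
import boto3

firehose = boto3.client("firehose", region_name="YOUR_AWS_REGION")

firehose.create_delivery_stream(
    DeliveryStreamName="newrelic-streaming-export",  # you'll reference this name when registering the rule
    DeliveryStreamType="DirectPut",
    ExtendedS3DestinationConfiguration={
        # Role that Firehose assumes to write into your bucket (not the New Relic export role).
        "RoleARN": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/firehose-s3-delivery-role",
        "BucketARN": "arn:aws:s3:::your-export-bucket",
    },
)
```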

Create IAM Firehose write access policy

  1. Go to the IAM console and sign in with your user.
  2. In the left navigation, click Policies, and then click Create policy.
  3. Select the Firehose service, and then select PutRecord and PutRecordBatch.
  4. For Resources, select the delivery stream, click Add ARN, and select the region of your stream.
  5. Enter your AWS account number, and then enter your desired delivery stream name in the name box.
  6. Create the policy.
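
The same policy can also be created with boto3; a sketch, with placeholder region, account number, and stream name matching steps 4 and 5:

```python
# Sketch: IAM policy granting write access to your Firehose delivery stream.
import json

import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["firehose:PutRecord", "firehose:PutRecordBatch"],
            # Placeholder ARN: fill in your region, AWS account number, and stream name.
            "Resource": "arn:aws:firehose:YOUR_AWS_REGION:YOUR_AWS_ACCOUNT_ID:deliverystream/YOUR_STREAM_NAME",
        }
    ],
}

response = iam.create_policy(
    PolicyName="newrelic-firehose-write",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```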

Create IAM role for granting New Relic write access

To set up the IAM role:

  1. Navigate to the IAM console and click Roles.
  2. Create a role for an AWS account, and then select Another AWS account.
  3. Enter the New Relic export account ID: 8886xx727xx.
  4. Select Require external ID and enter the account ID of the New Relic account you want to export from.
  5. Click Permissions, and then select the policy you created above.
  6. Add a role name, which will be used during export registration, and provide a description.
  7. Create the role.
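
Here is the equivalent trust relationship as a boto3 sketch. The export account ID and external ID come from steps 3 and 4, and the role and policy names are placeholders.

```python
# Sketch: role that New Relic's export account can assume, scoped by an external ID.
import json

import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # New Relic export account ID from step 3 (placeholder).
            "Principal": {"AWS": "arn:aws:iam::NEW_RELIC_EXPORT_ACCOUNT_ID:root"},
            "Action": "sts:AssumeRole",
            # External ID: the New Relic account you want to export from (step 4).
            "Condition": {"StringEquals": {"sts:ExternalId": "YOUR_NEW_RELIC_ACCOUNT_ID"}},
        }
    ],
}

iam.create_role(
    RoleName="newrelic-streaming-export",  # you'll use this role name during export registration
    AssumeRolePolicyDocument=json.dumps(trust_policy),
    Description="Allows New Relic streaming export to write to Kinesis Firehose",
)

# Attach the Firehose write policy created in the previous section.
iam.attach_role_policy(
    RoleName="newrelic-streaming-export",
    PolicyArn="arn:aws:iam::YOUR_AWS_ACCOUNT_ID:policy/newrelic-firehose-write",
)
```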

When you're done with these steps, you can set up your export rules using NerdGraph.

Set up an Azure Event Hub

To set up streaming data export to Azure, you must first set up an Event Hub. Follow these steps:

Alternatively, you can follow the Azure guide here.

Create an Event Hubs namespace

  1. From your Microsoft Azure account, navigate to Event Hubs.
  2. Follow the steps to create an Event Hubs namespace. We recommend enabling auto-inflate to ensure you receive all of your data.
  3. Ensure public access is enabled, as we will use a Shared Access Policy to securely authenticate with your Event Hub.
  4. Once your Event Hubs namespace is deployed, click Go to resource.
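
If you'd rather script the namespace, here is a sketch with the azure-mgmt-eventhub SDK. The subscription, resource group, namespace name, location, and throughput values are placeholders, and the model fields shown are assumptions worth checking against the SDK version you install.

```python
# Sketch: create an Event Hubs namespace with auto-inflate enabled (placeholder names).
from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient
from azure.mgmt.eventhub.models import EHNamespace, Sku

client = EventHubManagementClient(DefaultAzureCredential(), "YOUR_SUBSCRIPTION_ID")

namespace = client.namespaces.begin_create_or_update(
    "your-resource-group",
    "newrelic-export-namespace",
    EHNamespace(
        location="eastus",
        sku=Sku(name="Standard", tier="Standard"),
        is_auto_inflate_enabled=True,  # recommended so you receive all of your data
        maximum_throughput_units=10,
    ),
).result()
print(namespace.id)
```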

Create an Event Hub

  1. In the left column, click Event Hubs.

  2. To create an Event Hub, click +Event Hub.

  3. Enter the desired Event Hub Name. Save this, as you need it later to create the streaming export rule.

  4. For Retention, set the Cleanup policy to Delete and choose the desired Retention time (hrs).

    Important

    Streaming export is currently not supported for Event Hubs with Compact retention policy.

  5. Once the Event Hub is created, click its name to open it.
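
The equivalent SDK call is sketched below, continuing from the management client in the previous sketch; the hub name, partition count, and retention are placeholders, and time-based retention with a Delete cleanup policy is assumed.

```python
# Sketch: create the event hub itself (continues from the client created above).
from azure.mgmt.eventhub.models import Eventhub

event_hub = client.event_hubs.create_or_update(
    "your-resource-group",
    "newrelic-export-namespace",
    "newrelic-export-hub",  # save this name; you need it for the streaming export rule
    Eventhub(message_retention_in_days=1, partition_count=4),
)
print(event_hub.id)
```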

Create and attach a shared access policy

  1. In the left column, go to Shared access policies.
  2. Click +Add near the top of the page.
  3. Choose a name for your shared access policy.
  4. Check Send, and click Create.
  5. Click the created policy, and copy the Connection string–primary key. Save this, as you need it later to authenticate and send data to your Event Hub.
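
Scripted, the send-only policy and connection string retrieval look roughly like the sketch below, again continuing from the same management client; the operation and model names are assumptions to verify against your SDK version.

```python
# Sketch: add a Send-only shared access policy and fetch its connection string.
from azure.mgmt.eventhub.models import AuthorizationRule

client.event_hubs.create_or_update_authorization_rule(
    "your-resource-group",
    "newrelic-export-namespace",
    "newrelic-export-hub",
    "newrelic-send-policy",  # hypothetical policy name
    AuthorizationRule(rights=["Send"]),
)

keys = client.event_hubs.list_keys(
    "your-resource-group",
    "newrelic-export-namespace",
    "newrelic-export-hub",
    "newrelic-send-policy",
)
# Save this value: it's the connection string you use to authenticate and send data to your Event Hub.
print(keys.primary_connection_string)
```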

When you're done with these steps, you can set up your export rules using NerdGraph.

Set up a GCP Pub/Sub

To set up streaming data export to GCP, you must first set up a Pub/Sub. Follow these steps:

Create a Pub/Sub topic

  1. From your GCP Console, navigate to the Pub/Sub page.
  2. Click Create topic.
  3. Enter a topic ID and click Create.
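
The console steps above map to a short google-cloud-pubsub call; the project and topic IDs are placeholders.

```python
# Sketch: create the Pub/Sub topic that will receive exported data (placeholder IDs).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("your-gcp-project-id", "newrelic-streaming-export")
topic = publisher.create_topic(request={"name": topic_path})
print(topic.name)
```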

Set up permissions on Pub/Sub

  1. In the right column of the created topic, click More actions.
  2. Select View permissions.
  3. Click Add Principal, and in the new principals box, enter the service account email provided by us:
    • US Region: us-prod-uds-streaming-export@h0c17c65df9291b526b433650e6a0a.iam.gserviceaccount.com
    • EU Region: eu-prod-uds-streaming-export@h0c17c65df9291b526b433650e6a0a.iam.gserviceaccount.com
  4. Under the Assign roles section, search for Pub/Sub Publisher, select it, and click Save.
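
You can grant the same role programmatically. This sketch reuses the publisher client and topic path from the previous snippet and grants the Pub/Sub Publisher role to the US-region service account; swap in the EU address if your account reports to the EU data center.

```python
# Sketch: grant New Relic's export service account the Pub/Sub Publisher role on the topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role="roles/pubsub.publisher",
    members=[
        "serviceAccount:us-prod-uds-streaming-export@h0c17c65df9291b526b433650e6a0a.iam.gserviceaccount.com"
    ],
)
publisher.set_iam_policy(request={"resource": topic_path, "policy": policy})
```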

When you're done with these steps, you can set up your export rules using NerdGraph.

Understand export compression

You can choose to compress data before exporting it. This feature is off by default. Compressing can help prevent exceeding your data limit and lower outbound data costs.

You can enable compression using the payloadCompression field under ruleParameters. This field can be any of the following values:

  • DISABLED: Payloads will not be compressed before being exported. If unspecified, payloadCompression will default to this value.
  • GZIP: Compress payloads with the GZIP format before exporting.

GZIP is the only compression format currently available, though we may choose to make more formats available in the future.
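
For example, compression can be enabled by setting the field in the rule's ruleParameters when you create or update a rule. The sketch below mirrors the create-rule sketch earlier in this guide; the streamingExportUpdateRule mutation name and argument shape are assumptions, so confirm them in the NerdGraph explorer.

```python
# Sketch: turn on GZIP compression for an existing export rule (mutation shape assumed).
import requests

update_mutation = """
mutation {
  streamingExportUpdateRule(
    id: "YOUR_RULE_ID"
    ruleParameters: {
      name: "node-status-export"
      description: "Export all NodeStatus events"
      nrql: "SELECT * FROM NodeStatus"
      payloadCompression: GZIP
    }
  ) {
    id
    status
  }
}
"""

response = requests.post(
    "https://api.newrelic.com/graphql",  # EU accounts use api.eu.newrelic.com
    headers={"API-Key": "YOUR_USER_KEY", "Content-Type": "application/json"},
    json={"query": update_mutation},
)
print(response.json())
```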

When compression is enabled on an existing AWS export rule, the next message from Kinesis Firehose may contain both compressed and uncompressed data. This is due to buffering within Kinesis Firehose. To avoid this, you can temporarily disable the export rule before enabling compression, or create a new Kinesis Firehose stream for the compressed data alone to flow through.

If you do encounter this issue and you're exporting to S3 or another file storage system, you can view the compressed part of the data by following these steps:

  1. Manually download the object.
  2. Split the object into two files by copying the compressed data into a new file.
  3. Decompress the new, compressed-only data file.

Once you have the compressed data, you can re-upload it to S3 (or whatever other service you're using) and delete the old file.

Please be aware that in S3 or another file storage system, objects may consist of multiple GZIP-encoded payloads that are appended consecutively. Therefore, your decompression library should have the capability to handle such concatenated GZIP payloads.
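
In practice, that means reading past the end of the first GZIP member instead of stopping there. A Python sketch, with a placeholder file name:

```python
# Sketch: decompress an exported object made of several concatenated GZIP members.
import gzip
import io


def decompress_concatenated_gzip(data: bytes) -> bytes:
    # gzip.GzipFile reads every member in the stream, not just the first,
    # which is what concatenated export payloads require.
    with gzip.GzipFile(fileobj=io.BytesIO(data)) as reader:
        return reader.read()


with open("exported-object.gz", "rb") as f:  # placeholder file name
    print(decompress_concatenated_gzip(f.read()).decode("utf-8"))
```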

Automatic decompression in AWS

Once your data has arrived in AWS, you may want options to automatically decompress it. If you're streaming that data to an S3 bucket, there are two ways to enable automatic decompression:

Automatic decompression in Azure

If you're exporting data to Azure, it's possible to view decompressed versions of the objects stored in your event hub using a Stream Analytics Job. To do so, follow these steps:

  1. Follow this guide up to step 16.

    • On step 13, you may choose to use the same event hub as the output without breaking anything, but this is not recommended if you intend to proceed to step 17 and start the job, as this approach has not been tested.
  2. In the left pane of your streaming analytics job, click Inputs, then click the input you set up.

  3. Scroll down to the bottom of the pane that appears on the right, and configure the input with these settings:

    • Event serialization format: JSON
    • Encoding: UTF-8
    • Event compression type: GZip
  4. Click Save at the bottom of the pane.

  5. Click Query on the side of the screen. Using the Input preview tab, you should now be able to query the event hub from this screen.

Automatic decompression in GCP

In GCP Cloud Storage, objects will automatically decompress when downloaded if the metadata is set to Content-Encoding: gzip. For more details, check the GCP documentation.
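
If you copy compressed export payloads into Cloud Storage yourself, you can set that metadata with the google-cloud-storage client; the bucket and object names below are placeholders.

```python
# Sketch: mark an object as gzip-encoded so Cloud Storage decompresses it on download.
from google.cloud import storage

client = storage.Client()
blob = client.bucket("your-export-bucket").blob("path/to/exported-object")  # placeholders
blob.content_encoding = "gzip"
blob.patch()  # push the metadata change to Cloud Storage
```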
