Fetching New Relic Logs Using GraphQL API & Java (Step-by-Step Guide)
When dealing with large volumes of log data in New Relic, extracting insights efficiently is crucial. This guide walks you through a step-by-step approach to fetching logs with the GraphQL (NerdGraph) API, paginating in 30-minute intervals, and saving the results to a CSV file using Java.
🚀 Step 1: Set Up New Relic GraphQL API Access
To fetch logs, you’ll need:
- New Relic Account ID (Find this in your New Relic account settings)
- API Key (Generate it from New Relic under API Credentials)
- GraphQL Endpoint:
https://api.newrelic.com/graphql
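For context, every request in this guide is an HTTP POST to that endpoint whose body is a GraphQL (NerdGraph) query wrapping an NRQL query. A minimal sketch of the query shape, with a placeholder account ID and NRQL query:
{
  actor {
    account(id: 1234567) {
      nrql(query: "SELECT message FROM Log SINCE 1 hour ago LIMIT 100") {
        results
      }
    }
  }
}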
📝 Step 2: Define Configurable Parameters
To make the script flexible, use configurable parameters for:
- NRQL Query Parameters (entity name, message filter, etc.)
- Time Range (SINCE/UNTIL)
- Batch Size
- Output File Path
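The example in Step 3 hard-codes these as constants for clarity. If you want to change them without recompiling, one option is to read them from environment variables. Here is a minimal sketch (the variable names NR_API_KEY and NR_ENTITY_NAME are illustrative assumptions, not New Relic conventions):
// Drop-in helper for the NewRelicLogFetcher class below
static String env(String key, String fallback) {
    String value = System.getenv(key);       // read from the environment
    return value != null ? value : fallback; // fall back to the hard-coded default
}

// Usage:
// String apiKey = env("NR_API_KEY", "YOUR_NEW_RELIC_API_KEY");
// String entityName = env("NR_ENTITY_NAME", "name of the entity");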
📌 Step 3: Java Implementation
Here’s the complete Java code to fetch logs, paginate in 30-minute intervals, and save results to a CSV file.
import okhttp3.*;
import org.json.JSONArray;
import org.json.JSONObject;

import java.io.FileWriter;
import java.io.IOException;

public class NewRelicLogFetcher {

    // Configuration variables
    private static final String API_URL = "https://api.newrelic.com/graphql";
    private static final String API_KEY = "YOUR_NEW_RELIC_API_KEY"; // Replace with your API Key
    private static final int ACCOUNT_ID = YOUR_ACCOUNT_ID; // Replace with your Account ID
    private static final int BATCH_SIZE = 1000; // Maximum number of records to fetch per query
    private static final String CSV_FILE_PATH = "newrelic_logs.csv";
    private static final String ENTITY_NAME = "name of the entity";
    private static final String MESSAGE_FILTER = "Failed to save the data";

    public static void main(String[] args) {
        long startTime = 1740312699548L; // Replace with actual epoch-millisecond timestamp (SINCE)
        long endTime = 1740327729164L;   // Replace with actual epoch-millisecond timestamp (UNTIL)
        fetchAndSaveLogsToCSV(startTime, endTime);
    }

    public static void fetchAndSaveLogsToCSV(long startTime, long finalEndTime) {
        long interval = 30 * 60 * 1000; // 30-minute interval in milliseconds

        try (FileWriter csvWriter = new FileWriter(CSV_FILE_PATH)) {
            csvWriter.append("UUID\n"); // CSV header

            // Walk the overall time range in 30-minute windows
            while (startTime < finalEndTime) {
                long endTime = Math.min(startTime + interval, finalEndTime);
                System.out.println("Fetching logs from " + startTime + " to " + endTime);

                String query = generateGraphQLQuery(startTime, endTime);
                String response = executeGraphQLQuery(query);
                if (response == null) break;

                // NerdGraph nests NRQL results under data.actor.account.nrql.results
                JSONArray results = new JSONObject(response)
                        .getJSONObject("data")
                        .getJSONObject("actor")
                        .getJSONObject("account")
                        .getJSONObject("nrql")
                        .getJSONArray("results");

                for (int i = 0; i < results.length(); i++) {
                    JSONObject logEntry = results.getJSONObject(i);
                    // The result key is derived from the NRQL expression;
                    // adjust it if you change the SELECT clause.
                    if (logEntry.has("substring.message")) {
                        csvWriter.append(logEntry.getString("substring.message")).append("\n");
                    }
                }
                System.out.println("Fetched and saved " + results.length() + " logs.");

                // Advance to the next window
                startTime = endTime;
            }
            System.out.println("✅ Data saved successfully in: " + CSV_FILE_PATH);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    // Wraps the NRQL query in a NerdGraph request body; substring(message, 46)
    // pulls the UUID portion out of each matching log message.
    private static String generateGraphQLQuery(long since, long until) {
        return "{ \"query\": \"{ actor { account(id: " + ACCOUNT_ID + ") { nrql(query: \\\"SELECT substring(message, 46) FROM Log"
                + " WHERE entity.name = '" + ENTITY_NAME + "'"
                + " AND message LIKE '%" + MESSAGE_FILTER + "%'"
                + " SINCE " + since + " UNTIL " + until
                + " LIMIT " + BATCH_SIZE + "\\\") { results } } } }\" }";
    }

    private static String executeGraphQLQuery(String query) {
        OkHttpClient client = new OkHttpClient();
        RequestBody body = RequestBody.create(query, MediaType.parse("application/json"));
        Request request = new Request.Builder()
                .url(API_URL)
                .addHeader("Content-Type", "application/json")
                .addHeader("API-Key", API_KEY)
                .post(body)
                .build();

        try (Response response = client.newCall(request).execute()) {
            if (!response.isSuccessful()) {
                System.out.println("Error fetching logs: " + response);
                return null;
            }
            return response.body().string();
        } catch (IOException e) {
            e.printStackTrace();
            return null;
        }
    }
}
Add the dependencies needed to run the code above to your build.gradle:
dependencies {
    implementation 'com.squareup.okhttp3:okhttp:4.9.3'
    implementation 'org.json:json:20210307'
}
📂 Step 4: Run the Script
Compile and run the class. Note that plain javac/java will only work if the dependency JARs (OkHttp and its transitive dependencies, plus org.json) are on the classpath; assuming they sit in a libs/ directory, on Linux/macOS that looks like:
javac -cp "libs/*" NewRelicLogFetcher.java
java -cp "libs/*:." NewRelicLogFetcher
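Alternatively, Gradle can manage the classpath for you via the application plugin; a sketch for build.gradle (the plugin setup is an assumption about your project layout):
plugins {
    id 'java'
    id 'application'
}

application {
    mainClass = 'NewRelicLogFetcher'
}

// Then run with: ./gradlew run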
The logs will be saved in newrelic_logs.csv.
🔍 Key Features & Benefits
✅ 30-Minute Interval Pagination — Keeps each query's result set within NRQL limits, so queries don't fail on large time ranges (as long as no single 30-minute window exceeds the configured LIMIT).
✅ No OFFSET Usage — Avoids pagination issues after 5,000 records.
✅ UUID Extraction — Extracts only the required log values (UUIDs).
✅ Configurable NRQL Query — Modify parameters easily without code changes.
✅ Efficient & Scalable — Fetches large log datasets in manageable chunks.
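For example, the sample time range in the code (1740312699548 to 1740327729164, roughly 4 hours 10 minutes) splits into nine windows: eight full 30-minute windows plus one shorter final window, each fetched with its own query.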
🎯 Final Thoughts
This method is highly effective for fetching large-scale New Relic logs, especially when dealing with GraphQL pagination issues. You can customize it further to suit your needs!
💡 Next Steps
- Enhance Error Handling — Handle API failures and rate limits (a retry sketch follows this list).
- Multi-Threading — Fetch data in parallel for improved performance.
- Push to Cloud Storage — Save logs to AWS S3 or Google Cloud Storage.
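As a starting point for the error-handling item above, here is a minimal retry-with-backoff sketch around the existing executeGraphQLQuery method (the attempt count and delays are illustrative assumptions, not New Relic recommendations):
// Wraps executeGraphQLQuery with simple exponential backoff
private static String executeWithRetry(String query) {
    int maxAttempts = 3;
    long delayMs = 1000; // initial backoff of 1 second
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        String response = executeGraphQLQuery(query);
        if (response != null) return response;
        System.out.println("Attempt " + attempt + " failed; retrying in " + delayMs + " ms");
        try {
            Thread.sleep(delayMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
        delayMs *= 2; // double the wait after each failure
    }
    return null; // all attempts failed
}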
Let me know your thoughts and improvements in the comments! 🚀