
PagerDuty’s API provides access to the Analytics Insights features of your PagerDuty account, but getting the data you want out of the API endpoints might be a bit tricky. Most of the endpoints will require your requests to include a JSON document of the filters you want to apply to the data set, so they will look significantly different from other requests you might have been making to the PagerDuty REST API.

The new UI for Insights gives teams a lot of flexibility in what data they want to see reported. It also includes the ability to download the data as a CSV file. For most folks that will be easier than building a custom solution, so check your Insights page first to see if you can get what you want directly!

 

Analytics
PagerDuty’s Analytics API provides a number of endpoints for collecting and collating summary and aggregated data about incidents, services, teams, escalation policies, and individual responders. Each endpoint has different data available.


You’ll also be able to request raw data about incidents and raw incident data for a single responder.

The analytics endpoints can be helpful if you’re looking for summary data for incident response performance over time, responder statistics, and other data.

Because analytics data includes summaries and aggregates, it is updated once a day. Analytics data is NOT real time. The other REST API endpoints will have real time data.

PagerDuty has a Python library, pdpyras, that makes creating the filter JSON simple so you can query the analytics endpoints for the information you need. We’ll walk through a simple query to lay out some of the key aspects to keep in mind when working with these endpoints. This query was posted in our community forums: a user is looking for guidance on pulling analytics data for incidents assigned to a specific team over a specific time period.

Python Notes
Python Library: pdpyras
The Python library for accessing the PagerDuty APIs is pdpyras: github, docs. The pdpyras library lets us set up a session to interact with the API and deal with the data that is returned.

You can install pdpyras with pip (pip install pdpyras).

Python Notes for datetime
Since our example question specifies incidents in a particular time window, we’ll be using the datetime library to create dates in the correct format for the API.

If you’re using Python 3.12, there are some changes to the datetime library with respect to timezones. I had been using datetime.utcnow() in other scripts, which is deprecated in favor of datetime.now(timezone.utc).

This is mentioned in the release notes for 3.12, and this post by Miguel Grinberg has a helpful discussion.

The PagerDuty REST API requires the ISO 8601 format. You can find more information about DateTime and other data types in the API documentation.
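Putting the two notes above together, a minimal sketch of building a timezone-aware timestamp in a format the API will accept might look like this (the strftime pattern here matches the one used later in this post):

```python
from datetime import datetime, timezone

# Timezone-aware "now" (datetime.utcnow() is deprecated as of Python 3.12)
now = datetime.now(timezone.utc)

# Format as an ISO 8601 string the REST API accepts
stamp = now.strftime("%Y-%m-%dT%H:%M:%S%z")
print(stamp)
```

For a UTC datetime, the %z directive renders as +0000, giving a string like 2024-01-24T21:52:31+0000.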

Analytics Endpoints
PagerDuty’s Analytics data provides a number of aggregate and raw datasets built on your incident data. You can see a discussion of what this looks like in the web UI in the documentation.

Which endpoint you use will depend on what kind of data you want, but the format of the requests will be the same. When working with the analytics endpoints, make sure you’re reading through the Responses section of the docs so you know what to expect in the data. The aggregate datasets will include datapoints like mean_seconds_to_first_ack, mean_seconds_to_resolve, and total_incidents_acknowledged that will include all incidents in the requested timeframe in their values. The raw data endpoints will look a bit like the realtime endpoints for objects like incidents but with some summary data like engaged_seconds that are calculated after the incident is resolved.
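To make the aggregate shape concrete, here is an illustrative sketch. The field names come from the Responses docs mentioned above; the values are invented sample numbers, not real data:

```python
# Illustrative only: field names from the aggregate endpoints' docs,
# values are made-up samples
aggregate_row = {
    "mean_seconds_to_first_ack": 180,
    "mean_seconds_to_resolve": 5400,
    "total_incidents_acknowledged": 12,
}

# The time fields are in seconds, so convert as needed
minutes_to_resolve = aggregate_row["mean_seconds_to_resolve"] / 60
print(minutes_to_resolve)
```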

The Filters
Including a filter in your request to an Analytics endpoint will help you focus on just the data you want to see. The filters can include a time frame, team IDs, service IDs, and other selections.

Filters are built as part of a JSON document that is POSTed to the Analytics endpoint when you make your request. The rest of the JSON might include instructions for ordering the output or a limit to how many records to return.

{
  "filters": {
    "created_at_start": "2021-01-01T00:00:00-05:00",
    "created_at_end": "2021-01-31T00:00:00-05:00",
    "urgency": "high",
    "major": true,
    "team_ids": [
      "PGVXG6U",
      "PNVU4U4"
    ],
    "service_ids": [
      "PQVUB8D",
      "PU2D9X3"
    ],
    "priority_names": [
      "P1",
      "P2"
    ]
  },
  "limit": 20,
  "order": "desc",
  "order_by": "created_at",
  "time_zone": "Etc/UTC"
}

Auth

If you are using a global API key, you’ll be able to request all data from Analytics. For folks using user-level API keys or an OAuth key, the data available will be limited to what the account that generated the key has access to, and your filter will be required to include a team_ids or service_ids filter.

Responses
Good responses with a return code of 200 will have a few different components. The filter you sent will be included in the response under the key filters. The dataset will be aggregated into an array object called data. There will be a few other top-level keys, including time_zone, order, order_by, and aggregate_unit. The schema of the objects included in the data array will vary depending on which endpoint you are working with.

If there are no objects that meet the criteria laid out in your filter, the data array will be empty and the other pieces will still be returned:

{
    'data': [],
    'ending_before': None,
    'filters': {
        'created_at_end': '2024-01-24T21:52:31Z',
        'created_at_start': '2024-01-10T21:52:31Z',
        'team_ids': ['PMYTEAM'],
        'urgency': 'high'
    },
    'first': None,
    'last': None,
    'limit': 10,
    'more': False,
    'order': 'desc',
    'order_by': 'created_at',
    'starting_after': None,
    'time_zone': 'Etc/UTC'
}
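Since the data array can come back empty, it's worth guarding for that case before processing. A minimal sketch, using a hand-built dict that mirrors the response shape above:

```python
# Sketch: guard for an empty result set
# (this dict mirrors the shape of the real response)
response = {"data": [], "more": False}

if not response["data"]:
    message = "No incidents matched the filter."
else:
    message = "Got {} incidents.".format(len(response["data"]))
print(message)
```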

 

If you receive a return code of 400, some part of your request is incorrect, such as a team or service ID. PagerDuty uses return code 429 for rate limiting.
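A minimal sketch of branching on those return codes, if you want friendlier diagnostics in a script (the helper name and messages here are my own, not part of the API):

```python
# Hypothetical helper: map the return codes described above to messages
def describe_status(code: int) -> str:
    if code == 200:
        return "success"
    if code == 400:
        return "bad request (check team/service IDs and filter fields)"
    if code == 429:
        return "rate limited (back off and retry)"
    return "unexpected status"

print(describe_status(429))
```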


Putting it Together
Let’s walk through an example. I’ve posted this code in my GitHub account as well.

Set up
I’m importing some libraries to help me out:

os to read my key from the environment. You can use secure storage or other methods.
sys to read a team ID from the command line.
datetime, using datetime, timedelta, and, if using Python 3.12 or later, timezone.
pdpyras to handle the PagerDuty API session.
import os
import sys
# from datetime, we need timezone for Python 3.12+
from datetime import datetime, timedelta, timezone
from pdpyras import APISession

api_token = os.environ['PD_API_KEY']

# initialize the session
session = APISession(api_token)

# you can pass the team ID on the command line or enter it at the prompt
if len(sys.argv) < 2:
    this_team = input("Which team? ")
else:
    this_team = str(sys.argv[1])
Build request and filter
Collect the pieces for the request and filter. I’m including two filter categories, for urgency and team_ids, as well as a date range. To build the dates, I need two datetime objects, one for the start of the range and one for the end of the range. This range starts two weeks ago and ends at the current time. You could also back it out a day to take into account the 24 hour delay. The timedelta function is useful for doing date math.

I also need to make sure the datetime objects use a timestamp format that the API will accept, so I’m using strftime here.

endpoint = "analytics/raw/incidents"
urgency = "high"
# statuses = ['resolved']
include = "teams"
team_ids = [this_team]

until_time = datetime.now(timezone.utc)
since_time = until_time - timedelta(14)

created_at_start = since_time.strftime("%Y-%m-%dT%H:%M:%S%z")
created_at_end = until_time.strftime("%Y-%m-%dT%H:%M:%S%z")


Now build the request. Most requests to the Analytics endpoints will be POST instead of GET, so use one of the POST methods. In pdpyras, to build these POSTs, you need the endpoint (without the domain name) and the JSON filter.

You can build the JSON data as a separate object if that works better for you, in which case your request would include json=my_json_object.

# build the request with the JSON filter
resolved_incidents = session.rpost(
    endpoint,
    json={
        "filters": {
            "urgency": urgency,
            "created_at_start": created_at_start,
            "created_at_end": created_at_end,
            "team_ids": team_ids
        }
    }
)
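For reference, the separate-object style mentioned above might look like this sketch (the variable name and placeholder team ID are my own):

```python
# Alternative style: build the payload as a separate dict first,
# then pass it to rpost as json=my_json_object
my_json_object = {
    "filters": {
        "urgency": "high",
        "team_ids": ["PXXXXXX"],  # placeholder team ID
    }
}
# resolved_incidents = session.rpost(endpoint, json=my_json_object)
print(my_json_object["filters"]["urgency"])
```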

Parse responses
Finally, determine which pieces of the data you are interested in and how you want to use them. This is a simple print out of some of the characteristics. Remember that the dataset is in the response data as the subkey data.

print(resolved_incidents)

for incident in resolved_incidents['data']:
    print("{},{},{}".format(incident['id'], incident['incident_number'], incident['description']))
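The same field extraction can be tried against a hand-built sample before pointing it at the live API. The field names here are assumed from the raw incidents response, and the records are made-up samples:

```python
# Made-up sample records mirroring the raw incidents response shape
sample_response = {
    "data": [
        {"id": "PABC123", "incident_number": 1, "description": "Disk full"},
        {"id": "PDEF456", "incident_number": 2, "description": "API errors"},
    ]
}

# Same formatting as the loop above, collected into a list of CSV-ish rows
rows = [
    "{},{},{}".format(i["id"], i["incident_number"], i["description"])
    for i in sample_response["data"]
]
print(rows)
```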

 

Summary
The Analytics endpoints should be pretty accessible to you with a little bit of Python. Remember to check the web UI to see if you can save yourself some trouble (and future maintenance!) using the built-in features and CSV downloads. If you have questions about the PagerDuty REST APIs or other features of PagerDuty’s products, check out our community discussions.
