Dynamic notifications don't route properly

Hi,

I have set up dynamic notifications via the Events API v2 for a service, and I noticed that an alert labeled severity=critical was routed with Low urgency instead of High, which is the routing I expected.

Alert labels (from Incidents):
Labels:
- alertname = TestAlert
- alertgroup = TestGroup
- environment = prod
- instance = watcher:13408
- job = test_job
- monitor = default-monitor
- severity = critical

I am using Alertmanager with the following config:

    pagerduty_configs:
      - routing_key: routing-key
        send_resolved: true
        severity: '{{ template "template.pagerduty.severity" }}'

and this template definition:

    {{ define "template.pagerduty.severity" -}}
    {{- if .Labels.severity }}
    {{ .Labels.severity }}
    {{- else -}}
    warning
    {{- end }}
    {{- end }}
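
As a side note for readers: the severity option is itself a template rendered against Alertmanager's notification data, which exposes .Alerts and .CommonLabels rather than a top-level .Labels, and {{ template "name" }} without an argument passes no data into the named template at all. A variant that forwards the context and reads the group's common labels would look like the sketch below (untested; shown only for comparison):

    pagerduty_configs:
      - routing_key: routing-key
        send_resolved: true
        # pass the notification data (".") into the named template
        severity: '{{ template "template.pagerduty.severity" . }}'

    {{ define "template.pagerduty.severity" -}}
    {{- if .CommonLabels.severity -}}
    {{ .CommonLabels.severity }}
    {{- else -}}
    warning
    {{- end -}}
    {{- end }}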

I have also checked that the urgency mapping for critical is set to High.

Can you check the raw alert payload in PagerDuty and verify that the Events API v2 'severity' field is actually populated with one of [critical, error, warning, info]?
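
For reference, a minimal Events API v2 event with that field populated looks roughly like this when POSTed to https://events.pagerduty.com/v2/enqueue (the summary and source values are placeholders borrowed from the alert above):

    {
      "routing_key": "<routing-key>",
      "event_action": "trigger",
      "payload": {
        "summary": "TestAlert firing on watcher:13408",
        "source": "watcher:13408",
        "severity": "critical"
      }
    }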

That does seem to be the issue: I can see the severity label inside .details.firing as part of a plain string, but it is not passed as .severity.

{
  "client": "Alertmanager",
  "client_url": "https://<alertmanager-url>/#/alerts?receiver=pagerduty",
  "description": "
[FIRING:2 hosts] 
TestAlert

(TestGroup prod watcher:13408 critical)

",
  "event_type": "trigger",
  "incident_key": "91defe20d2a0e8a2396c20928c1884d4d63167c630aa7c9",
  "service_key": "<service-key>",
  "details": {
    "firing": "Labels:
- alertname = TestAlert
- alertgroup = TestGroup
- environment = prod
- instance = watcher:13408
- job = test_job
- monitor = default-monitor
- severity = critical

Annotations:
 - summary = Test metric please ignore
Source: <source-url>
",
    "num_firing": "2",
    "num_resolved": "0",
    "resolved": ""
  }
}

Following the instructions at https://community.pagerduty.com/forum/t/pagerduty-and-prometheus-alert-manager-custom-details/2453/3?u=alexandros.orfanos, I managed to expose severity as a custom detail in the Events API v2 payload:

{
  "client": "Alertmanager",
  "client_url": "https://<alertmanager-url>/#/alerts?receiver=pagerduty",
  "contexts": [],
  "description": "
[FIRING:1 hosts] 
TestAlertWarn0 test-service

(production foo.example.com warning)

",
  "event_type": "trigger",
  "incident_key": "d320fac20734312d7decca90f0bbf4092104fa131d3fa9",
  "service_key": "<service-key>",
  "details": {
    "firing": "Labels:
 - alertname = TestAlertWarn0
 - environment = production
 - instance = foo.example.com
 - service = test-service
 - severity = warning
Annotations:
Source: https://<source-url>",
    "num_firing": "1",
    "num_resolved": "0",
    "resolved": "",
    "severity": "warning"
  }
}

However, this alert was also routed with High urgency.
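
For anyone following along, the change that produces a payload like the one above is roughly a details override in pagerduty_configs, sketched below. The firing/resolved/num_firing/num_resolved entries are copied from Alertmanager's default details map on the assumption that overriding details replaces the defaults; verify against your Alertmanager version.

    pagerduty_configs:
      - routing_key: <routing-key>
        details:
          severity: '{{ .CommonLabels.severity }}'
          firing: '{{ template "pagerduty.default.instances" .Alerts.Firing }}'
          resolved: '{{ template "pagerduty.default.instances" .Alerts.Resolved }}'
          num_firing: '{{ .Alerts.Firing | len }}'
          num_resolved: '{{ .Alerts.Resolved | len }}'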

Update - I made it work; the correct config is below:

alertmanager.yaml:

    receivers:
      - name: pagerduty
        pagerduty_configs:
          - routing_key: <routing-key>
            severity: '{{ range .Alerts }}{{ .Labels.severity }}{{ end }}'

Correct, the configurations recommended by the Alertmanager community are less than ideal. Glad you figured out how to populate the severity field.

You can do the same for the other PD-CEF event fields, using whichever fields/labels make sense; they will then show up as additional columns in your Alerts view in PagerDuty:

    class: "{{ range .Alerts }}{{ .Labels.alertname }}{{ end }}"
    group: "{{ range .Alerts }}{{ .Labels.namespace }}{{ end }}"
    component: "{{ range .Alerts }}{{ .Labels.pod }}{{ end }}"
    severity: "{{ range .Alerts }}{{ .Labels.severity }}{{ end }}"
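
One caveat worth adding (my own note, not from the posts above): {{ range .Alerts }}...{{ end }} concatenates the value from every alert in the group, so if a group ever mixes severities the rendered string (e.g. criticalwarning) is no longer one of the four values the Events API v2 accepts. A variant keyed on the labels shared by the whole group, with a fallback, would be:

    severity: '{{ if .CommonLabels.severity }}{{ .CommonLabels.severity }}{{ else }}warning{{ end }}'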
