Monitor your self-hosted Primary Site using Prometheus.
The Primary Site services expose various application metrics that Prometheus can scrape and ingest.
Option 1: Use an existing integration
Deploy Prometheus using one of the major cloud providers' managed integrations:
- GCP – Google Cloud Managed Service for Prometheus
- AWS – Amazon Managed Service for Prometheus
- Azure – Azure Monitor Managed Service for Prometheus
Option 2: Deploy to cluster directly
Alternatively, deploy Prometheus to your cluster directly.
Add Prometheus Helm repositories
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
Create a namespace for Prometheus
$ kubectl create namespace prometheus
Install Prometheus to your cluster
$ helm install prometheus prometheus-community/prometheus -n prometheus
Configure pods for scraping
In each of the siteController and inboxListener sections of your values.yaml file, add a pod annotation under the deployment subsection instructing Prometheus to scrape the pod:
siteController:
  deployment:
    podAnnotations:
      prometheus.io/scrape: "true"
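The same annotation can be repeated for each scrapable service. As a sketch, assuming the inboxListener section mirrors the siteController layout (an assumption based on the section names above):

```yaml
# Sketch: annotate multiple service sections for Prometheus scraping.
# Assumes each section exposes the same deployment/podAnnotations layout.
siteController:
  deployment:
    podAnnotations:
      prometheus.io/scrape: "true"
inboxListener:
  deployment:
    podAnnotations:
      prometheus.io/scrape: "true"
```

Note that the annotation value must be quoted; Kubernetes annotation values are strings, and a bare true would be parsed as a YAML boolean.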
Upgrade your deployment with the new configuration (you may need to adjust this command with your own namespace, Helm release name, chart reference, and file path):
$ helm upgrade primary-site <chart> -f values.yaml --namespace foxglove
Prometheus will now scrape the metrics endpoints of the deployed services.
To confirm this is working, forward the port of the Prometheus UI to view collected metrics:
$ kubectl -n prometheus port-forward service/prometheus-server 9090:80
Visit http://localhost:9090 and enter a query for one of the metrics listed below. Executing it should return metrics scraped from the deployed services.
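For example, assuming Prometheus's standard histogram conventions (each histogram is exported as a <metric>_bucket series with an le label), a query like the following estimates a 95th-percentile stream sync time; the metric name is taken from the list later on this page:

```
# p95 stream sync time over the last 5 minutes (sketch; assumes
# standard <metric>_bucket histogram series are exported)
histogram_quantile(0.95, sum(rate(stream_sync_ms_bucket[5m])) by (le))
```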
Install the Prometheus custom metrics adapter
This assumes you have installed Prometheus into the "prometheus" namespace. If you are using a different namespace, replace the second component of the URL accordingly.
$ helm install prometheus-custom-metrics-adapter prometheus-community/prometheus-adapter -n prometheus --set prometheus.url=http://prometheus-server.prometheus.svc.cluster.local
After a couple of minutes, you should see custom metrics:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
If the output contains metrics, you are ready to create an autoscaler on custom metrics.
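As a sketch of such an autoscaler (the Deployment name and thresholds are illustrative placeholders; unleased_pending_import_count is taken from the metric list below):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: import-worker-hpa        # illustrative name
  namespace: foxglove
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: inbox-listener         # placeholder: replace with your Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: unleased_pending_import_count
        target:
          type: AverageValue
          averageValue: "5"      # scale up when backlog exceeds ~5 per pod
```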
Once Prometheus is configured, the services expose a number of application metrics that may be useful to monitor.
- stream_sync_ms - Time between reception of request and establishment of connections to all immediately-required objects in storage (histogram)
- time_to_first_message_ms - Time between reception of request and first message written out (histogram)
- files_request_latency_ms - Time to request list of data files from Foxglove API (histogram)
- import_success_count - Number of imports successfully processed (counter)
- import_quarantine_count - Number of imports quarantined (counter)
- import_input_size_bytes - Size of input files in bytes (histogram)
- import_output_size_bytes - Size of output files (processed data files) in bytes (histogram)
- import_processing_time_seconds - Processing time for imports in seconds (histogram)
- output_file_count - Number of output files per import (histogram)
- input_message_count - Number of messages per import (histogram)
- unleased_pending_import_count - Number of backlogged pending imports for processing (gauge)
- oldest_unprocessed_pending_import_age_secs - Age of oldest unprocessed pending import in seconds (gauge)
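These metrics can also feed Prometheus alerting rules. A minimal sketch (thresholds, durations, and names are illustrative; the file uses the standard Prometheus rule format, but where you load it depends on your Helm values):

```yaml
groups:
  - name: primary-site-imports   # illustrative group name
    rules:
      - alert: ImportsQuarantined
        # Fires if any imports were quarantined in the last 10 minutes.
        expr: increase(import_quarantine_count[10m]) > 0
        for: 5m
        labels:
          severity: warning
      - alert: ImportBacklogStale
        # Fires if the oldest pending import has waited over an hour.
        expr: oldest_unprocessed_pending_import_age_secs > 3600
        for: 10m
        labels:
          severity: critical
```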