Camel K Monitoring

The Camel K monitoring architecture relies on Prometheus and the eponymous operator.

The Prometheus Operator serves to make running Prometheus on top of Kubernetes as easy as possible, while preserving Kubernetes-native configuration options.

Prerequisites

To take full advantage of the Camel K monitoring capabilities, it is recommended to have a Prometheus Operator instance that can be configured to monitor Camel K integrations.

Kubernetes

You can deploy the Prometheus Operator by running:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/v0.38.0/bundle.yaml
Beware that this installs the operator in the default namespace. To deploy the resources into another namespace, you must download the file locally and replace the namespace fields.
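For example, a possible one-liner, assuming the bundle manifests reference namespace: default and that a target monitoring namespace already exists (the sed substitution is only an illustration, not an official installation method):

$ curl -sL https://raw.githubusercontent.com/coreos/prometheus-operator/v0.38.0/bundle.yaml \
    | sed -e 's/namespace: default/namespace: monitoring/g' \
    | kubectl apply -f -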

Then, you can create a Prometheus resource, which the operator uses as configuration to deploy a managed Prometheus instance:

$ cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceMonitorSelector:
    matchExpressions:
      - key: camel.apache.org/integration
        operator: Exists
EOF

By default, the Prometheus instance discovers applications to be monitored in the same namespace. You can use the serviceMonitorNamespaceSelector field from the Prometheus resource to enable cross-namespace monitoring. You may also need to specify a ServiceAccount with the serviceAccountName field, bound to a Role with the necessary permissions.
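For illustration, the following sketch combines these fields; the prometheus ServiceAccount and the monitoring: enabled namespace label are assumptions you would adapt to your cluster:

$ cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  # ServiceAccount bound to a Role/ClusterRole allowing Prometheus to list services, endpoints, and pods
  serviceAccountName: prometheus
  # Discover ServiceMonitor resources in every namespace labelled monitoring=enabled
  serviceMonitorNamespaceSelector:
    matchLabels:
      monitoring: enabled
  serviceMonitorSelector:
    matchExpressions:
      - key: camel.apache.org/integration
        operator: Exists
EOF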

OpenShift

Starting with OpenShift 4.3, the Prometheus Operator, which is already deployed as part of the monitoring stack, can be used to monitor application services. This needs to be enabled by following these instructions:

  1. Check whether the cluster-monitoring-config ConfigMap object exists in the openshift-monitoring project:

    $ oc -n openshift-monitoring get configmap cluster-monitoring-config
  2. If it does not exist, create it:

    $ oc -n openshift-monitoring create configmap cluster-monitoring-config
  3. Start editing the cluster-monitoring-config ConfigMap:

    $ oc -n openshift-monitoring edit configmap cluster-monitoring-config
  4. Set the techPreviewUserWorkload enabled field to true under data/config.yaml:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        techPreviewUserWorkload:
          enabled: true

On OpenShift versions prior to 4.3, or if you do not want to change your cluster monitoring stack configuration, you can refer to the Kubernetes section in order to deploy a separate Prometheus Operator instance.

Instrumentation

The Prometheus trait automates the configuration of integration pods to expose a metrics endpoint that can be discovered and scraped by a Prometheus server.

The Prometheus trait can be enabled when running an integration, e.g.:

$ kamel run -t prometheus.enabled=true ...

Alternatively, the Prometheus trait can be enabled globally once, by updating the integration platform, e.g.:

$ kubectl patch ip camel-k --type=merge -p '{"spec":{"traits":{"prometheus":{"configuration":{"enabled":"true"}}}}}'
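The patch above results in an IntegrationPlatform resource equivalent to the following sketch, where camel-k is the name of the default platform created by the operator:

apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  traits:
    prometheus:
      configuration:
        enabled: "true"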

The underlying instrumentation mechanism depends on the configured integration runtime. As a result, the set of registered metrics, as well as the naming convention they follow, also depends on it.

Main

When the default, a.k.a. main, runtime is configured for the integration, the JMX exporter is responsible for collecting and exposing metrics from JMX MBeans.

A custom configuration for the JMX exporter can be used by setting the prometheus.configmap parameter of the Prometheus trait to the name of a ConfigMap containing a prometheus-jmx-exporter.yaml key, e.g.:

$ kamel run -t prometheus.enabled=true -t prometheus.configmap=<jmx_exporter_config> ...

Otherwise, the Prometheus trait uses a default configuration.
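As a hypothetical end-to-end example, assuming the file, ConfigMap, and integration names below, you could provide a custom JMX exporter configuration like this:

$ cat > prometheus-jmx-exporter.yaml <<EOF
# Standard JMX exporter options: lowercase metric names and expose every MBean attribute
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  - pattern: ".*"
EOF
$ kubectl create configmap my-jmx-exporter-config --from-file=prometheus-jmx-exporter.yaml
$ kamel run -t prometheus.enabled=true -t prometheus.configmap=my-jmx-exporter-config Integration.java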

Quarkus

When the Quarkus runtime is configured for the integration, the Camel Quarkus MicroProfile Metrics extension is responsible for collecting and exposing metrics in the OpenMetrics text format.

The MicroProfile Metrics extension registers and exposes a set of Camel metrics out of the box.

It is possible to extend this set of metrics by using the Camel MicroProfile Metrics component, the MicroProfile Metrics annotations, or both.
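For instance, a minimal route sketch in the Camel K YAML DSL that increments a custom counter through the Camel MicroProfile Metrics component (the file, route, and metric names are made up for illustration):

# metrics-example.yaml
- from:
    uri: "timer:tick?period=10000"
    steps:
      # Increment a custom application counter on every timer event
      - to: "microprofile-metrics:counter:example_tick_count"

It can then be run with the Prometheus trait enabled, e.g. kamel run metrics-example.yaml -t prometheus.enabled=true.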

Discovery

The Prometheus trait automatically configures the resources necessary for the Prometheus Operator to reconcile, so that the managed Prometheus instance can scrape the integration metrics endpoint.

By default, the Prometheus trait creates a ServiceMonitor resource, with the camel.apache.org/integration label, which must match the serviceMonitorSelector field from the Prometheus resource. Additional labels can be specified with the service-monitor-labels parameter from the Prometheus trait, e.g.:

$ kamel run -t prometheus.service-monitor-labels="label_to_be_match_by=prometheus_selector" ...
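For reference, the ServiceMonitor generated for an integration named example could look roughly like the sketch below; the exact resource produced by the trait, in particular the endpoint port name, may differ:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example
  labels:
    camel.apache.org/integration: example
    label_to_be_match_by: prometheus_selector
spec:
  # Select the Service exposing the integration metrics endpoint
  selector:
    matchLabels:
      camel.apache.org/integration: example
  endpoints:
    - port: prometheus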

The creation of the ServiceMonitor resource can be disabled using the service-monitor parameter, e.g.:

$ kamel run -t prometheus.service-monitor=false ...

More information can be found in the Prometheus trait documentation.

The Prometheus Operator getting started guide documents the discovery mechanism, as well as the relationship between the operator resources.

In case your integration metrics are not discovered, you may want to refer to Troubleshooting ServiceMonitor changes.

Alerting

The Prometheus Operator declares the Alertmanager resource, which can be used to configure Alertmanager instances, along with Prometheus instances.
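If no Alertmanager instance exists yet, a minimal one can be created as a sketch (single replica, no receiver configuration); the managed Prometheus then needs to reference the resulting alertmanager-operated Service through its spec.alerting.alertmanagers field:

$ cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: Alertmanager
metadata:
  name: alertmanager
spec:
  replicas: 1
EOF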

Assuming an Alertmanager resource already exists in your cluster, you can register a PrometheusRule resource that Prometheus uses to trigger alerts, e.g.:

$ cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  labels:
    prometheus: example
    role: alert-rules
  name: prometheus-rules
spec:
  groups:
  - name: camel-k.rules
    rules:
    - alert: CamelKAlert
      expr: application_camel_context_exchanges_failed_total > 0
EOF

More information can be found in the Prometheus Operator Alerting user guide. You can also find more details in Creating alerting rules from the OpenShift documentation.

Autoscaling

Integration metrics can be exported for horizontal pod autoscaling (HPA), using the custom metrics Prometheus adapter. If you have an OpenShift cluster, you can follow Exposing custom application metrics for autoscaling to set it up.

Assuming you have the Prometheus adapter up and running, you can create a HorizontalPodAutoscaler resource, e.g.:

$ cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: camel-k-autoscaler
spec:
  scaleTargetRef:
    apiVersion: camel.apache.org/v1
    kind: Integration
    name: example
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: application_camel_context_exchanges_inflight_count
      target:
        type: AverageValue
        averageValue: 1k
EOF

More information can be found in Horizontal Pod Autoscaler from the Kubernetes documentation.