Kubernetes API support

Tharanga Rajapaksha
Jan 14, 2019

The Kubernetes API can be used to monitor and control pods and services. There are several tools that make this process easy.

Discovering builtin services

Typically, there are several services started on a cluster in the kube-system namespace. Get a list of these with the kubectl cluster-info command:

Ex :

$ kubectl cluster-info

Kubernetes master is running at https://104.197.5.247
elasticsearch-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
kibana-logging is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kibana-logging/proxy
kube-dns is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/kube-dns/proxy
grafana is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
heapster is running at https://104.197.5.247/api/v1/namespaces/kube-system/services/monitoring-heapster/proxy
Local machine cluster-info details

You may use the URLs given in the output of the above command to view information about the cluster.

Ex : https://localhost:6443/api/v1

This shows the proxy-verb URL for accessing each service. For example, this cluster has cluster-level logging enabled (using Elasticsearch), which can be reached at https://104.197.5.247/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/ if suitable credentials are passed, or through a kubectl proxy at, for example: http://localhost:8080/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/. (See Access Clusters Using the Kubernetes API for how to pass credentials or use kubectl proxy.)

Manually constructing apiserver proxy URLs

As mentioned above, you use the kubectl cluster-info command to retrieve a service's proxy URL. To create proxy URLs that include service endpoints, suffixes, and parameters, you simply append to the service's proxy URL:

http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/[https:]service_name[:port_name]/proxy

If you haven't specified a name for your port, you don't have to specify port_name in the URL.
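For instance, assuming a hypothetical service named my-service in the default namespace that exposes a named port called http (both names are illustrative, not taken from the cluster above), and using the master address from the cluster-info example, the constructed proxy URLs would look like this:

# plain HTTP backend
https://104.197.5.247/api/v1/namespaces/default/services/my-service:http/proxy/

# HTTPS backend: prefix the service name with "https:"
https://104.197.5.247/api/v1/namespaces/default/services/https:my-service:http/proxy/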

First, let's learn the basics of Kubernetes API support.

When accessing the Kubernetes API for the first time, use the Kubernetes command-line tool, kubectl.

To access a cluster, you need to know the location of the cluster and have credentials to access it. Typically, this is set up automatically when you work through a getting-started guide, or someone else set up the cluster and provided you with credentials and a location.

Check the location and credentials that kubectl knows about with this command:

$ kubectl config view
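The output is a redacted view of your kubeconfig. A rough sketch of what it may look like (the cluster, context, and user names here are illustrative and will differ on your machine):

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://104.197.5.247
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: my-user
  name: my-context
current-context: my-context
kind: Config
preferences: {}
users:
- name: my-user
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED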

Directly accessing the REST API

kubectl handles locating and authenticating to the API server. If you want to directly access the REST API with an HTTP client like curl or wget, or a browser, there are multiple ways you can locate and authenticate against the API server:

  1. Run kubectl in proxy mode (recommended). This method is recommended, since it uses the stored apiserver location and verifies the identity of the API server using a self-signed cert. No man-in-the-middle (MITM) attack is possible using this method.
  2. Alternatively, you can provide the location and credentials directly to the HTTP client. This works with client code that is confused by proxies. To protect against man-in-the-middle attacks, you'll need to import a root cert into your browser. (A minimal curl sketch of this approach follows the list.)
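Here is a minimal sketch of the second approach, assuming a kubeconfig with a single cluster and a default service-account token secret in the current namespace; the variable names are mine, and --insecure skips TLS verification, so use it only for testing:

# API server address from the kubeconfig
APISERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# name of the default service-account token secret (assumes the default-token-xxxxx naming pattern)
SECRET=$(kubectl get secrets | grep default-token | awk '{print $1}')
# decode the bearer token stored in that secret
TOKEN=$(kubectl get secret $SECRET -o jsonpath='{.data.token}' | base64 --decode)
# call the API server directly
curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure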

Run kubectl in proxy mode

kubectl proxy --port=8888 &

Now we can use any HTTP client to view the result:

  • curl
  • A web browser

Ex 1:

http://localhost:8888/

This will list all of the available API paths.

Possible API combinations.

Ex 2 :

Then suppose you use http://localhost:8888/healthz, which reports the health of the API server.

API calling and result on browser.

Ex 3 :

/metrics is another useful endpoint in the list.

/metrics api calling and the result.

Ex 4 :

http://localhost:8888/swaggerapi

Ex 5 :

Get all the services running in our cluster: http://localhost:8888/api/v1/services

You can append any resource type you want to see after the API version.

Ex 6 :

Instead of services, you may use pods or any of the other APIs.

// get all pod details listed (PodList)

http://localhost:8888/api/v1/pods

We can see a selfLink in the metadata of each pod; for example, in the email pod:

selfLink: "/api/v1/namespaces/default/pods/email-6f46b7cfbb-jtz6r"

Ex 7 :

// get the required pod details using the selfLink

http://localhost:8888/api/v1/namespaces/default/pods/email-6f46b7cfbb-jtz6r

Prometheus

However, ordinary users expect better visualization and alert management as well, because using the API directly is not very user friendly. Prometheus is an open-source monitoring framework.

The devopscube.com guide (linked later in this article) provides a downloadable example with clear explanations of a Prometheus implementation. The following steps are taken directly from that guide.

The latest Prometheus is available as a Docker image on its official Docker Hub account. We will use that image for the setup.

git clone https://github.com/bibinwilson/kubernetes-prometheus

Hereafter, I am using the file contents given on their website and in the downloaded GitHub repository.

Create a namespace

First, we will create a Kubernetes namespace for all our monitoring components. Execute the following command to create a new namespace called monitoring.

kubectl create namespace monitoring

You need to assign cluster reader permission to this namespace so that Prometheus can fetch metrics from the Kubernetes APIs. Technically, we do this by creating a cluster role and then binding that role to this namespace. The following "clusterRole.yaml" file shows how it is done.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring

Then execute this cluster role creation with kubectl:

kubectl create -f clusterRole.yaml
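You can confirm that the role and binding were created; for example:

kubectl get clusterrole prometheus
kubectl get clusterrolebinding prometheus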

Configuration

We should create a config map with all the Prometheus scrape configuration and alerting rules, which will be mounted to the Prometheus container in /etc/prometheus as the prometheus.yml and prometheus.rules files. prometheus.yml contains all the configuration to dynamically discover pods and services running in the Kubernetes cluster. prometheus.rules contains all the alert rules for sending alerts to the Alertmanager.

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: monitoring
data:
  prometheus.rules: |-
    groups:
    - name: devopscube demo alert
      rules:
      - alert: High Pod Memory
        expr: sum(container_memory_usage_bytes) > 1
        for: 1m
        labels:
          severity: slack
        annotations:
          summary: High Memory Usage
  prometheus.yml: |-
    global:
      scrape_interval: 5s
      evaluation_interval: 5s
    rule_files:
      - /etc/prometheus/prometheus.rules
    alerting:
      alertmanagers:
      - scheme: http
        static_configs:
        - targets:
          - "alertmanager.monitoring.svc:9093"
    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics

      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
        - role: pod
        relabel_configs:
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
          action: replace
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
          target_label: __address__
        - action: labelmap
          regex: __meta_kubernetes_pod_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_pod_name]
          action: replace
          target_label: kubernetes_pod_name

      - job_name: 'kubernetes-cadvisor'
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

      - job_name: 'kubernetes-service-endpoints'
        kubernetes_sd_configs:
        - role: endpoints
        relabel_configs:
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
          action: keep
          regex: true
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
          action: replace
          target_label: __scheme__
          regex: (https?)
        - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
          action: replace
          target_label: __metrics_path__
          regex: (.+)
        - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
          action: replace
          target_label: __address__
          regex: ([^:]+)(?::\d+)?;(\d+)
          replacement: $1:$2
        - action: labelmap
          regex: __meta_kubernetes_service_label_(.+)
        - source_labels: [__meta_kubernetes_namespace]
          action: replace
          target_label: kubernetes_namespace
        - source_labels: [__meta_kubernetes_service_name]
          action: replace
          target_label: kubernetes_name

Create the config map in Kubernetes:

kubectl create -f config-map.yaml -n monitoring
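You can verify the config map and inspect its contents with kubectl; for example:

kubectl get configmaps -n monitoring
kubectl describe configmap prometheus-server-conf -n monitoring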

Prometheus Deployment

Now it is time to create the Prometheus deployment with the above config map. In this configuration, we mount the Prometheus config map as a file inside /etc/prometheus. It uses the official Prometheus image from Docker Hub.

The "prometheus-deployment.yaml" file is given below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: prometheus-deployment
  namespace: monitoring
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      containers:
      - name: prometheus
        image: prom/prometheus:v2.2.1
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus/"
        ports:
        - containerPort: 9090
        volumeMounts:
        - name: prometheus-config-volume
          mountPath: /etc/prometheus/
        - name: prometheus-storage-volume
          mountPath: /prometheus/
      volumes:
      - name: prometheus-config-volume
        configMap:
          defaultMode: 420
          name: prometheus-server-conf
      - name: prometheus-storage-volume
        emptyDir: {}

Start the Prometheus deployment in the monitoring namespace:

kubectl create  -f prometheus-deployment.yaml --namespace=monitoring

Once done, you can check your deployment with the following command:

kubectl get deployments --namespace=monitoring

Connecting To Prometheus

You can connect to the deployed Prometheus in two ways.

  1. Using Kubectl port forwarding
  2. Exposing the Prometheus deployment as a service with NodePort or a Load Balancer.

Use the command below to find out your Prometheus pod name:

kubectl get pods --namespace=monitoring
See the Prometheus pod name in the output.

Execute the following command with your pod name to access Prometheus from localhost port 8989.

kubectl port-forward prometheus-deployment-67d56fb57f-j5ldz 8989:9090 -n monitoring

Now you can access it from a browser on port 8989.

http://localhost:8989
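Besides the dashboard, the same port-forward exposes the Prometheus HTTP API, which is handy for a quick sanity check. For example, querying the built-in up metric shows which scrape targets are currently reachable:

# query the "up" metric through the Prometheus HTTP API
curl 'http://localhost:8989/api/v1/query?query=up'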

Exposing Prometheus as a Service

To access the Prometheus dashboard over an IP or a DNS name, you need to expose it as a Kubernetes service.

1. Create a file named prometheus-service.yaml and copy the following contents. We will expose Prometheus on all Kubernetes node IPs on port 30000.

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  selector:
    app: prometheus-server
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9090
    nodePort: 30000

Note: If you are on AWS or Google Cloud, you can use the LoadBalancer type, which will create a load balancer and point it to the service (a rough LoadBalancer sketch is given after these steps).

2. Create the service using the following command.

kubectl create -f prometheus-service.yaml --namespace=monitoring

3. Once created, you can access the Prometheus dashboard using any Kubernetes node IP on port 30000. If you are on the cloud, make sure you have the right firewall rules for accessing the apps.

4. Now if you go to Status –> Targets, you will see all the Kubernetes endpoints connected to Prometheus automatically through service discovery, as shown below. So you will get all Kubernetes container and node metrics in Prometheus.
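As a rough sketch, a LoadBalancer variant of the service from the note above could look like the following. The service name prometheus-service-lb and the external port 80 are my own assumptions, not part of the original guide:

apiVersion: v1
kind: Service
metadata:
  name: prometheus-service-lb   # assumed name
  namespace: monitoring
spec:
  selector:
    app: prometheus-server
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  ports:
  - port: 80           # assumed external port
    targetPort: 9090   # Prometheus container port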

Alert Management

Alertmanager is an open-source alerting system that works with the Prometheus monitoring system. Now it is time to set up the Alertmanager and integrate it with Prometheus.

prometheus alert management

Prometheus should have the correct Alertmanager service endpoint in its prometheus.yml (inside the config map created earlier), as shown below. Only then will Prometheus be able to send alerts to the Alertmanager.

All raw file contents can be found at the URL given below. That site is the original source of this content.

https://devopscube.com/alert-manager-kubernetes-guide/

Ex :

alerting:
  alertmanagers:
  - scheme: http
    static_configs:
    - targets:
      - "alertmanager.monitoring.svc:9093"

All the alerting rules have to be present in the Prometheus config, based on your needs. They should be created as part of the Prometheus config map, in a file named prometheus.rules, and referenced from prometheus.yml in the following way.

rule_files:
  - /etc/prometheus/prometheus.rules
  • Alerts can be written based on the metrics you receive in Prometheus (a minimal example rule follows this list).
  • To receive emails for alerts, you need a valid SMTP host in the Alertmanager config.yml (the smarthost parameter). You can customize the email template as per your needs in the alert template config map. We have given a generic template in this guide.
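As a minimal sketch of an additional rule, assuming you want to be alerted when any scrape target disappears, something like the following could be appended to the rules list in the prometheus.rules section of the config map (the alert name InstanceDown and the severity label are illustrative):

    - alert: InstanceDown
      expr: up == 0
      for: 1m
      labels:
        severity: slack
      annotations:
        summary: "Target {{ $labels.instance }} is down"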

A sample AlertManagerConfigmap.yaml file:

kind: ConfigMap
apiVersion: v1
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  config.yml: |-
    global:
    templates:
    - '/etc/alertmanager/*.tmpl'
    route:
      receiver: alert-emailer
      group_by: ['alertname', 'priority']
      group_wait: 10s
      repeat_interval: 30m
      routes:
      - receiver: slack_demo
        # Send severity=slack alerts to slack.
        match:
          severity: slack
        group_wait: 10s
        repeat_interval: 1m
    receivers:
    - name: alert-emailer
      email_configs:
      #- to: demo@devopscube.com
      - to: tharanga.rajapaksha@gmail.com
        send_resolved: false
        #from: from-email@email.com
        #smarthost: email-host-here
        from: tharanga.rajapaksha@gmail.com
        smarthost: smtp.gmail.com
        require_tls: false
    - name: slack_demo
      slack_configs:
      - api_url: https://hooks.slack.com/services/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sz4Qk
        channel: '#devopscube-demo'

Let’s create the config map using kubectl.

kubectl create -f AlertManagerConfigmap.yaml

Config Map for Alert Template

We need alert templates for all the receivers we use (email, Slack, etc.). The Alertmanager will dynamically substitute the values and deliver alerts to the receivers based on the template. You can customize these templates based on your needs.

Create a file named AlertTemplateConfigMap.yaml and copy the contents from this file link ==> Alert Manager Template YAML

https://raw.githubusercontent.com/devopscube/kubernetes-alert-manager/master/AlertManagerConfigmap.yaml

The above link provides a sample template file; just copy its contents into the file named above and create a config map.

Create the configmap using kubectl.

kubectl create -f AlertTemplateConfigMap.yaml

Create a Deployment

In this deployment, we will mount the two config maps we created.

Create a file called Deployment.yaml with the following contents.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: alertmanager
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alertmanager
  template:
    metadata:
      name: alertmanager
      labels:
        app: alertmanager
    spec:
      containers:
      - name: alertmanager
        image: prom/alertmanager:latest
        args:
        - "--config.file=/etc/alertmanager/config.yml"
        - "--storage.path=/alertmanager"
        ports:
        - name: alertmanager
          containerPort: 9093
        volumeMounts:
        - name: config-volume
          mountPath: /etc/alertmanager
        - name: templates-volume
          mountPath: /etc/alertmanager-templates
        - name: alertmanager
          mountPath: /alertmanager
      volumes:
      - name: config-volume
        configMap:
          name: alertmanager-config
      - name: templates-volume
        configMap:
          name: alertmanager-templates
      - name: alertmanager
        emptyDir: {}

Now create the deployment using kubectl.

kubectl create -f Deployment.yaml

Create a Service

We need to expose the Alertmanager using NodePort or a load balancer just to access the web UI. Prometheus will talk to the Alertmanager using the internal service endpoint.

Create a Service.yaml file with the following contents.

apiVersion: v1
kind: Service
metadata:
  name: alertmanager
  namespace: monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: alertmanager
  type: NodePort
  ports:
  - port: 9093
    targetPort: 9093
    nodePort: 31000

Create the service using kubectl.

kubectl create -f Service.yaml

With that, the Alertmanager setup is also done.

You may access the system on port 31000 of any node.

Ex :

AlertManager home page
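As a quick command-line check, the Alertmanager's built-in health endpoint should respond once the service is up (replace <node-ip> with one of your node IPs):

# should return HTTP 200 with a short "OK" body
curl http://<node-ip>:31000/-/healthy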
