

Logging with Elasticsearch and Kibana

On the Google Compute Engine (GCE) platform, the default cluster-level logging support targets Google Cloud Logging, as described on the Logging getting-started page. Here we describe how to set up a cluster to ingest logs into Elasticsearch and view them with Kibana, as an alternative to Google Cloud Logging.

To use Elasticsearch and Kibana for cluster logging, set the environment variable KUBE_LOGGING_DESTINATION=elasticsearch when creating the cluster.
You should also ensure that KUBE_ENABLE_NODE_LOGGING=true (which is the default for the GCE platform).
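Putting both settings together, the environment can be prepared before running the cluster start-up script. A minimal sketch (KUBE_ENABLE_NODE_LOGGING is already the GCE default and is exported here only for completeness):

```shell
# Select Elasticsearch as the cluster logging destination and make sure
# node-level logging is on before bringing the cluster up.
export KUBE_LOGGING_DESTINATION=elasticsearch
export KUBE_ENABLE_NODE_LOGGING=true
```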

Now, when you create a cluster, a message will indicate that the Fluentd node-level log collectors will target Elasticsearch:

$ cluster/
Project: kubernetes-satnam
Zone: us-central1-b
... calling kube-up
Project: kubernetes-satnam
Zone: us-central1-b
+++ Staging server tars to Google Storage: gs://kubernetes-staging-e6d0e81793/devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha1 = 6987c098277871b6d69623141276924ab687f89d)
+++ kubernetes-salt.tar.gz uploaded (sha1 = bdfc83ed6b60fa9e3bff9004b542cfc643464cd0)
Looking for already existing resources
Starting master and configuring firewalls
Created [].
NAME                 ZONE          SIZE_GB TYPE   STATUS
kubernetes-master-pd us-central1-b 20      pd-ssd READY
Created [].
+++ Logging using Fluentd to elasticsearch

The node-level Fluentd collector pods, the Elasticsearch pods used to ingest the cluster logs, and the Kibana viewer pod should all be running in the kube-system namespace soon after the cluster comes to life.

$ kubectl get pods --namespace=kube-system
NAME                                           READY     REASON    RESTARTS   AGE
elasticsearch-logging-v1-78nog                 1/1       Running   0          2h
elasticsearch-logging-v1-nj2nb                 1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-5oq0     1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-6896     1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-l1ds     1/1       Running   0          2h
fluentd-elasticsearch-kubernetes-node-lz9j     1/1       Running   0          2h
kibana-logging-v1-bhpo8                        1/1       Running   0          2h
kube-dns-v3-7r1l9                              3/3       Running   0          2h
monitoring-heapster-v4-yl332                   1/1       Running   1          2h
monitoring-influx-grafana-v1-o79xf             2/2       Running   0          2h

Here we see that, for a four-node cluster, there is a fluentd-elasticsearch pod running on each node, gathering the Docker container logs and sending them to Elasticsearch. The Fluentd collectors communicate with a Kubernetes service that maps requests to specific Elasticsearch pods. Similarly, Kibana is accessed via a Kubernetes service definition.

$ kubectl get services --namespace=kube-system
NAME                    LABELS                                    SELECTOR                        PORT(S)
elasticsearch-logging   k8s-app=elasticsearch-logging,...         k8s-app=elasticsearch-logging   9200/TCP
kibana-logging          k8s-app=kibana-logging,...                k8s-app=kibana-logging          5601/TCP
kube-dns                k8s-app=kube-dns,...                      k8s-app=kube-dns                53/UDP
kubernetes              component=apiserver,provider=kubernetes   <none>                          443/TCP
monitoring-grafana      ...                                       k8s-app=influxGrafana           80/TCP
monitoring-heapster     ...                                       k8s-app=heapster                80/TCP
monitoring-influxdb     ...                                       k8s-app=influxGrafana           8083/TCP

By default, two Elasticsearch replicas and one Kibana replica are created.

$ kubectl get rc --namespace=kube-system
CONTROLLER                     CONTAINER(S)            SELECTOR                                   REPLICAS
elasticsearch-logging-v1       elasticsearch-logging   k8s-app=elasticsearch-logging,version=v1   2
kibana-logging-v1              kibana-logging          k8s-app=kibana-logging,version=v1          1
kube-dns-v3                    etcd                    k8s-app=kube-dns,version=v3                1
monitoring-heapster-v4         heapster                k8s-app=heapster,version=v4                1
monitoring-influx-grafana-v1   influxdb                k8s-app=influxGrafana,version=v1           1

The Elasticsearch and Kibana services are not directly exposed via a publicly reachable IP address. Instead, they can be accessed via the service proxy running at the master. The URLs for accessing Elasticsearch and Kibana via the service proxy can be found using the kubectl cluster-info command.

$ kubectl cluster-info
Kubernetes master is running at
Elasticsearch is running at
Kibana is running at
KubeDNS is running at
KubeUI is running at
Grafana is running at
Heapster is running at
InfluxDB is running at
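Each of those endpoints follows the master's service-proxy URL pattern. A minimal sketch of how such a URL is assembled, using a hypothetical master address (substitute the one reported by kubectl cluster-info for your cluster):

```shell
# Hypothetical master address; take the real one from `kubectl cluster-info`.
MASTER="https://104.197.5.247"
# Service-proxy URL pattern for a service in the kube-system namespace.
ES_URL="${MASTER}/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/"
echo "${ES_URL}"
```

The same pattern with kibana-logging in place of elasticsearch-logging yields the Kibana URL.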

Before accessing the logs ingested into Elasticsearch with a browser and the service proxy URL, we need to find the admin password for the cluster using kubectl config view.

$ kubectl config view
...
users:
- name: kubernetes-satnam_kubernetes-basic-auth
  user:
    password: 7GlspJ9Q43OnGIJO
    username: admin

The first time you try to access the cluster from a browser a dialog box appears asking for the username and password. Use the username admin and provide the basic auth password reported by kubectl config view for the cluster you are trying to connect to. Connecting to the Elasticsearch URL should then give the status page for Elasticsearch.

Elasticsearch Status

You can now type Elasticsearch queries directly into the browser. Alternatively you can query Elasticsearch from your local machine using curl but first you need to know what your bearer token is:

$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
  name: kubernetes-satnam_kubernetes
contexts:
- context:
    cluster: kubernetes-satnam_kubernetes
    user: kubernetes-satnam_kubernetes
  name: kubernetes-satnam_kubernetes
current-context: kubernetes-satnam_kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-satnam_kubernetes
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
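The token can also be extracted mechanically rather than copied by eye. A sketch that parses a saved copy of the output, so the example is self-contained (the token value is the sample one above):

```shell
# Save a copy of the relevant `kubectl config view --minify` output lines.
cat > /tmp/kubeconfig-view.txt <<'EOF'
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp
EOF
# Pull out the value of the token: field.
TOKEN=$(awk '/token:/ {print $2}' /tmp/kubeconfig-view.txt)
echo "Bearer token: ${TOKEN}"
```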

Now you can issue requests to Elasticsearch:

$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure
{
  "status" : 200,
  "name" : "Vance Astrovik",
  "cluster_name" : "kubernetes-logging",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search:

$ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insecure
{
  "took" : 7,
  "timed_out" : false,
  "_shards" : {
    "total" : 6,
    "successful" : 6,
    "failed" : 0
  },
  "hits" : {
    "total" : 123711,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : ".kibana",
      "_type" : "config",
      "_id" : "4.0.2",
      "_score" : 1.0
    }, {
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxtx",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 31: 2015-06-22 20:35:33.597918073+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:33+00:00"}
    }, {
      "_index" : "logstash-2015.06.22",
      "_type" : "fluentd",
      "_id" : "AU4c_GvFZL5p_gZ8dxt2",
      "_score" : 1.0,
      "_source":{"log":"synthetic-logger-10lps-pod: 36: 2015-06-22 20:35:34.108780133+00:00\n","stream":"stdout","tag":"kubernetes.synthetic-logger-10lps-pod_default_synth-lgr","@timestamp":"2015-06-22T20:35:34+00:00"}
    } ]
  }
}

The Elasticsearch website contains information about URI search queries which can be used to extract the required logs.
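As one hedged sketch of the idea, a URI search can be expressed entirely in the URL's query string; the field name (stream) and index name below are taken from the sample hits above, and the path is appended to the Elasticsearch service proxy URL:

```shell
# Hypothetical URI search: up to two stdout log records from the 2015.06.22 index.
SEARCH_PATH='/logstash-2015.06.22/_search?q=stream:stdout&size=2'
echo "${SEARCH_PATH}"
```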

Alternatively, you can view the ingested logs using Kibana. The first time you visit the Kibana URL you will be presented with a page that asks you to configure your view of the ingested logs. Select the option for timeseries values and select @timestamp. On the following page select the Discover tab and then you should be able to see the ingested logs. You can set the refresh interval to 5 seconds to have the logs regularly refreshed. Here is a typical view of ingested logs from the Kibana viewer.

Kibana logs

Another way to access Elasticsearch and Kibana in the cluster is to use kubectl proxy which will serve a local proxy to the remote master:

$ kubectl proxy
Starting to serve on localhost:8001

Now you can visit the URL http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging to contact Elasticsearch and http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kibana-logging to access the Kibana viewer.
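Both URLs share the same service-proxy prefix on the local proxy, so they differ only in the service name at the end; a small sketch:

```shell
# Service-proxy prefix served locally by `kubectl proxy` (assumed to be running
# on its default address, localhost:8001).
BASE="http://localhost:8001/api/v1/proxy/namespaces/kube-system/services"
echo "${BASE}/elasticsearch-logging"
echo "${BASE}/kibana-logging"
```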