This document outlines the various binary components that need to run to deliver a functioning Kubernetes cluster.
Master components are those that provide the cluster’s control plane. They are responsible for making global decisions about the cluster (e.g., scheduling) and for detecting and responding to cluster events (e.g., starting a new pod when a replication controller’s ‘replicas’ field is unsatisfied).
Master components could in theory be run on any node in the cluster. However, for simplicity, current setup scripts typically start all master components on the same VM and do not run user containers on that VM. See high-availability.md for an example multi-master-VM setup.
Even in the future, when Kubernetes is fully self-hosting, it will probably be wise to allow master components to schedule only on a subset of nodes, limiting co-location with user-run pods and reducing the possible scope of a node-compromising security exploit.
kube-apiserver exposes the Kubernetes API; it is the front end for the Kubernetes control plane. It is designed to scale horizontally, i.e., it scales by running more instances (see high-availability.md).
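As a rough illustration of how clients talk to the control plane through kube-apiserver, the sketch below uses the Go client (client-go) to load a kubeconfig and list the cluster's nodes; the kubeconfig path is an assumption and should be adjusted for your environment.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load credentials from a kubeconfig file (the path is an assumption;
	// adjust it for your environment).
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Every read and write goes through kube-apiserver; here we simply
	// list the cluster's nodes.
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}
```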
etcd is used as Kubernetes’ backing store. All cluster data is stored here. Proper administration of a Kubernetes cluster includes a backup plan for etcd’s data.
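One common element of such a backup plan is taking periodic snapshots of the etcd keyspace (for example with etcdctl). The sketch below is a minimal Go version of that idea, assuming the etcd v3 Go client and an etcd endpoint reachable at 127.0.0.1:2379; it streams a point-in-time snapshot to a local file.

```go
package main

import (
	"context"
	"io"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// The endpoint is an assumption; point this at your etcd cluster.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Stream a point-in-time snapshot of the keyspace.
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	// Write the snapshot to a local file for safekeeping.
	out, err := os.Create("etcd-backup.db")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, rc); err != nil {
		panic(err)
	}
}
```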
kube-controller-manager is a binary that runs controllers, which are the background threads that handle routine tasks in the cluster. Logically, each controller is a separate process, but to reduce the number of moving pieces in the system, they are all compiled into a single binary and run in a single process.
These controllers include the node controller, the replication controller, the endpoints controller, and the service account and token controllers.
kube-scheduler watches newly created pods that have no node assigned, and selects a node for them to run on.
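Both the controllers and the scheduler follow the same basic control-loop pattern: watch the cluster's state through kube-apiserver, compare it with the desired state, and act to converge the two. The sketch below is purely conceptual (the DesiredState and ObservedState types and the fixed numbers are hypothetical, not taken from the Kubernetes source) and mirrors the replication controller example above: if fewer pods are running than the ‘replicas’ field requests, more must be created.

```go
package main

import (
	"fmt"
	"time"
)

// DesiredState and ObservedState are hypothetical stand-ins for the objects a
// real controller reads from the API server, e.g. a replication controller's
// spec and the pods currently running for it.
type DesiredState struct{ Replicas int }
type ObservedState struct{ RunningPods int }

// reconcile returns how many pods would need to be created (positive) or
// deleted (negative) to make observed state match desired state.
func reconcile(desired DesiredState, observed ObservedState) int {
	return desired.Replicas - observed.RunningPods
}

func main() {
	// A real controller watches the API server for changes; this sketch
	// simply polls on a timer to keep the idea visible.
	for range time.Tick(10 * time.Second) {
		desired := DesiredState{Replicas: 3}      // would come from the object's spec
		observed := ObservedState{RunningPods: 2} // would come from observed pod status

		if delta := reconcile(desired, observed); delta != 0 {
			// A real controller would now create or delete pods through
			// the API server; here we only report the decision.
			fmt.Printf("need to adjust pod count by %d\n", delta)
		}
	}
}
```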
Addons are pods and services that implement cluster features. They don’t run on the master VM, but currently the default setup scripts that make the API calls to create these pods and services do run on the master VM. See: kube-master-addons
Addon objects are created in the “kube-system” namespace.
While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.
Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.
Containers started by Kubernetes automatically include this DNS server in their DNS searches.
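For example, a pod can resolve a service simply by name, because its /etc/resolv.conf points at the cluster DNS server. A minimal Go sketch, meant to run inside a pod and using a hypothetical service and namespace name:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Inside a pod, /etc/resolv.conf points at the cluster DNS server, so
	// service names resolve without any extra configuration. The service
	// and namespace names below are hypothetical.
	ips, err := net.LookupIP("my-service.my-namespace")
	if err != nil {
		panic(err)
	}
	for _, ip := range ips {
		fmt.Println(ip.String())
	}
}
```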
The kube-ui provides a read-only overview of the cluster state. Access the UI using kubectl proxy.
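kubectl proxy runs a local proxy (by default on port 8001) that forwards authenticated requests to kube-apiserver, so the UI and other in-cluster HTTP endpoints can be reached from a workstation. The sketch below, which assumes kubectl proxy is already running with its default settings, lists the services in the kube-system namespace through that local proxy.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Assumes `kubectl proxy` is already running locally on its default
	// port (8001) and forwarding authenticated requests to kube-apiserver.
	// Listing the services in kube-system shows the installed addons,
	// including kube-ui if it is present.
	resp, err := http.Get("http://127.0.0.1:8001/api/v1/namespaces/kube-system/services")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```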
Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.
Container Logging saves container logs to a central log store with a search/browsing interface. There are two implementations: cluster-level logging with Elasticsearch and Kibana, and cluster-level logging to Google Cloud Logging.
Node components run on every node, maintaining running pods and providing them the Kubernetes runtime environment.
kubelet is the primary node agent. It watches for pods that have been assigned to its node, mounts the pods’ required volumes, downloads their secrets, runs their containers via docker (or, experimentally, rkt), periodically executes any requested container liveness probes, and reports the status of the pods and of the node back to the rest of the system.
kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
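Because of this, client pods only need a service's stable DNS name or cluster IP; kube-proxy's rules on the node transparently forward the connection to one of the service's backend pods. A minimal sketch from inside a pod, with a hypothetical service name and port:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The service name and port are hypothetical. The connection is made
	// to the service's virtual address; kube-proxy's rules on the node
	// forward it to an actual backend pod.
	conn, err := net.DialTimeout("tcp", "my-service.my-namespace:80", 5*time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	fmt.Println("connected to", conn.RemoteAddr())
}
```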
docker is of course used for actually running containers.
rkt is supported experimentally as an alternative to docker.
supervisord is a lightweight process babysitting system for keeping kubelet and docker running.
fluentd is a daemon which helps provide cluster-level logging.