This is a getting-started guide for Fedora. It walks through a manual configuration so that you understand all of the underlying packages, services, ports, and so on.
This guide will only get ONE node (previously called a minion) working. Multiple nodes require a functional networking configuration done outside of Kubernetes, although the additional Kubernetes configuration requirements should be obvious.
The Kubernetes package provides a few services: kube-apiserver, kube-scheduler, kube-controller-manager, kubelet, and kube-proxy. These services are managed by systemd, and their configuration resides in a central location: /etc/kubernetes. We will split the services between the hosts. The first host, fed-master, will be the Kubernetes master and will run kube-apiserver, kube-controller-manager, and kube-scheduler. In addition, the master will also run etcd (not needed if etcd runs on a different host, but this guide assumes that etcd and the Kubernetes master run on the same host). The remaining host, fed-node, will be the node and will run kubelet, kube-proxy, and docker.
System Information:
Hosts:
fed-master = 192.168.121.9
fed-node = 192.168.121.65
Prepare the hosts:
Install Kubernetes on all hosts (both fed-master and fed-node):
yum -y install --enablerepo=updates-testing kubernetes
Install etcd and iptables:
yum -y install etcd iptables
Add the master and node to /etc/hosts on all machines (not needed if the hostnames are already resolvable via DNS):
echo "192.168.121.9 fed-master
192.168.121.65 fed-node" >> /etc/hosts
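To confirm the packages landed and to see the pieces described earlier in place, an optional quick check is (package and unit names follow the Fedora packaging used above):
rpm -q kubernetes etcd iptables
ls /etc/kubernetes/
systemctl list-unit-files 'kube*'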
Edit /etc/kubernetes/config, which will be the same on all hosts (master and node), to contain:
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://fed-master:8080"
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
Disable the firewall on both the master and node, as docker does not play well with other firewall rule managers (note that the iptables service, provided by the iptables-services package, is not present on a default Fedora Server install):
systemctl disable iptables firewalld
systemctl stop iptables firewalld
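As an optional sanity check that nothing is still filtering traffic:
systemctl is-active firewalld
iptables -L -n
With the firewall off, firewalld should report inactive and the iptables chains should be empty with a default ACCEPT policy.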
Configure the Kubernetes services on the master.
Edit /etc/kubernetes/apiserver to appear as follows. The service-cluster-ip-range must be an unused block of addresses, not used anywhere else; it does not need to be routed or assigned to anything.
# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:4001"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Add your own!
KUBE_API_ARGS=""
Edit /etc/etcd/etcd.conf so that etcd listens on all available IPs; it should contain:
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:4001"
Create /var/run/kubernetes on the master:
mkdir -p /var/run/kubernetes
chown kube:kube /var/run/kubernetes
chmod 750 /var/run/kubernetes
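Note that /var/run is a tmpfs on Fedora, so this directory disappears on reboot. An optional tmpfiles.d entry (a sketch, using the kube user and group created by the kubernetes package) recreates it at boot:
echo "d /var/run/kubernetes 0750 kube kube -" > /etc/tmpfiles.d/kubernetes.conf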
Start the appropriate services on the master:
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
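A quick way to confirm etcd and the API server are up (assuming the default ports configured above):
curl http://fed-master:4001/version
curl http://fed-master:8080/healthz
etcd should answer with its version and the API server should answer with "ok".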
Addition of nodes:
Create the following node.json file on the Kubernetes master node:
{
    "apiVersion": "v1",
    "kind": "Node",
    "metadata": {
        "name": "fed-node",
        "labels": { "name": "fed-node-label" }
    },
    "spec": {
        "externalID": "fed-node"
    }
}
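Optionally, sanity-check the JSON before handing it to kubectl; this is just a generic JSON validity check, nothing Kubernetes-specific:
python -m json.tool node.json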
Now create a node object internally in your Kubernetes cluster by running:
$ kubectl create -f ./node.json
$ kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Unknown
Please note that the above only creates a representation of the node fed-node internally; it does not provision the actual fed-node. Also, it is assumed that fed-node (as specified in name) can be resolved and is reachable from the Kubernetes master node. This guide will discuss how to provision a Kubernetes node (fed-node) below.
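Before continuing, you can confirm that fed-node really is resolvable and reachable from the master; these are plain networking checks, nothing Kubernetes-specific:
getent hosts fed-node
ping -c 1 fed-node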
Configure the Kubernetes services on the node.
We need to configure the kubelet on the node. Edit /etc/kubernetes/kubelet to appear as follows:
###
# Kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=fed-node"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://fed-master:8080"
# Add your own!
#KUBELET_ARGS=""
Start the appropriate services on the node (fed-node):
for SERVICES in kube-proxy kubelet docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
Check that the cluster can now see fed-node on fed-master, and that its status changes to Ready:
kubectl get nodes
NAME       LABELS                STATUS
fed-node   name=fed-node-label   Ready
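If the node stays in Unknown or NotReady, the kubelet journal on fed-node is usually the first place to look; this is generic systemd troubleshooting rather than part of the original setup steps:
journalctl -u kubelet --no-pager | tail -n 50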
To delete fed-node from your Kubernetes cluster, run the following on fed-master (do not actually run it now; it is shown for information only):
kubectl delete -f ./node.json
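Equivalently (same effect, just addressing the object by name), you could run:
kubectl delete node fed-node
Either way, this only removes the API object; it does not change anything on the fed-node host itself.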
You should be finished!
The cluster should be running! Launch a test pod.
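For example, a minimal smoke test might look like the following; the pod and image names are purely illustrative and assume fed-node can pull images from Docker Hub:
kubectl run test-nginx --image=nginx
kubectl get pods
The test pod should eventually reach Running once the image has been pulled on fed-node.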
You should have a functional cluster; check out Kubernetes 101 next!
IaaS Provider | Config. Mgmt | OS | Networking | Docs | Conforms | Support Level
---|---|---|---|---|---|---
Bare-metal | custom | Fedora | none | docs | | Project
For support level information on all solutions, see the Table of solutions chart.