Copyright notice: reposting is welcome; please credit the source: https://blog.csdn.net/boling_cavalry/article/details/83692606
This is the second post in the series "Deploying Kubernetes 1.12 on CentOS 7 in Five Parts". In the previous post we completed the preparation work that every machine needs; today we use those prepared machines to create the master node of the Kubernetes environment.
Official documentation
The official documentation is the most authoritative reference: https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Prerequisites
This walkthrough requires a network connection that can reach Google's image registries such as k8s.gcr.io (readers behind a firewall will need a proxy); the Kubernetes operations here are intended for learning and practice.
Hands-on steps
- Log in to the master node over ssh as root;
- Edit /etc/hostname to make sure every machine's hostname is unique;
- Initialize Kubernetes:
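The kubelet registers each node under its hostname, so duplicate names cause join failures; the name must also be a valid DNS-1123 label (lowercase letters, digits, and hyphens). A small sketch for checking a candidate name before assigning it (valid_node_name is my own helper name, not part of any Kubernetes tooling):

```shell
# Sketch: validate that a hostname is a legal DNS-1123 label before assigning it.
# valid_node_name is a hypothetical helper for illustration only.
valid_node_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$'
}
# Usage on each machine (example name):
#   hostnamectl set-hostname k8s-master && valid_node_name "$(hostname)"
```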
kubeadm init \
--kubernetes-version=v1.12.2 \
--pod-network-cidr=10.244.0.0/16
The console output below indicates that the Docker images are being downloaded:
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
- After a long wait, the image download completes and initialization succeeds; the console prints the following:
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.119.157:6443 --token jtoche.kcb0kvylmdyfh089 --discovery-token-ca-cert-hash sha256:76090108cf1281c3c2b82b315f25d85380fadfa545581745c13600a0800016df
Save the entire last line, "kubeadm join 192.168.119.157:6443 …"; it is needed later when node machines join the Kubernetes cluster;
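If that line is lost, the ca-cert hash can be recomputed from the master's CA certificate (kubeadm stores it at /etc/kubernetes/pki/ca.crt by default). A sketch using the standard openssl recipe; the helper name ca_cert_hash is my own:

```shell
# Sketch: recompute the --discovery-token-ca-cert-hash value from a CA certificate.
# ca_cert_hash is a hypothetical helper; the openssl pipeline is the standard recipe.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}
# On the master: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

A fresh token together with the full join command can also be regenerated on the master with `kubeadm token create --print-join-command`.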
- The output above asks for some configuration to be done; since we are logged in as root, run the following commands (no sudo needed):
mkdir -p $HOME/.kube \
&& cp -i /etc/kubernetes/admin.conf $HOME/.kube/config \
&& chown $(id -u):$(id -g) $HOME/.kube/config
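Before going further it is worth confirming that the copied config is actually in place; a trivial guard (kubeconfig_ok is my own helper name for illustration):

```shell
# Sketch: verify the kubeconfig was copied and is readable before calling kubectl.
# kubeconfig_ok is a hypothetical helper, not a real kubectl feature.
kubeconfig_ok() {
  [ -f "$1" ] && [ -r "$1" ]
}
# Usage: kubeconfig_ok "$HOME/.kube/config" && kubectl cluster-info
```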
- Deploy the pod network. The official documentation lists several options; if Flannel is chosen, the kubeadm init command must carry the --pod-network-cidr=10.244.0.0/16 parameter, which we already included when running kubeadm init above.
To install Flannel, execute the following command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
After it succeeds, the console output is as follows:
[root@localhost ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
- Run the following command to check the pods:
kubectl get pods --all-namespaces
The console output is shown below; if your pod list is shorter than this, some pods may not have started yet, so wait a moment and query again:
[root@localhost ~]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-576cbf47c7-564dg 1/1 Running 0 164m
kube-system coredns-576cbf47c7-snqkd 1/1 Running 0 164m
kube-system etcd-localhost.localdomain 1/1 Running 0 164m
kube-system kube-apiserver-localhost.localdomain 1/1 Running 0 163m
kube-system kube-controller-manager-localhost.localdomain 1/1 Running 0 163m
kube-system kube-flannel-ds-amd64-r8wbb 1/1 Running 0 4m17s
kube-system kube-proxy-z7kn2 1/1 Running 0 164m
kube-system kube-scheduler-localhost.localdomain 1/1 Running 0 163m
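When pods are missing or stuck, it helps to filter for anything not yet Running. A sketch that parses the plain-text kubectl output (filter_not_running is my own helper name; the column positions assume kubectl get pods without --all-namespaces, where STATUS is the third column):

```shell
# Sketch: read `kubectl get pods -n kube-system --no-headers` output on stdin and
# print the name and status of every pod whose STATUS column is not "Running".
# filter_not_running is a hypothetical helper for illustration.
filter_not_running() {
  awk '$3 != "Running" {print $1 ": " $3}'
}
# Usage on the master:
#   kubectl get pods -n kube-system --no-headers | filter_not_running
# Stuck pods can then be inspected with:
#   kubectl -n kube-system describe pod <pod-name>
```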
- Run docker images to see which images have been downloaded:
[root@localhost ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.12.2 15e9da1ca195 10 days ago 96.5 MB
k8s.gcr.io/kube-apiserver v1.12.2 51a9c329b7c5 10 days ago 194 MB
k8s.gcr.io/kube-controller-manager v1.12.2 15548c720a70 10 days ago 164 MB
k8s.gcr.io/kube-scheduler v1.12.2 d6d57c76136c 10 days ago 58.3 MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 6 weeks ago 220 MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 2 months ago 39.2 MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 9 months ago 44.6 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 10 months ago 742 kB
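As a quick sanity check, the k8s.gcr.io images in that list can be counted; after a successful init there should be seven of them, matching the listing above. A sketch (count_k8s_images is my own helper name):

```shell
# Sketch: count the k8s.gcr.io images in `docker images` output read from stdin.
# count_k8s_images is a hypothetical helper for illustration.
count_k8s_images() {
  awk '$1 ~ /^k8s\.gcr\.io\// {n++} END {print n+0}'
}
# Usage: docker images | count_k8s_images
```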
At this point the master node has been deployed successfully; in the next post we continue the hands-on work and join node machines to the cluster.