As is well known, Kubernetes (k8s for short) is used to manage Docker clusters. I have spent quite a while recently wrestling with environment setup, so I am writing this post to help beginners like me avoid the same detours.
1. Environment
Cluster environment
| Role | IP address | k8s version | Docker version | OS version |
| --- | --- | --- | --- | --- |
| master | 192.63.63.1/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| node1 | 192.63.63.10/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
| node2 | 192.63.63.20/24 | v1.9.1 | 17.12.0-ce | CentOS 7.1 |
Required components on the master node
| Component | Purpose | Version |
| --- | --- | --- |
| etcd | Distributed key-value store that holds all cluster state | 3.2.11 |
| kube-apiserver | Core component; every other component communicates with it via its HTTP RESTful API | v1.9.1 |
| kube-controller-manager | Cluster control center; manages resources such as ReplicationControllers, Pods, and namespaces | v1.9.1 |
| kube-scheduler | Scheduler; assigns Pods to nodes | v1.9.1 |
Required components on the node
| Component | Purpose | Version |
| --- | --- | --- |
| kubelet | Core agent on each node; carries out the tasks the master assigns to it | v1.9.1 |
| kube-proxy | Network proxy that implements Service load balancing, forwarding requests to backend Pods | v1.9.1 |
2. Installation
The book "Kubernetes: The Definitive Guide" installs via yum install. As of this writing (2018-02-27), the version available from yum is 1.5.2, while the latest release is 1.9.1. The two differ considerably; the main difference is that the kubelet configuration file no longer supports the api-server parameter.
Although yum does not install the latest version, the rpm package is still a useful reference, e.g. for its systemd service scripts and the various k8s configuration files.
2.0 Installing etcd
As introduced above, etcd is the database that stores k8s state. It is not itself a k8s component, so it must be installed separately, which is easy via yum. The latest version currently available from yum is 3.2.11.
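The yum package also ships a default configuration at /etc/etcd/etcd.conf. For this setup, where only the apiserver on the same host talks to etcd at http://127.0.0.1:2379, the shipped defaults should be sufficient. The relevant entries look roughly like the excerpt below (taken from my memory of the CentOS etcd rpm; verify against the file actually installed on your host):

```ini
# /etc/etcd/etcd.conf (excerpt of the rpm defaults; verify on your host)
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
```

If etcd were to run on a separate host, ETCD_LISTEN_CLIENT_URLS would need to listen on a reachable address and KUBE_ETCD_SERVERS in the apiserver config would point at it.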
[root@localhost ~]# yum install etcd
2.1 Download and install
From the latest release download page, only the Server Binaries archive needs to be downloaded, because the components the node requires are included in it as well. After downloading, unpack it and copy the executables into a system directory:
[root@localhost packet]# tar -zxf kubernetes-server-linux-amd64.tar.gz
[root@localhost packet]# ls
kubernetes  kubernetes-server-linux-amd64.tar.gz  opensrc
[root@localhost packet]# cd kubernetes/server/bin
[root@localhost bin]# cp apiextensions-apiserver cloud-controller-manager hyperkube kubeadm kube-aggregator kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler mounter /usr/bin
2.2 Configuring the systemd services
The following unit files all come from the kubernetes 1.5.2 rpm package and live in /usr/lib/systemd/system.
[root@localhost system]# cat kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=etcd.service

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
User=kube
ExecStart=/usr/bin/kube-apiserver \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_ETCD_SERVERS \
            $KUBE_API_ADDRESS \
            $KUBE_API_PORT \
            $KUBELET_PORT \
            $KUBE_ALLOW_PRIV \
            $KUBE_SERVICE_ADDRESSES \
            $KUBE_ADMISSION_CONTROL \
            $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@localhost system]# cat kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
User=kube
ExecStart=/usr/bin/kube-controller-manager \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@localhost system]# cat kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_ARGS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

[root@localhost system]# cat kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

[root@localhost system]# cat kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kube
ExecStart=/usr/bin/kube-scheduler \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
2.3 Configuring k8s
The systemd unit files above show that we need to create the /etc/kubernetes directory and the related configuration files.
[root@localhost kubernetes]# ls
apiserver  config  controller-manager  kubelet  proxy  scheduler
[root@localhost kubernetes]# cat config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
In the apiserver file, change --insecure-bind-address to 0.0.0.0 (or to the host's external IP) so the apiserver accepts connections from any address.
[root@localhost kubernetes]# cat apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
# KUBE_API_PORT="--port=8080"

# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
The kubelet configuration file: its most important setting used to be the address of the apiserver, but since v1.8 the --api-servers flag is no longer supported and must be commented out. Which raises the question: how does the kubelet find the apiserver now?
[root@localhost kubernetes]# cat kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=127.0.0.1"

# location of the api-server
##KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=docker.io/kubernetes/pause"

# Add your own!
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
The remaining configuration files are essentially empty.
[root@localhost kubernetes]# cat controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS=""

[root@localhost kubernetes]# cat proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS=""

[root@localhost kubernetes]# cat scheduler
###
# kubernetes scheduler config

# default config should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--loglevel=0"
All of the configuration above is done on the master node. Once it is complete, k8s runs as a single-node cluster: the machine acts as both master and node. The node side still has one remaining issue, described below.
3. HTTP mode
As mentioned earlier, since v1.8 the kubelet no longer supports the api-server parameter. So how does a newer kubelet communicate with the apiserver? Through the kubeconfig parameter, which points at a configuration file. (This was a big pitfall that cost me a long time.)
The /etc/kubernetes/kubelet configuration file contains the entry
KUBELET_ARGS="--fail-swap-on=false --cgroup-driver=cgroupfs --kubeconfig=/var/lib/kubelet/kubeconfig"
which specifies where the kubeconfig file lives. Its content:
[root@localhost kubernetes]# cat /var/lib/kubelet/kubeconfig
apiVersion: v1
clusters:
- cluster:
    server: http://127.0.0.1:8080
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
To explain the fields:
1) clusters - the clusters this client knows about; multiple clusters are supported. Each entry must specify server, the address of the apiserver. HTTPS is also supported here, described in detail later.
2) contexts - cluster contexts; multiple contexts are supported.
3) current-context - the context currently in use.
The remaining fields will be explained together with the HTTPS setup.
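Since every node needs the same kubeconfig with only the server address changed, the file can be generated with a small script. This is only a sketch: the script name, its arguments, and the default output path (the current directory rather than /var/lib/kubelet/kubeconfig, so it can run anywhere) are illustrative choices, not part of the original setup.

```shell
#!/bin/sh
# Sketch: generate the kubeconfig shown above for a given apiserver address.
# On a real node the output belongs at /var/lib/kubelet/kubeconfig.
APISERVER=${1:-http://127.0.0.1:8080}   # e.g. http://192.63.63.1:8080 on node1
OUT=${2:-kubeconfig}                    # illustrative default output path
cat > "$OUT" <<EOF
apiVersion: v1
clusters:
- cluster:
    server: $APISERVER
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: ""
  name: myk8s-context
current-context: myk8s-context
kind: Config
preferences: {}
users: []
EOF
echo "wrote $OUT pointing at $APISERVER"
```

Running it with the master's address as the first argument would produce the file each node needs.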
With that, the single-node deployment is complete; start each service:
[root@localhost k8s]# systemctl start docker
[root@localhost k8s]# systemctl start etcd
[root@localhost k8s]# systemctl start kube-apiserver
[root@localhost k8s]# systemctl start kube-controller-manager
[root@localhost k8s]# systemctl start kube-scheduler
[root@localhost k8s]# systemctl start kubelet
[root@localhost k8s]# systemctl start kube-proxy
Verify that the environment is working:
[root@localhost k8s]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    16d       v1.9.1
Everything so far has been configuration on the master node. Next we configure node1 to access the apiserver over HTTP.
First copy the kubelet and kube-proxy binaries and the related configuration files to node1, then move them into the corresponding directories:
[root@node1 k8s_node]# ls bin-file/ config-file/
bin-file/:
kubelet  kube-proxy

config-file/:
config  kubeconfig  kubelet  kubelet.service  kube-proxy.service  proxy
[root@node1 k8s_node]# mv bin-file/kubelet bin-file/kube-proxy /usr/bin
[root@node1 k8s_node]# mkdir /etc/kubernetes
[root@node1 k8s_node]# mv config-file/config config-file/kubelet config-file/proxy /etc/kubernetes/
[root@node1 k8s_node]# mv config-file/kubelet.service config-file/kube-proxy.service /usr/lib/systemd/system
[root@node1 k8s_node]# mkdir /var/lib/kubelet
[root@node1 k8s_node]# mv config-file/kubeconfig /var/lib/kubelet/
Two key changes:
1) In /var/lib/kubelet/kubeconfig, change the server address to http://192.63.63.1:8080.
2) In /etc/kubernetes/kubelet, change KUBELET_HOSTNAME to "--hostname-override=node1".
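The two edits above can also be scripted with sed. A sketch, demonstrated on throwaway copies (the file names node1-kubeconfig and node1-kubelet are placeholders so the sketch runs anywhere); on the real node you would run the same sed commands against /var/lib/kubelet/kubeconfig and /etc/kubernetes/kubelet directly.

```shell
#!/bin/sh
# Sketch: script the two node1 edits with sed, shown on throwaway copies.
cp_kubeconfig=node1-kubeconfig
cp_kubelet=node1-kubelet
printf '    server: http://127.0.0.1:8080\n' > "$cp_kubeconfig"
printf 'KUBELET_HOSTNAME="--hostname-override=127.0.0.1"\n' > "$cp_kubelet"

# 1) point the kubeconfig at the master's apiserver
sed -i 's#http://127.0.0.1:8080#http://192.63.63.1:8080#' "$cp_kubeconfig"
# 2) register the node under its own name instead of 127.0.0.1
sed -i 's#--hostname-override=127.0.0.1#--hostname-override=node1#' "$cp_kubelet"

cat "$cp_kubeconfig" "$cp_kubelet"
```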
Start the docker, kubelet, and kube-proxy services in turn, then verify from the master:
[root@localhost ~]# kubectl get nodes
NAME        STATUS    ROLES     AGE       VERSION
127.0.0.1   Ready     <none>    17d       v1.9.1
node1       Ready     <none>    5m        v1.9.1
When node1 appears with STATUS Ready, the deployment succeeded.
That completes the k8s deployment with HTTP access; the next post covers the HTTPS setup.