Introduction to DaemonSet
A DaemonSet controller ensures that all (or some) nodes run a copy of a specified Pod.
- When a node is added to the cluster, a copy of the Pod is added to that node.
- When a node is removed from the cluster, its Pod is garbage collected.
- Deleting a DaemonSet cleans up all the Pods it created.
Typical uses of a DaemonSet include:
- running a cluster storage daemon on every node, e.g. glusterd or ceph
- running a log collection daemon on every node, e.g. fluentd or logstash
- running a monitoring daemon on every node, e.g. Prometheus Node Exporter, Sysdig Agent, collectd, Dynatrace OneAgent, AppDynamics Agent, Datadog agent, New Relic agent, Ganglia gmond, or Instana Agent
In the simple case, one DaemonSet covers all nodes. A more complex setup might run multiple DaemonSets for a single type of daemon, each targeting a different hardware type with different CPU and memory requests.
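A minimal sketch of such a per-hardware-type DaemonSet; the `disktype: ssd` node label, the names, and the resource values are illustrative assumptions, not from the original:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent-ssd        # hypothetical name
spec:
  selector:
    matchLabels:
      name: log-agent-ssd
  template:
    metadata:
      labels:
        name: log-agent-ssd
    spec:
      nodeSelector:
        disktype: ssd        # only schedule onto nodes labeled disktype=ssd
      containers:
      - name: log-agent
        image: fluentd       # example image from the use cases above
        resources:
          requests:
            cpu: 100m        # illustrative request for this hardware class
            memory: 200Mi
```

A sibling DaemonSet for another hardware class would use a different `nodeSelector` and different request values.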
Example:
Create the YAML file (note: the selector's matchLabels must match the Pod template's labels):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-example
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: daemonset-example
  template:
    metadata:
      labels:
        name: daemonset-example
    spec:
      containers:
      - name: daemonset-example
        image: nginx:v1
Create it:
[root@apiserver ~]# kubectl create -f daemont-set.yaml
daemonset.apps/daemonset-example created
Check the Pods (I only created one node, so you can't see a Pod being created on every node):
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
daemonset-example-2pkql 1/1 Running 0 27s
After deleting the Pod, a new one is started in its place:
[root@apiserver ~]# kubectl delete pod daemonset-example-2pkql
pod "daemonset-example-2pkql" deleted
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
daemonset-example-mxwsw 1/1 Running 0 2s
Introduction to the Job controller
A Job object in Kubernetes creates one or more Pods and ensures that a specified number of them run to successful completion:
- When a Pod created by the Job finishes successfully, the Job records the number of successfully completed Pods.
- When the number of successful completions reaches the specified count, the Job is complete.
- Deleting a Job object cleans up the Pods it created.
A simple example: create a Job object to ensure that one Pod runs to successful completion. If the first Pod fails or is deleted (for example, due to a node hardware failure or a machine reboot), the Job creates a new Pod to run again.
Of course, you can also use a Job to run multiple Pods in parallel.
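A sketch of such a parallel Job: `completions` is the total number of successful Pods required and `parallelism` is how many run at once (both are real fields of the `batch/v1` Job spec; the name, image, and values here are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-example   # hypothetical name
spec:
  completions: 6           # the Job is complete after 6 Pods succeed
  parallelism: 2           # run at most 2 Pods at a time
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing one work item"]
      restartPolicy: Never
```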
Example:
Create the YAML file (it uses Perl to print π to 2000 decimal places):
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      restartPolicy: Never
Create the Job:
[root@apiserver ~]# kubectl create -f job.yaml
job.batch/pi created
Check the Pod (it shows the task has completed):
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pi-h294l 0/1 Completed 0 7s
Check the logs (π has been printed to 2000 digits):
[root@apiserver ~]# kubectl logs pi-h294l
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231725359408128481117450284102701938521105559644622948954930381964428810975665933446128475648233786783165271201909145648566923460348610454326648213393607260249141273724587006606315588174881520920962829254091715364367892590360011330530548820466521384146951941511609433057270365759591953092186117381932611793105118548074462379962749567351885752724891227938183011949129833673362440656643086021394946395224737190702179860943702770539217176293176752384674818467669405132000568127145263560827785771342757789609173637178721468440901224953430146549585371050792279689258923542019956112129021960864034418159813629774771309960518707211349999998372978049951059731732816096318595024459455346908302642522308253344685035261931188171010003137838752886587533208381420617177669147303598253490428755468731159562863882353787593751957781857780532171226806613001927876611195909216420198938095257201065485863278865936153381827968230301952035301852968995773622599413891249721775283479131515574857242454150695950829533116861727855889075098381754637464939319255060400927701671139009848824012858361603563707660104710181942955596198946767837449448255379774726847104047534646208046684259069491293313677028989152104752162056966024058038150193511253382430035587640247496473263914199272604269922796782354781636009341721641219924586315030286182974555706749838505494588586926995690927210797509302955321165344987202755960236480665499119881834797753566369807426542527862551818417574672890977772793800081647060016145249192173217214772350141441973568548161361157352552133475741849468438523323907394143334547762416862518983569485562099219222184272550254256887671790494601653466804988627232791786085784383827967976681454100953883786360950680064225125205117392984896084128488626945604241965285022210661186306744278622039194945047123713786960956364371917287467764657573962413890865832645995813390478027590
1
Introduction to CronJob
A CronJob creates Jobs on a time-based schedule. A CronJob object is like one line of a crontab (cron table) file: it periodically creates Job objects according to a schedule written in Cron format.
Schedule: all times defined in a CronJob's schedule are interpreted in the timezone of the master.
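For reference, the five Cron fields are minute, hour, day of month, month, and day of week; a sketch of some common schedules (the alternative values are illustrative):

```yaml
# Cron format: "minute hour day-of-month month day-of-week"
schedule: "*/1 * * * *"    # every minute
# schedule: "0 3 * * *"    # every day at 03:00 (master's timezone)
# schedule: "30 2 * * 0"   # every Sunday at 02:30
```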
Create the YAML file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # create a Job (and Pod) every minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
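To keep finished Jobs and Pods from piling up, the CronJob spec also has fields that limit how much history is kept and whether runs may overlap; a fragment with illustrative values (the field names are real CronJob spec fields):

```yaml
spec:
  successfulJobsHistoryLimit: 3   # keep at most 3 completed Jobs
  failedJobsHistoryLimit: 1       # keep at most 1 failed Job
  concurrencyPolicy: Forbid       # skip a run if the previous one is still active
```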
Check the results:
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1577785980-ncn2r 0/1 Completed 0 18s
[root@apiserver ~]# kubectl get job
NAME COMPLETIONS DURATION AGE
hello-1577785980 1/1 3s 26s
[root@apiserver ~]# kubectl get cronjobs
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
hello */1 * * * * False 0 41s 89s
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1577785980-ncn2r 0/1 Completed 0 48s
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1577785980-ncn2r 0/1 Completed 0 49s
[root@apiserver ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
hello-1577785980-ncn2r 0/1 Completed 0 52s
hello-1577786040-hkn2r 0/1 Pending 0 0s
hello-1577786040-hkn2r 0/1 Pending 0 0s
hello-1577786040-hkn2r 0/1 ContainerCreating 0 0s
hello-1577786040-hkn2r 0/1 ContainerCreating 0 1s
hello-1577786040-hkn2r 0/1 Completed 0 2s
hello-1577786040-hkn2r 0/1 Completed 0 3s
[root@apiserver ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-1577785980-ncn2r 0/1 Completed 0 3m2s
hello-1577786040-hkn2r 0/1 Completed 0 2m2s
hello-1577786100-xq9cr 0/1 Completed 0 62s
hello-1577786160-85dw4 0/1 ContainerCreating 0 2s
Note: be sure to delete the CronJob when you no longer need it (e.g. `kubectl delete cronjob hello`), because otherwise it will keep creating Jobs.