Copyright notice: knowledge exists to be shared! https://blog.csdn.net/weixin_36171533/article/details/82684464
StatefulSet: the replica controller for stateful applications
PetSet -> StatefulSet (renamed in Kubernetes 1.5)
1. Stable, unique network identifiers
2. Stable, persistent storage
3. Ordered, graceful deployment and scaling
4. Ordered, graceful deletion and termination
5. Ordered rolling updates
Three components:
headless service
StatefulSet
volumeClaimTemplate
Lab prerequisites:
master: 192.168.68.10
node1: 192.168.68.20
node2: 192.168.68.30
node3: 192.168.68.40
node3 preparation: add a hosts entry for node3 and sync it to the master and the other nodes
Install NFS on every node
The NFS directory layout:
[root@node3 /]# tree data
data
└── volumes
    ├── index.html
    ├── v1
    ├── v2
    │   └── index.html
    ├── v3
    ├── v4
    └── v5
Start NFS:
systemctl start nfs
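The post never shows the NFS export configuration itself; a minimal /etc/exports sketch for node3 covering the directories in the tree above might look like this (the 192.168.68.0/24 range is an assumption based on the lab IPs):

```
/data/volumes/v1  192.168.68.0/24(rw,no_root_squash)
/data/volumes/v2  192.168.68.0/24(rw,no_root_squash)
/data/volumes/v3  192.168.68.0/24(rw,no_root_squash)
/data/volumes/v4  192.168.68.0/24(rw,no_root_squash)
/data/volumes/v5  192.168.68.0/24(rw,no_root_squash)
```

After editing /etc/exports, apply it with exportfs -ar (or by restarting the nfs service as above).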
Check the existing PVs and PVCs:
[root@master configmap]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 1d
pv002 7Gi RWO,RWX Retain Bound default/mypvc 1d
pv003 8Gi RWO,RWX Retain Available 1d
pv004 10Gi RWO,RWX Retain Available 1d
pv005 12Gi RWO,RWX Retain Available 1d
[root@master configmap]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mypvc Bound pv002 7Gi RWO,RWX 1d
Delete the bound PVC:
kubectl get pvc
kubectl delete pvc/mypvc
kubectl get pv
[root@master configmap]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 1d
pv002 7Gi RWO,RWX Retain Released default/mypvc 1d
pv003 8Gi RWO,RWX Retain Available 1d
pv004 10Gi RWO,RWX Retain Available 1d
pv005 12Gi RWO,RWX Retain Available 1d
The "Released default/mypvc" status means the claim has been deleted, but with the Retain reclaim policy the PV still holds the old claim reference and data, so it cannot be bound again until it is cleaned up — here, by deleting and recreating the PVs.
Delete all PVs:
kubectl delete pv --all
[root@master configmap]# kubectl delete pv --all
persistentvolume "pv001" deleted
persistentvolume "pv002" deleted
persistentvolume "pv003" deleted
persistentvolume "pv004" deleted
persistentvolume "pv005" deleted
[root@master configmap]# kubectl get pv
No resources found.
Recreate the PVs
Create 5 PVs, one per exported NFS directory:
[root@master volumes]# cat pvs-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/volumes/v1
    server: node3
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/volumes/v2
    server: node3
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/volumes/v3
    server: node3
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/volumes/v4
    server: node3
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/volumes/v5
    server: node3
  accessModes: ["ReadWriteMany", "ReadWriteOnce"]
  capacity:
    storage: 5Gi
[root@master volumes]# kubectl apply -f pvs-demo.yaml
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Available 21s
pv002 5Gi RWO Retain Available 21s
pv003 5Gi RWO,RWX Retain Available 21s
pv004 5Gi RWO,RWX Retain Available 21s
pv005 5Gi RWO,RWX Retain Available 21s
The headless Service and StatefulSet manifest:
[root@master volumes]# cat stateful-demo-1.yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi
[root@master volumes]# kubectl apply -f stateful-demo-1.yaml
service/myapp created
statefulset.apps/myapp created
Check the status:
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 6s
myapp-1 1/1 Running 0 5s
myapp-2 1/1 Running 0 3s
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv002 5Gi RWO 24s
myappdata-myapp-1 Bound pv004 5Gi RWO,RWX 23s
myappdata-myapp-2 Bound pv001 5Gi RWO,RWX 21s
[root@master volumes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
myapp ClusterIP None <none> 80/TCP 1m
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Bound default/myappdata-myapp-2 15m
pv002 5Gi RWO Retain Bound default/myappdata-myapp-0 15m
pv003 5Gi RWO,RWX Retain Available 15m
pv004 5Gi RWO,RWX Retain Bound default/myappdata-myapp-1 15m
pv005 5Gi RWO,RWX Retain Available 15m
From the output above we can see:
the Pod names follow a fixed, predictable pattern
each replica automatically got its own 5Gi claim
each claim was automatically bound to a matching PV
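The generated claim names follow a fixed pattern. A tiny sketch of that pattern (pvc_name is a hypothetical helper for illustration, not a kubectl feature):

```shell
# Each replica's PVC is named <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>.
pvc_name() {
  local template="$1" sts="$2" ordinal="$3"
  echo "${template}-${sts}-${ordinal}"
}

pvc_name myappdata myapp 0   # myappdata-myapp-0, as seen in kubectl get pvc above
```

Because the name is deterministic, a recreated Pod with the same ordinal always finds and reattaches its old claim.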
[root@master volumes]# kubectl get sts
NAME DESIRED CURRENT AGE
myapp 3 3 15m
Deletion test:
[root@master volumes]# kubectl delete -f stateful-demo-1.yaml
service "myapp" deleted
statefulset.apps "myapp" deleted
Watching the deletion:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 21m
myapp-1 1/1 Running 0 20m
myapp-2 1/1 Running 0 20m
myapp-1 1/1 Terminating 0 21m
myapp-0 1/1 Terminating 0 21m
myapp-2 1/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-0 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-2 0/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
myapp-1 0/1 Terminating 0 21m
Create the set again and watch:
[root@master volumes]# kubectl apply -f stateful-demo-1.yaml
service/myapp created
statefulset.apps/myapp created
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 Pending 0 0s
myapp-0 0/1 ContainerCreating 0 0s
myapp-0 1/1 Running 0 1s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 Pending 0 0s
myapp-1 0/1 ContainerCreating 0 0s
myapp-1 1/1 Running 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 Pending 0 0s
myapp-2 0/1 ContainerCreating 0 0s
myapp-2 1/1 Running 0 2s
###################################
From this we can see: deletion proceeds in reverse order (2, 1, 0),
while creation proceeds in order (0, 1, 2).
No matter how often the set is recreated, each Pod binds to the same fixed volume.
###################################
###################################
StatefulSet also supports rolling updates.
###################################
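Rolling-update behavior is controlled by spec.updateStrategy in the StatefulSet. A sketch of the fields used later in this post (these values are also the Kubernetes defaults):

```yaml
# StatefulSet spec fragment; RollingUpdate with partition 0 is the default.
updateStrategy:
  type: RollingUpdate
  rollingUpdate:
    partition: 0   # only Pods with ordinal >= partition are updated
```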
Note: every Pod gets its own resolvable DNS name;
myapp-0, for example, can be resolved directly.
Verification:
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 5m
myapp-1 1/1 Running 0 5m
myapp-2 1/1 Running 0 5m
[root@master ~]# kubectl exec -it myapp-0 /bin/sh
/ # nslookup myapp-0.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-0.myapp.default.svc.cluster.local
Address 1: 10.244.2.65 myapp-0.myapp.default.svc.cluster.local
/ # nslookup myapp-1.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-1.myapp.default.svc.cluster.local
Address 1: 10.244.1.67 myapp-1.myapp.default.svc.cluster.local
/ # nslookup myapp-2.myapp.default.svc.cluster.local
nslookup: can't resolve '(null)': Name does not resolve
Name: myapp-2.myapp.default.svc.cluster.local
Address 1: 10.244.2.66 myapp-2.myapp.default.svc.cluster.local
Note: resolution must go through the headless Service:
myapp-0                      Pod name
myapp                        Service name
default.svc.cluster.local    namespace + cluster domain
Name format:
pod_name.service_name.ns_name.svc.cluster.local
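The naming rule above can be sketched as a tiny helper (pod_fqdn is hypothetical, for illustration; the cluster domain is assumed to be the default cluster.local):

```shell
# Stable DNS name of a StatefulSet Pod behind a headless Service:
#   <pod>.<service>.<namespace>.svc.<cluster domain>
pod_fqdn() {
  local pod="$1" svc="$2" ns="${3:-default}" domain="${4:-cluster.local}"
  echo "${pod}.${svc}.${ns}.svc.${domain}"
}

pod_fqdn myapp-0 myapp   # myapp-0.myapp.default.svc.cluster.local
```

This matches the names resolved with nslookup in the transcript above.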
#################
Scale-out test:
Scale the myapp StatefulSet to 5 replicas
[root@master volumes]# kubectl scale sts myapp --replicas=5
statefulset.apps/myapp scaled
Watch:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 14m
myapp-1 1/1 Running 0 14m
myapp-2 1/1 Running 0 14m
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 Pending 0 0s
myapp-3 0/1 ContainerCreating 0 0s
myapp-3 1/1 Running 0 1s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 Pending 0 0s
myapp-4 0/1 ContainerCreating 0 0s
myapp-4 1/1 Running 0 1s
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 15m
myapp-1 1/1 Running 0 15m
myapp-2 1/1 Running 0 15m
myapp-3 1/1 Running 0 1m
myapp-4 1/1 Running 0 1m
[root@master volumes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
myappdata-myapp-0 Bound pv002 5Gi RWO 39m
myappdata-myapp-1 Bound pv004 5Gi RWO,RWX 39m
myappdata-myapp-2 Bound pv001 5Gi RWO,RWX 39m
myappdata-myapp-3 Bound pv003 5Gi RWO,RWX 1m
myappdata-myapp-4 Bound pv005 5Gi RWO,RWX 1m
#################
Scale-in test:
[root@master volumes]# kubectl scale sts myapp --replicas=2
or equivalently:
[root@master volumes]# kubectl patch sts myapp -p '{"spec":{"replicas":2}}'
statefulset.apps/myapp patched
Watching shows that termination happens in reverse ordinal order:
[root@master ~]# kubectl get pods -w
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 17m
myapp-1 1/1 Running 0 17m
myapp-2 1/1 Running 0 17m
myapp-3 1/1 Running 0 2m
myapp-4 1/1 Running 0 2m
myapp-4 1/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-4 0/1 Terminating 0 3m
myapp-3 1/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-3 0/1 Terminating 0 3m
myapp-2 1/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
myapp-2 0/1 Terminating 0 18m
[root@master volumes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myapp-0 1/1 Running 0 19m
myapp-1 1/1 Running 0 19m
[root@master volumes]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
myapp ClusterIP None <none> 80/TCP 19m
[root@master volumes]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv001 5Gi RWO,RWX Retain Bound default/myappdata-myapp-2 43m
pv002 5Gi RWO Retain Bound default/myappdata-myapp-0 43m
pv003 5Gi RWO,RWX Retain Bound default/myappdata-myapp-3 43m
pv004 5Gi RWO,RWX Retain Bound default/myappdata-myapp-1 43m
pv005 5Gi RWO,RWX Retain Bound default/myappdata-myapp-4 43m
#################
Update test:
Canary update:
Update part of the replicas first; if the new version behaves well, manually continue the rollout for the rest.
For example, with five replicas myapp-0 through myapp-4,
to update only the highest-ordinal Pods, set a partition.
partition: N
only Pods with ordinal >= N are updated
partition: 0 updates all Pods
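The partition rule can be expressed as a one-line predicate (a hypothetical sketch, not kubectl code):

```shell
# A Pod is touched by the rolling update only if its ordinal >= partition.
updates_pod() {
  local ordinal="$1" partition="$2"
  [ "$ordinal" -ge "$partition" ]
}

updates_pod 4 4 && echo updated    # with partition=4, myapp-4 is updated
updates_pod 3 4 || echo skipped    # ...while myapp-3 is left on the old version
```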
Scale up to 5 Pods and set the update partition:
[root@master volumes]# kubectl patch sts myapp -p '{"spec":{"replicas":5}}'
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
Check the update strategy:
[root@master volumes]# kubectl describe sts myapp
Partition: 4
Start updating to the v2 image:
[root@master volumes]# kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
statefulset.apps/myapp image updated
[root@master volumes]# kubectl get sts -o wide    # the controller's template now shows v2
NAME DESIRED CURRENT AGE CONTAINERS IMAGES
myapp 5 5 37m myapp ikubernetes/myapp:v2
kubectl describe pods myapp-0 through myapp-3 still shows:
Image: ikubernetes/myapp:v1
kubectl describe pods myapp-4 shows:
Image: ikubernetes/myapp:v2
To roll the update out to all replicas, lower the partition to 0:
kubectl patch sts myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
Now all replicas are updated to v2.