Dynamic Provisioning of NFS Storage

What Is Dynamic Provisioning

Kubernetes introduced the StorageClass resource in version 1.4. It lets administrators define storage as classes with distinct characteristics rather than as concrete PVs. A user requests storage through a PVC that targets a class; the request is either matched against PVs the administrator created in advance, or a PV is created dynamically on demand, eliminating the need to pre-create PVs.

* With static provisioning, a PVC is only satisfied if its requested capacity and access mode exactly match a pre-provisioned PV; dynamic provisioning has no such constraint.
* Administrators no longer need to pre-create large numbers of PVs as storage resources.

This chapter uses an NFS file system, so deploy an NFS server first.
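The provisioner configured later in this chapter expects an NFS export reachable at 192.168.3.201:/nfs-data/nfs. A minimal `/etc/exports` entry for that layout might look like the sketch below; the export options are an assumption, so tighten them to match your security requirements:

```
/nfs-data/nfs *(rw,sync,no_root_squash)
```

After editing `/etc/exports`, reload the export table (for example with `exportfs -rav`) so the clients can mount the share.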
Creating Dynamic Provisioning with an NFS File System

PV support for a storage system is implemented through a volume plugin. The official in-tree plugins do not support dynamic provisioning for NFS, but a third-party provisioner can be used to provide it.
1. Download and create the StorageClass
[root@k8smaster001 storageclass]# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/class.yaml -O storageclass-nfs.yml && ls
storageclass-nfs.yml
Edit the StorageClass manifest:
apiVersion: storage.k8s.io/v1
kind: StorageClass      # resource type
metadata:
  name: nfs-client      # name; PVCs reference this class by name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # dynamic provisioning plugin
parameters:
  archiveOnDelete: "false"  # whether to archive data on delete: "false" = do not archive, "true" = archive
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prod-nfs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # must match the PROVISIONER_NAME env var in the provisioner Deployment
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Retain       # reclaim policy: "Retain", "Recycle", or "Delete"; dynamically provisioned PVs default to "Delete"
allowVolumeExpansion: true  # allow PVCs of this class to be expanded
[root@k8smaster001 storageclass]# kubectl apply -f storageclass-nfs.yml
storageclass.storage.k8s.io/nfs-client created
[root@k8smaster001 storageclass]# kubectl apply -f prod-storageclass-nfs.yml
storageclass.storage.k8s.io/prod-nfs created
[root@k8smaster001 storageclass]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 10s
prod-nfs k8s-sigs.io/nfs-subdir-external-provisioner Retain Immediate true 40s
Note the differences between the two classes:

- RECLAIMPOLICY: what happens to a PV after its Pod or PVC is deleted — whether the PV is deleted or retained. The default is Delete.
- VOLUMEBINDINGMODE: in Immediate mode the PVC is bound to a PV right away, without waiting for a consuming Pod to be scheduled or caring which node it runs on. In WaitForFirstConsumer mode, binding is delayed until a Pod using the PVC has been scheduled.
- ALLOWVOLUMEEXPANSION: whether PVCs of this class may be expanded.
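For contrast, a class using WaitForFirstConsumer would look like the sketch below (the name `topo-nfs` is hypothetical, for illustration only). This mode mainly matters for topology-constrained backends; the NFS provisioner in this chapter works fine with the default Immediate mode:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topo-nfs                            # hypothetical name
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
volumeBindingMode: WaitForFirstConsumer     # bind only after a consuming Pod is scheduled
```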
2. Download and create the RBAC resources

Because the provisioner creates PVs automatically through the kube-apiserver, its ServiceAccount must be granted the necessary permissions.
[root@k8smaster001 storageclass]# wget https://raw.githubusercontent.com/kubernetes-sigs/nfs-subdir-external-provisioner/master/deploy/rbac.yaml -O storageclass-nfs-rbac.yaml
storageclass-nfs-rbac.yaml
Edit the RBAC manifest for the StorageClass:
[root@k8smaster001 storageclass]# cat storageclass-nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8smaster001 storageclass]# kubectl apply -f storageclass-nfs-rbac.yaml
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
3. Create the Deployment for the dynamic provisioner

Create a Deployment that runs the provisioner, which watches for PVCs and automatically creates the corresponding PVs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/pylixm/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner  # must match the StorageClass's provisioner field
            - name: NFS_SERVER
              value: 192.168.3.201
            - name: NFS_PATH
              value: /nfs-data/nfs/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.3.201
            path: /nfs-data/nfs/
[root@k8smaster001 storageclass]# kubectl apply -f deploy-nfs-client-provisioner.yml
deployment.apps/nfs-client-provisioner created
[root@k8smaster001 storageclass]# kubectl get pods |grep nfs-client-provisioner
nfs-client-provisioner-94f889d79-ppc4k 1/1 Running 0 73s
Optionally, make a StorageClass the cluster default by annotating it, as described in the official documentation: https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/change-default-storage-class/
root@linuxnbg-1:/data/nfs# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client k8s-sigs.io/nfs-subdir-external-provisioner Retain Immediate true 75s
root@linuxnbg-1:/data/nfs# kubectl patch storageclass nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nfs-client patched
root@linuxnbg-1:/data/nfs# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client (default) k8s-sigs.io/nfs-subdir-external-provisioner Retain Immediate true 2m37s
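Instead of patching the live object, the same default-class marker can be set declaratively in the StorageClass manifest. A sketch, reusing the nfs-client class from above:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  # marks this class as the cluster default
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"
```

Only one StorageClass should carry this annotation at a time; PVCs that omit `storageClassName` are then served by it.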
Testing Dynamic Provisioning

Manually create a PVC and bind an automatically created PV
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs-client
[root@k8smaster001 storageclass]# kubectl apply -f pvc.yaml
persistentvolumeclaim/test-claim created
[root@k8smaster001 storageclass]# kubectl get pvc|grep test-claim
test-claim Bound pvc-8b969249-4565-4bdb-91da-752d3ed26ba0 1Mi RWX nfs-client <unset> 65s
[root@k8smaster001 storageclass]# kubectl get pv|grep test-claim
pvc-8b969249-4565-4bdb-91da-752d3ed26ba0 1Mi RWX Delete Bound default/test-claim nfs-client <unset> 26s
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
[root@k8smaster001 storageclass]# kubectl apply -f test-pod.yaml
pod/test-pod created
[root@k8smaster001 storageclass]# kubectl get pod test-pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pod 0/1 Completed 0 46s 10.244.35.49 k8sworker002 <none> <none>
[root@k8smaster001 storageclass]# ls /nfs-data/nfs/default-test-claim-pvc-8b969249-4565-4bdb-91da-752d3ed26ba0/SUCCESS
/nfs-data/nfs/default-test-claim-pvc-8b969249-4565-4bdb-91da-752d3ed26ba0/SUCCESS
Automatically create PVCs bound to automatically created PVs

volumeClaimTemplates is a StatefulSet field that defines a PersistentVolumeClaim (PVC) template for each Pod. With it, a PVC is created and bound automatically for every Pod in the StatefulSet.

Up to this point we still had to create the PVC manually; with volumeClaimTemplates, the PVC itself is created automatically and bound to a dynamically created PV.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      imagePullSecrets:
        - name: huoban-harbor
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "prod-nfs"  # either class created above works; prod-nfs allows expansion and retains data when the PV is deleted
        resources:
          requests:
            storage: 1Gi
[root@k8smaster001 storageclass]# kubectl apply -f nginx-sc.yaml
service/nginx created
statefulset.apps/web created
[root@k8smaster001 storageclass]# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 3m31s
web-1 1/1 Running 0 3m5s
[root@k8smaster001 storageclass]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS VOLUMEATTRIBUTESCLASS REASON AGE
pvc-278623d3-8efb-4a06-ae30-0a3e701e697f 1Gi RWO Retain Bound default/www-web-0 prod-nfs <unset> 4m7s
pvc-bd16d9df-6fbb-4b50-a5f9-c0dceea29fa0 1Gi RWO Retain Bound default/www-web-1 prod-nfs <unset> 3m46s
[root@k8smaster001 storageclass]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
www-web-0 Bound pvc-278623d3-8efb-4a06-ae30-0a3e701e697f 1Gi RWO prod-nfs <unset> 4m10s
www-web-1 Bound pvc-bd16d9df-6fbb-4b50-a5f9-c0dceea29fa0 1Gi RWO prod-nfs <unset> 3m48s
[root@k8smaster001 storageclass]# ls /nfs-data/nfs/
default-www-web-0-pvc-70da74f5-2e9b-4d38-9438-99ed9dd0e141 default-www-web-1-pvc-d1711175-cee7-4a61-93a3-b42b679accd0
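Because prod-nfs sets allowVolumeExpansion: true, the PVCs created above can be grown in place by raising spec.resources.requests.storage. A sketch of the edited PVC spec follows; the 2Gi size is only an example, and since these PVCs are managed by the StatefulSet, the change is typically made against the live object (for example with `kubectl edit pvc www-web-0`) rather than by re-applying a file:

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: www-web-0              # one of the PVCs created by the StatefulSet above
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: prod-nfs
  resources:
    requests:
      storage: 2Gi             # raised from 1Gi; triggers expansion
```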