Today we'll look at StorageClass, the resource object for persistent storage. With a StorageClass, an administrator can classify storage into tiers such as fast storage and slow storage. From the StorageClass description, Kubernetes knows the concrete characteristics of each kind of storage, so applications can request storage that matches their needs.
We'll again use NFS as the backing store. Creating a StorageClass involves roughly these steps:
1) Create a ServiceAccount to control the permissions the NFS provisioner has when running in the Kubernetes cluster.
2) Create the StorageClass, which handles PVCs by invoking the NFS provisioner to do the preparatory work and binds the resulting PV to the PVC.
3) Create the NFS provisioner itself, which does two things: it creates a mount point (volume) under the NFS export, and it creates a PV associated with that mount point.
Let's walk through the concrete steps.
Create the ServiceAccount and its RBAC permissions (rbac.yaml):
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```
Apply it and verify:
```shell
kubectl apply -f rbac.yaml
```

```
[root@k8s-master ~]# kubectl get sa
NAME                     SECRETS   AGE
default                  1         14d
nfs-client-provisioner   1         29m
```
Create the StorageClass for the NFS storage (nfs-StorageClass.yaml):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage   # must match the PROVISIONER_NAME env var in the provisioner's config
reclaimPolicy: Retain
parameters:
  archiveOnDelete: "false"
allowVolumeExpansion: true   # allow volume expansion
```
Apply it and verify:
```shell
kubectl apply -f nfs-StorageClass.yaml
```

```
[root@k8s-master ~]# kubectl get sc
NAME                  PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
managed-nfs-storage   nfs-storage   Delete          Immediate           false                  24m
```
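To confirm that dynamic provisioning works end to end, you can submit a standalone test PVC against the new class. This is just an illustrative sketch: the `test-claim` name and the 1Mi size are arbitrary, and the provisioner Deployment (created below) must already be running before the claim can bind.

```yaml
# test-pvc.yaml -- hypothetical test claim; name and size are placeholders
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: managed-nfs-storage   # the class created above
```

After `kubectl apply -f test-pvc.yaml`, `kubectl get pvc` should show `test-claim` in the `Bound` state, with an automatically created PV and a matching subdirectory under the NFS export.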
Create the NFS provisioner. Before doing so, install the NFS client packages on every node:
```shell
# install on CentOS
yum install nfs-utils -y
# install on Ubuntu
apt install nfs-kernel-server   # apt automatically pulls in nfs-common, rpcbind and ~13 other packages
```
With those dependencies in place, deploy the NFS provisioner (nfs-provisioner.yaml):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage      # provisioner name; must match the provisioner in nfs-StorageClass.yaml
            - name: NFS_SERVER
              value: 192.168.137.10   # NFS server IP
            - name: NFS_PATH
              value: /data/nas3       # NFS export path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.137.10    # NFS server IP
            path: /data/nas3          # NFS export path
```
Create the Deployment:
```shell
kubectl apply -f nfs-provisioner.yaml
```
Test it out
In a workload that needs a PV (a StatefulSet — volumeClaimTemplates is only supported there), add the following at the same level as spec.template:
```yaml
volumeClaimTemplates:
  - metadata:
      name: nfs-data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 20Gi
      storageClassName: managed-nfs-storage
```
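For context, here is a minimal sketch of where that block sits inside a StatefulSet. The `web`/`nginx` names and the mount path are placeholders, not part of the original setup:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                # placeholder name
spec:
  serviceName: web
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx                         # placeholder image
          volumeMounts:
            - name: nfs-data                   # must match the claim template name below
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:                        # same level as spec.template
    - metadata:
        name: nfs-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: managed-nfs-storage
```

Each StatefulSet replica gets its own PVC stamped out from the template, named after the template and the pod ordinal (here, `nfs-data-web-0` for the first pod), and the provisioner creates a dedicated PV for each one.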
If the PVC stays in Pending after deployment, check it with kubectl describe. If you see a message like:
```
persistentvolume-controller  waiting for a volume to be created, either by ext...
```
find the file /etc/kubernetes/manifests/kube-apiserver.yaml and edit it:
```yaml
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key   # find this line, then add the one below it
    - --feature-gates=RemoveSelfLink=false
```
Then wait for the apiserver to restart automatically (kube-apiserver is a static pod, so the kubelet recreates it once the manifest changes) and check again; the PVC should now bind normally. Note that this workaround only works on Kubernetes versions below 1.21.0.