vSphere 7.0, Cloud Native Storage, CSI and offline volume extend

Another new feature added to the vSphere CSI driver in the vSphere 7.0 release is the ability to offline extend / grow a Kubernetes Persistent Volume (PV). This requires a special directive to be added to the StorageClass and, as per the title, the operation must be done offline, while the PV is detached from any Pod. Let's take a closer look at the steps involved.

New CSI component – CSI Resizer

To enable resizing operations, a new component has been added to the vSphere CSI Controller called csi-resizer. We can examine the csi-resizer and the other components associated with the CSI driver as follows. First, list all the Pods currently running in the kube-system namespace:

$ kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-7988585f4-9dvgs                   1/1     Running   0          70m
coredns-7988585f4-kfwd7                   1/1     Running   0          70m
etcd-k8s2-master7-01                      1/1     Running   0          84m
kube-apiserver-k8s2-master7-01            1/1     Running   0          84m
kube-controller-manager-k8s2-master7-01   1/1     Running   0          83m
kube-flannel-ds-amd64-44wmm               1/1     Running   0          81m
kube-flannel-ds-amd64-68frg               1/1     Running   0          73m
kube-flannel-ds-amd64-8xk27               1/1     Running   0          75m
kube-flannel-ds-amd64-k5r6s               1/1     Running   0          78m
kube-flannel-ds-amd64-wwr2m               1/1     Running   0          82m
kube-flannel-ds-amd64-xcfxw               1/1     Running   0          77m
kube-proxy-5s4gr                          1/1     Running   0          73m
kube-proxy-8qh27                          1/1     Running   0          84m
kube-proxy-8tj2c                          1/1     Running   0          75m
kube-proxy-jzqgg                          1/1     Running   0          77m
kube-proxy-r9jk9                          1/1     Running   0          78m
kube-proxy-xjvth                          1/1     Running   0          81m
kube-scheduler-k8s2-master7-01            1/1     Running   0          83m
vsphere-cloud-controller-manager-4bsx4    1/1     Running   0          72m
vsphere-csi-controller-5864fc6f8b-2qmfb   6/6     Running   0          65m
vsphere-csi-node-k6mrt                    3/3     Running   0          65m
vsphere-csi-node-kpdg4                    3/3     Running   0          65m
vsphere-csi-node-pr9f4                    3/3     Running   0          65m
vsphere-csi-node-qh967                    3/3     Running   0          65m
vsphere-csi-node-qsbc8                    3/3     Running   0          65m
vsphere-csi-node-wsnxh                    3/3     Running   0          65m

Note that there are 6 containers associated with my vSphere CSI controller Pod. While there is a direct way of listing all the containers in a Pod, the command is hard to remember, so I simply ask for the logs of the Pod without naming a container; since logs are displayed on a per-container basis, the resulting error message conveniently returns the list of containers for me.

$ kubectl logs vsphere-csi-controller-5864fc6f8b-2qmfb -n kube-system
Error from server (BadRequest): a container name must be specified for pod vsphere-csi-controller-5864fc6f8b-2qmfb, \
choose one of: [csi-attacher csi-resizer vsphere-csi-controller liveness-probe vsphere-syncer csi-provisioner]
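
For reference, the more direct (if harder to remember) way to list the containers in a Pod is a JSONPath query against the Pod spec:

$ kubectl get pod vsphere-csi-controller-5864fc6f8b-2qmfb -n kube-system \
    -o jsonpath='{.spec.containers[*].name}'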

And if I want to see logs from the csi-resizer container, I can simply run this command:

$ kubectl logs vsphere-csi-controller-5864fc6f8b-2qmfb -n kube-system csi-resizer
I0421 09:51:51.378212       1 main.go:61] Version : v0.3.0-0-g150071d
I0421 09:51:51.380618       1 connection.go:151] Connecting to unix:///csi/csi.sock
I0421 09:51:58.538323       1 common.go:111] Probing CSI driver for readiness
I0421 09:51:58.543451       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/external-resizer-csi-vsphere-vmware-com...
I0421 09:51:58.564086       1 leaderelection.go:227] successfully acquired lease kube-system/external-resizer-csi-vsphere-vmware-com
I0421 09:51:58.564323       1 leader_election.go:172] new leader detected, current leader: vsphere-csi-controller-5864fc6f8b-2qmfb
I0421 09:51:58.564862       1 leader_election.go:165] became leader, starting
I0421 09:51:58.564900       1 controller.go:189] Starting external resizer csi.vsphere.vmware.com
I0421 09:51:58.565075       1 reflector.go:123] Starting reflector *v1.PersistentVolumeClaim (10m0s) from k8s.io/client-go/informers/factory.go:133
I0421 09:51:58.565092       1 reflector.go:161] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:133
I0421 09:51:58.565359       1 reflector.go:123] Starting reflector *v1.PersistentVolume (10m0s) from k8s.io/client-go/informers/factory.go:133
I0421 09:51:58.565369       1 reflector.go:161] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:133
I0421 09:51:58.665054       1 shared_informer.go:123] caches populated

Let’s proceed with the offline volume grow test and we will return to the csi-resizer logs from time to time.

StorageClass allowVolumeExpansion

To facilitate volume extend / grow, a new entry is required in the StorageClass manifest: allowVolumeExpansion, which must be set to true. If it is not set and you attempt to grow the volume by applying an updated PVC manifest with a larger volume size, you will get the following error:

persistentvolumeclaims "name-of-pvc" is forbidden: only dynamically provisioned pvc can be resized \
and the storageclass that provisions the pvc must support resize

Here is a sample StorageClass which includes the new entry:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsan-block-sc
provisioner: csi.vsphere.vmware.com
allowVolumeExpansion: true
parameters:
  storagepolicyname: "RAID1"
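
If the StorageClass already exists, allowVolumeExpansion is one of the few StorageClass fields that can be changed in place, so there is no need to recreate it. A quick way to enable it (a minimal sketch, assuming the StorageClass above) is:

$ kubectl patch storageclass vsan-block-sc -p '{"allowVolumeExpansion": true}'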

Detach Volume from Pod

As mentioned, this is an offline grow method. Therefore the persistent volume must not be attached to a Pod if the grow operation is to succeed. If you attempt to grow a volume that is still attached and mounted to a Pod, the following errors are shown in the csi-resizer logs:

$ kubectl logs vsphere-csi-controller-5864fc6f8b-2qmfb -n kube-system csi-resizer
.
.
I0421 10:29:18.671919       1 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", \
Namespace:"default", Name:"block-pvc", UID:"0856ea65-6252-429c-9444-2f315fe13d3a", APIVersion:"v1", \
ResourceVersion:"10204", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is \
resizing volume pvc-0856ea65-6252-429c-9444-2f315fe13d3a
E0421 10:29:19.034781       1 controller.go:360] Resize volume "pvc-0856ea65-6252-429c-9444-2f315fe13d3a" \
by resizer "csi.vsphere.vmware.com" failed: rpc error: code = FailedPrecondition desc = failed \
to expand volume: "0c561180-b0da-4dc3-90c4-2227811f4080" to size: 2048. Volume is attached to node \
"42051f46-6ac5-3b3a-502b-8242b0325b9d". Only offline volume expansion is supported
.

Unfortunately, there is not yet a way to detect with 100% reliability whether a PV is attached to a Pod, so this version of the driver has no validation hook to stop you from trying to grow a volume that is still attached. If the volume is still attached to a Pod, Kubernetes simply keeps retrying the resize in the background until the volume is detached from all Pods, at which point it can finally be extended.
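
One way to watch those background retries is to follow the events recorded against the PVC (a sketch, using the PVC name from this walkthrough):

$ kubectl get events --field-selector involvedObject.name=block-pvc -w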

In my example, I simply deleted the Pod before attempting the extend / grow operation. Note that this does not impact the contents of the PV; the data on the volume is preserved during the extend / grow operation.
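
With the Pod definition used later in this post, that is simply:

$ kubectl delete pod block-pod-a
pod "block-pod-a" deleted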

Initiate the extend / grow operation

To initiate the grow / extend operation, the step is very simple: adjust the requested storage size in the PVC manifest to the new desired size. Here is my sample YAML from the initial setup, followed by the same manifest with the new requested size, going from 1Gi to 5Gi.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: vsan-block-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  storageClassName: vsan-block-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
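
As an alternative to editing the manifest, the same resize can be requested with a one-line patch against the live PVC (equivalent in effect to applying the updated YAML):

$ kubectl patch pvc block-pvc -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'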

Apply the new PVC and the size of the Persistent Volume will change on the back-end storage.

$ kubectl apply -f block-pvc.yaml
persistentvolumeclaim/block-pvc configured


$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
block-pvc   Bound    pvc-0856ea65-6252-429c-9444-2f315fe13d3a   1Gi        RWO            vsan-block-sc   34m


$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
pvc-0856ea65-6252-429c-9444-2f315fe13d3a   5Gi        RWO            Delete           Bound    default/block-pvc   vsan-block-sc            34m

Reattach Volume to Pod

Interestingly, the PVC is still showing 1Gi. This value will be updated as soon as the PV is reattached to a Pod, since the filesystem expansion is only completed when the volume is attached to a node. Let's do that next. We can even open a shell on the busybox container and examine the size of the volume from there to make sure it has indeed grown.
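
Before reattaching, note that Kubernetes records the outstanding work as a FileSystemResizePending condition in the PVC's status, which can be checked with a JSONPath query (a quick sketch; exact conditions may vary by Kubernetes version):

$ kubectl get pvc block-pvc -o jsonpath='{.status.conditions[*].type}'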

$ kubectl apply -f block-pod-a.yaml
pod/block-pod-a created


$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
block-pvc   Bound    pvc-0856ea65-6252-429c-9444-2f315fe13d3a   5Gi        RWO            vsan-block-sc   36m


$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS    REASON   AGE
pvc-0856ea65-6252-429c-9444-2f315fe13d3a   5Gi        RWO            Delete           Bound    default/block-pvc   vsan-block-sc            36m


$ kubectl exec -it block-pod-a -- /bin/sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  58.0G      2.8G     55.2G   5% /
tmpfs                    64.0M         0     64.0M   0% /dev
tmpfs                    15.7G         0     15.7G   0% /sys/fs/cgroup
/dev/sda1                58.0G      2.8G     55.2G   5% /dev/termination-log
/dev/sdb                  4.9G      4.0M      4.7G   0% /mnt/volume1
/dev/sda1                58.0G      2.8G     55.2G   5% /etc/resolv.conf
/dev/sda1                58.0G      2.8G     55.2G   5% /etc/hostname
/dev/sda1                58.0G      2.8G     55.2G   5% /etc/hosts
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                    15.7G     12.0K     15.7G   0% /tmp/secrets/kubernetes.io/serviceaccount
tmpfs                    15.7G         0     15.7G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                    64.0M         0     64.0M   0% /proc/sched_debug
tmpfs                    15.7G         0     15.7G   0% /proc/scsi
tmpfs                    15.7G         0     15.7G   0% /sys/firmware
/ #

And now we can clearly see that the volume has grown. Events related to the grow operation are also clearly visible in the vSphere UI task list.

The PV size is also reflected in the CNS section of the vSphere UI.

So, that is another nice new feature of the vSphere CSI driver in vSphere 7.0. To recap, the other new CSI features include:

  1. CSI Interoperability with vSAN File Services to facilitate the dynamic provisioning of read-write-many (RWX) persistent volumes as file shares.
  2. CSI support for Virtual Volumes (vVols)
  3. CSI support for VM Encryption (VMcrypt)
  4. CSI support for vSphere with Kubernetes, formerly known as Project Pacific
