Static Persistent Volumes and Cloud Native Storage


Recently I was asked if “statically” provisioned persistent volumes (PVs) in Kubernetes would be handled by Cloud Native Storage (CNS) in vSphere 7.0 and, in turn, appear in the vSphere client, just like a dynamically provisioned persistent volume. The short answer is yes, this is supported and works. The details on how to do this are shown in this post.

I am going to use a file-based (NFS) volume for this “static” PV test. Note that there are two ways of provisioning a static file-based volume. The first is to use the in-tree NFS driver. Volumes provisioned this way are not considered CSI persistent volumes, and so will not appear in CNS. The second is to use the vSphere CSI driver, which also has the ability to bubble the volume up to CNS and the vSphere client UI. Let’s look at both options.

In-tree NFS driver (no CNS interop)

Here is a set of manifest files that can be used to statically provision an NFS-based persistent volume using the in-tree NFS driver and have it mounted in a Pod, and to compare the behaviour with the out-of-tree CSI approach shown later. These are YAML files for a Pod, a PVC and a PV. The Pod runs busybox and mounts the NFS volume at /nfs.

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod
  namespace: nfs-static
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: nfs-vol
      mountPath: "/nfs"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-client-pvc


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-pvc
  namespace: nfs-static
spec:
  storageClassName: nfs-client-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi


apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-pv
spec:
  storageClassName: nfs-client-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: "10.27.51.214"
    path: "/static-pv-test"

Of interest here is the storageClassName entry in both the PV and the PVC. The matching names simply form the binding between the PVC and the PV; no actual StorageClass object needs to exist. In the PersistentVolume YAML, you can also see the in-tree NFS references, with both the server and the path to the volume.
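
If you want to try this out, save the three manifests above to files (the filenames below are just placeholders), create the nfs-static namespace referenced in the Pod and PVC, and apply the manifests. The PVC should report a STATUS of Bound, with nfs-client-pv shown in the VOLUME column.

$ kubectl create namespace nfs-static
$ kubectl apply -f nfs-client-pv.yaml
$ kubectl apply -f nfs-client-pvc.yaml
$ kubectl apply -f nfs-client-pod.yaml
$ kubectl get pvc,pv,pods -n nfs-static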

However, after creating this volume, it does not appear in the vSphere UI. To have statically provisioned volumes show up in CNS, we must use the CSI driver to attach and mount them.

Out-of-tree CSI driver with CNS

To begin, we need to find the file share that we wish to mount in our Pod. I am going to use an existing read-write-many (RWX) file share created on vSAN File Services. Note how the file share is represented with a folder icon:

[Screenshot: vSAN File Services view in the vSphere client, showing the file share with a folder icon]

From this view, I can extract the UUID of the file share. This UUID is used to reference the share, rather than the server and path details used by the in-tree NFS driver. Here are the manifest files used to attach the statically provisioned file share to a Pod, and also have it appear in CNS.

apiVersion: v1
kind: Pod
metadata:
  name: nfs-client-pod-csi
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    volumeMounts:
    - name: nfs-vol-csi
      mountPath: "/nfs"
    command: [ "sleep", "1000000" ]
  volumes:
    - name: nfs-vol-csi
      persistentVolumeClaim:
        claimName: static-pvc-csi


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc-csi
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      static-pv-label-key: static-pv-label-value
  storageClassName: ""


apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-csi
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
  labels:
    "static-pv-label-key": "static-pv-label-value"
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: "csi.vsphere.vmware.com"
    fsType: nfs4
    volumeAttributes:
      type: "vSphere CNS File Volume"
     "volumeHandle": "file:26ca8a57-2ec1-46c9-9baf-06d409abb293"

There is not much difference between the Pod manifest used for the CSI approach and the one used for the in-tree NFS driver, apart from a few name changes.

The PersistentVolumeClaim manifest is a bit different, in so far as it now uses a selector with matchLabels, rather than a storageClassName, to tie it to the PersistentVolume. This is just an alternative way of binding the PVC to the PV.

The PersistentVolume manifest is quite a bit different when it comes to the spec. There are new metadata annotations and labels to bind it to the PVC, and the in-tree nfs section has been replaced with an out-of-tree csi section. Here we can see a reference to the CSI driver, the filesystem type, some volume attributes and a volume handle. The volumeHandle matches the UUID we retrieved from vSAN File Services earlier.
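
Note that all of this assumes the vSphere CSI driver is deployed in the Kubernetes cluster. If in doubt, a quick sanity check looks something like this (the namespace in which the driver pods run can differ between driver releases, so adjust as needed):

$ kubectl get csidrivers
$ kubectl get pods -n kube-system | grep vsphere-csi

The first command should list a CSIDriver object named csi.vsphere.vmware.com, and the second should show the vsphere-csi-controller and vsphere-csi-node pods in a Running state.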

If we now go ahead and deploy this application (PV, PVC and Pod), we can see the volume appear in CNS.
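
As before, the manifests can simply be applied with kubectl (again, the filenames are placeholders). Note that since these manifests do not specify a namespace, the objects are created in the default namespace.

$ kubectl apply -f static-pv-csi.yaml
$ kubectl apply -f static-pvc-csi.yaml
$ kubectl apply -f nfs-client-pod-csi.yaml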

$ kubectl get pvc
NAME             STATUS   VOLUME          CAPACITY   ACCESS MODES   STORAGECLASS   AGE
static-pvc-csi   Bound    static-pv-csi   1Gi        RWX                           11m


$ kubectl get pv
NAME            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
static-pv-csi   1Gi        RWX            Delete           Bound    default/static-pvc-csi                           11m


$ kubectl get pods
NAME                 READY   STATUS    RESTARTS   AGE
nfs-client-pod-csi   1/1     Running   0          58s
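
We can also read the CSI source back from the PersistentVolume object to confirm that the driver and volume handle match what was specified in the manifest, and thus the file share UUID retrieved earlier:

$ kubectl get pv static-pv-csi -o jsonpath='{.spec.csi.driver}{"\n"}{.spec.csi.volumeHandle}{"\n"}'
csi.vsphere.vmware.com
file:26ca8a57-2ec1-46c9-9baf-06d409abb293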

Let’s revisit the vSAN File Services file share view. Notice how the volume has changed from a file share to a container volume, represented by a disk icon rather than a folder icon.

[Screenshot: vSAN File Services view, where the share now appears as a container volume with a disk icon]

And we can also see this container volume in the Container view.

[Screenshot: the same volume shown in the Container Volumes view in Cloud Native Storage]

Click on the details view to see more information about the PV. CNS is now providing detailed information about the statically provisioned PV.

[Screenshot: container volume details view, showing information about the statically provisioned PV]

To conclude, statically provisioned NFS volumes are fully supported by CNS if the CSI driver, rather than the in-tree NFS driver, is used to provision them. Do note that the CSI driver's scope is a single vCenter server. Thus, at this time, there is no way to make a statically provisioned NFS-based persistent volume available to a Kubernetes cluster that resides on different vSphere infrastructure managed by a different vCenter server: the volumeHandle referenced in the PersistentVolume manifest would not be known to CNS on that other vCenter. If there is a requirement to do this, an alternative is to use the in-tree NFS driver. If you do have a requirement to have this cross-mount scenario handled by CNS-CSI, please let me know. We are always interested in learning more about these use cases.

