A first look at vSphere with Kubernetes in action


In my previous post on VCF 4.0, we looked at the steps involved in deploying vSphere with Kubernetes in a Workload Domain (WLD). When we completed that step, we had rolled out the Supervisor Control Plane VMs and installed the Spherelet components, which allow our ESXi hosts to behave as Kubernetes worker nodes. Let’s now take a closer look at that configuration, and I will show you a few simple Kubernetes operations to get you started on the Supervisor Cluster in vSphere with Kubernetes.

Disclaimer: “Like my earlier posts, I want to be clear: this post is based on a pre-GA version of vSphere with Kubernetes. While the assumption is that not much should change between the time of writing and when the product becomes generally available, I want readers to be aware that feature behaviour and the user interface could still change before then.”

Supervisor Cluster Overview

Let’s begin with a look at the inventory of my Supervisor Cluster. I have my 3 physical ESXi hosts, which now behave as my Kubernetes worker nodes, and my 3 control plane virtual machines running the Kubernetes API server and other core K8s components. I also have my NSX-T Edge cluster, deployed to my WLD via VCF 4.0 SDDC Manager.

[Screenshot: Supervisor Cluster inventory in the vSphere Client]

Another interesting way to view this deployment is via the Kubernetes CLI command, kubectl. Let’s do that next. First, we need to find the Load Balancer IP address assigned to my Supervisor Cluster. To find that, navigate to vSphere Client > Workload Management > Clusters. Here we will find the Control Plane Node IP address, highlighted below. This address has been allocated from the Ingress range that was configured during the NSX-T Edge deployment.

[Screenshot: Control Plane Node IP address under Workload Management > Clusters]

If we now point a browser at this IP address, the following landing page is displayed, which, importantly for us, includes the Kubernetes CLI Tools, kubectl and kubectl-vsphere.

[Screenshot: Kubernetes CLI Tools landing page]

Once these tools are downloaded to a desktop/workstation, we can use them to log in to our Supervisor Cluster ‘context’ and query the cluster details.

$ ./kubectl-vsphere login --vsphere-username administrator@vsphere.local --server=20.0.0.1 --insecure-skip-tls-verify

Password: *********
Logged in successfully.

You have access to the following contexts:
   20.0.0.1

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`

$ ./kubectl get nodes -o wide

NAME                               STATUS   ROLES    AGE    VERSION                    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                 KERNEL-VERSION      CONTAINER-RUNTIME
422b0e3d39560ed4ea84169e6b77d095   Ready    master   2d2h   v1.16.7-2+bfe512e5ddaaaa   10.244.0.132   <none>        VMware Photon OS/Linux   4.19.84-1.ph3-esx   docker://18.9.9
422b341edb277e79d5ec7da8b50bf31a   Ready    master   2d2h   v1.16.7-2+bfe512e5ddaaaa   10.244.0.130   <none>        VMware Photon OS/Linux   4.19.84-1.ph3-esx   docker://18.9.9
422ba68078f4aaf4c9ba2afb27d4e945   Ready    master   2d2h   v1.16.7-2+bfe512e5ddaaaa   10.244.0.131   <none>        VMware Photon OS/Linux   4.19.84-1.ph3-esx   docker://18.9.9
esxi-dell-g.rainpole.com           Ready    agent    2d2h   v1.16.7-sph-30923be        10.27.51.7     <none>        <unknown>                <unknown>           <unknown>
esxi-dell-j.rainpole.com           Ready    agent    2d2h   v1.16.7-sph-30923be        10.27.51.122   <none>        <unknown>                <unknown>           <unknown>
esxi-dell-l.rainpole.com           Ready    agent    2d2h   v1.16.7-sph-30923be        10.27.51.124   <none>        <unknown>                <unknown>           <unknown>

$

We can see the same 3 x master VMs and 3 x worker nodes that we saw in the vCenter inventory. Another thing we can do is query the namespaces that exist on the Supervisor Cluster; the list here is quite a bit different from native Kubernetes, and even Enterprise PKS.

$ ./kubectl get ns
NAME                      STATUS   AGE
default                   Active   2d3h
kube-node-lease           Active   2d3h
kube-public               Active   2d3h
kube-system               Active   2d3h
vmware-system-capw        Active   2d3h
vmware-system-csi         Active   2d3h
vmware-system-kubeimage   Active   2d3h
vmware-system-nsx         Active   2d3h
vmware-system-registry    Active   2d3h
vmware-system-tkg         Active   2d3h
vmware-system-ucs         Active   2d3h
vmware-system-vmop        Active   2d3h

$

And if you were so inclined, you could take a look at all of the currently running Pods on the Supervisor Cluster by running kubectl get pods -A.
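For example, to list every Pod across all namespaces, or just those in one of the system namespaces we saw above, the following commands can be used (output omitted here, as it will vary from environment to environment):

$ ./kubectl get pods -A
$ ./kubectl get pods -n kube-system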

Creating our first Namespace

Let’s head back to the vSphere Client, and navigate to Workload Management > Namespaces. This will take us to the following landing page where we can create our first namespace.

[Screenshot: Workload Management > Namespaces landing page]

A namespace here is simply a way of dividing the resources of the cluster between multiple consumers. For all intents and purposes, we can look at namespaces on the Supervisor Cluster as being very similar to vSphere Resource Pools. When creating the namespace, you need to choose a Cluster object from the inventory, and then provide a name and an optional description. In this example, I have simply called it cormac-ns.

[Screenshot: Create Namespace dialog with the cormac-ns namespace]

On successfully creating the namespace, you will be placed into the following view in the vSphere Client. The Status window tells us that the namespace was created successfully. It also has some inventory information, as well as a link to the tools page, which we have already visited. Capacity and Usage allows us to edit CPU, Memory and Storage limits for the namespace. Tanzu Kubernetes is only applicable when we deploy a Tanzu Kubernetes Grid (TKG) guest cluster. We will revisit this in an upcoming post.

[Screenshot: Namespace summary view after creation]

Most of the settings and information displayed here are quite straightforward to digest. I will mention Storage, however. You will need to ‘Add Storage’ to the namespace by selecting a Storage Policy from the list of policies available in this vSphere environment. I am going to keep things simple by selecting the default vSAN storage policy, but of course you can get more granular, depending on the number of hosts in the cluster, as well as the data services that you have enabled on the vSAN cluster.

[Screenshot: Adding a Storage Policy to the namespace]

After the storage policy (or policies) has been assigned to the namespace, it becomes available as a Kubernetes Storage Class in our namespace. Let’s return to the kubectl CLI and demonstrate this. First, we will see that our new namespace (cormac-ns) has been created. Then we will log out and log back in to the Supervisor Cluster to pick up our new ‘context’. Note that there may be other ways to do this, but this is the way that I found to work for me. When we log back in, we can see that the cormac-ns namespace is already set as the current context (the * against it), so there is no need to change the context after logging back in.

$ kubectl get ns
NAME                      STATUS   AGE
cormac-ns                 Active   15m
default                   Active   2d3h
kube-node-lease           Active   2d3h
kube-public               Active   2d3h
kube-system               Active   2d3h
vmware-system-capw        Active   2d3h
vmware-system-csi         Active   2d3h
vmware-system-kubeimage   Active   2d3h
vmware-system-nsx         Active   2d3h
vmware-system-registry    Active   2d3h
vmware-system-tkg         Active   2d3h
vmware-system-ucs         Active   2d3h
vmware-system-vmop        Active   2d3h


$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO                                   NAMESPACE
*         20.0.0.1   20.0.0.1   wcp:20.0.0.1:administrator@vsphere.local


$ ./kubectl-vsphere logout
Your KUBECONFIG context has changed.
The current KUBECONFIG context is unset.
To change context, use `kubectl config use-context <workload name>`
Logged out of all vSphere namespaces.


$ ./kubectl-vsphere login --vsphere-username administrator@vsphere.local --server=20.0.0.1 --insecure-skip-tls-verify

Password:
Logged in successfully.

You have access to the following contexts:
   20.0.0.1
   cormac-ns

If the context you wish to use is not in this list, you may need to try
logging in again later, or contact your cluster administrator.

To change context, use `kubectl config use-context <workload name>`


$ ./kubectl config get-contexts
CURRENT   NAME        CLUSTER    AUTHINFO                                   NAMESPACE
          20.0.0.1    20.0.0.1   wcp:20.0.0.1:administrator@vsphere.local
*         cormac-ns   20.0.0.1   wcp:20.0.0.1:administrator@vsphere.local   cormac-ns


$ ./kubectl get sc
NAME                          PROVISIONER              AGE
vsan-default-storage-policy   csi.vsphere.vmware.com   2d2h

$

With the final command above, which displays the Storage Classes, we can see that the default vSAN storage policy is now available for use by Persistent Volumes on the Supervisor Cluster. The Storage window in the Namespace Summary in the vSphere UI will report the number of Persistent Volume Claims that exist in the namespace. We will see this shortly.
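As a quick aside, here is a minimal sketch of how a Persistent Volume Claim would consume that Storage Class. The claim name and size below are purely illustrative and not taken from this environment:

$ cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc                      # hypothetical claim name, for illustration only
  namespace: cormac-ns
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi                    # example size
  storageClassName: vsan-default-storage-policy
EOF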

We will leave the namespace for the present and do one more action. We will now enable the Harbor Image Registry on the Supervisor Cluster. Once that is enabled, we will push an image up to the Image Registry and use it to deploy our first application on the Supervisor Cluster.

Enable Harbor Image Registry

A really neat feature of the Supervisor Cluster is that it includes an embedded Harbor Image Registry for container images. To enable the Image Registry, select the Cluster object in the vCenter inventory, then navigate to Configure > Namespaces > Image Registry as shown below:

[Screenshot: Configure > Namespaces > Image Registry]

Click on the Enable Harbor button. It prompts you to choose a Storage Policy for the persistent volumes required by the Harbor application (I chose the vSAN default once again). A new namespace called vmware-system-registry is immediately created, and this is where the PodVMs that back the Harbor application are deployed. There are 7 PodVMs created in total, and when deployment completes, a link to the Harbor UI is provided.
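If you prefer the CLI to the vSphere Client for keeping an eye on the rollout, a quick way to check on those PodVMs is to list the Pods in the new namespace (assuming your logged-in user has permission to view that system namespace):

$ ./kubectl get pods -n vmware-system-registry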

[Screenshot: Image Registry page showing Harbor enabled, with a link to the Harbor UI]

And because Cloud Native Storage (CNS) is fully integrated with the Supervisor Cluster, we can see the 4 x Persistent Volumes that were provisioned on behalf of the Harbor application. Pretty cool, huh?

[Screenshot: Harbor Persistent Volumes in Cloud Native Storage]

Let’s also take a look at the Harbor Image Registry namespace as that has some interesting info now.

[Screenshot: vmware-system-registry namespace summary]

We can see that the Storage window now reports 4 PVCs, as well as a Storage Policy (vSAN default) with a 200GB limit. Capacity and Usage is also showing some consumption. If we click on EDIT LIMITS and expand Storage, we see that although there is no overall storage limit, the amount of storage that can be consumed via the vSAN default storage policy is 200GB.

[Screenshot: EDIT LIMITS view showing the 200GB limit on the vSAN default storage policy]

The last item to point out is that there are now 7 running Pods (blue line). Some Pods were in a state of Pending (yellow line), but now they are running – this is normal. There are no failed Pods (red line).

Push an Image to Harbor

The next task is to push an image to the Harbor Image Registry. The first step is to establish trust. To do that, you will need to log in to the Harbor UI using SSO credentials and get the Registry Certificate from the Repository, as shown below.

[Screenshot: Registry Certificate download in the Harbor UI]

This certificate will then have to be copied to your docker certificates location on your local laptop/desktop/workstation. On my Ubuntu 17.10 system, this location is /etc/docker/certs.d. You then create a directory there to match the FQDN or IP address of your Harbor Registry and place the downloaded registry cert in it. Otherwise you will hit x509 errors when trying to communicate with the registry.
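As a rough sketch of those steps on a Linux workstation, assuming the registry address of 20.0.0.2 used below and that the downloaded certificate file is called ca.crt (adjust both to match your environment):

$ sudo mkdir -p /etc/docker/certs.d/20.0.0.2        # directory name must match the registry FQDN or IP
$ sudo cp ~/Downloads/ca.crt /etc/docker/certs.d/20.0.0.2/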

With the cert in place, we can pull down an image from an internet repository, tag it and push it to our Harbor registry. The push command format is registry/project/repository:tag. My Harbor registry is 20.0.0.2 (another Ingress IP address provided by NSX-T). My project is cormac-ns; a project is created automatically for every namespace in the Supervisor Cluster. The repository is busybox, and since no tag is provided, :latest is used.

$ docker pull k8s.gcr.io/busybox
Using default tag: latest
latest: Pulling from busybox
a3ed95caeb02: Pull complete
138cfc514ce4: Pull complete
Digest: sha256:d8d3bc2c183ed2f9f10e7258f84971202325ee6011ba137112e01e30f206de67
Status: Downloaded newer image for k8s.gcr.io/busybox:latest
k8s.gcr.io/busybox:latest


$ docker tag k8s.gcr.io/busybox 20.0.0.2/cormac-ns/busybox


$ docker push 20.0.0.2/cormac-ns/busybox
The push refers to repository [20.0.0.2/cormac-ns/busybox]
5f70bf18a086: Pushed
44c2569c4504: Pushed
latest: digest: sha256:d2af0ba9eb4c9ec7b138f3989d9bb0c9651c92831465eae281430e2b254afe0d size: 1146
$

The image is now stored in our Harbor Image Registry and can be referenced by any manifest YAML files that we use to deploy apps on the Supervisor Cluster.
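To give a flavour of what that looks like, here is a minimal, hypothetical Pod manifest that pulls the busybox image we just pushed from the embedded Harbor registry (the Pod name and command are just examples):

$ cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test                              # hypothetical Pod name, for illustration only
  namespace: cormac-ns
spec:
  containers:
  - name: busybox
    image: 20.0.0.2/cormac-ns/busybox:latest      # image pushed to the embedded Harbor registry
    command: ["sleep", "3600"]                    # keep the container running so the Pod stays up
EOF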

[Screenshot: busybox image in the cormac-ns project in Harbor]

Create our first StatefulSet

Just to complete this first exercise with vSphere with Kubernetes, I’ll create a small application in my own namespace. I will download the image, push it to Harbor, and then use it within my own manifest file. The application will be my trusty Cassandra StatefulSet application that I’ve used many times before. You’ve seen how to do the push and pull to Harbor, so I won’t repeat those steps here. Instead, I will simply create the Service and the 3 x replica StatefulSet for the application, and query the StatefulSet, including its Service, Pods and PVCs, after deployment.
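What follows is an abridged sketch based on the standard Kubernetes Cassandra example, so treat it as illustrative rather than the exact manifest I deployed; the image name assumes the Cassandra image was pushed to the embedded Harbor registry as described above, and environment variables, resource requests and probes have been omitted for brevity:

$ cat <<EOF | ./kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: cassandra
  namespace: cormac-ns
  labels:
    app: cassandra
spec:
  ports:
  - port: 9042                                       # Cassandra CQL port
  selector:
    app: cassandra
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  namespace: cormac-ns
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: 20.0.0.2/cormac-ns/cassandra:latest   # assumption: Cassandra image pushed to Harbor
        ports:
        - containerPort: 9042
          name: cql
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data                           # produces PVCs named cassandra-data-cassandra-N
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: vsan-default-storage-policy
      resources:
        requests:
          storage: 1Gi
EOF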

Let’s use kubectl to query this StatefulSet after it has been deployed:

$ ./bin/kubectl get sts
NAME        READY   AGE
cassandra   3/3     6m57s

This is the Service:

$ ./bin/kubectl get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
cassandra   ClusterIP   10.96.0.35   <none>        9042/TCP   8m38s

These are the Pods:

$ ./bin/kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
cassandra-0   1/1     Running   0          6m56s
cassandra-1   1/1     Running   0          6m8s
cassandra-2   1/1     Running   0          91s

And finally the PVCs, which also appear in CNS of course:

$ ./bin/kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
cassandra-data-cassandra-0   Bound    pvc-019d8f46-ccc8-402c-a038-6acd36abd103   1Gi        RWO            vsan-default-storage-policy   7m46s
cassandra-data-cassandra-1   Bound    pvc-3143a4d4-0d23-482a-8720-5e2912cd3e33   1Gi        RWO            vsan-default-storage-policy   6m58s
cassandra-data-cassandra-2   Bound    pvc-a4c5e528-7868-46d0-9df0-b5304ed90925   1Gi        RWO            vsan-default-storage-policy   3m7s

Here are the volumes in CNS, where I have filtered on app:cassandra so that we do not display the Harbor volumes we looked at earlier:

[Screenshot: Cassandra volumes in Cloud Native Storage]

Let’s now return to the Namespace view in the vSphere Client. We can now see some more interesting details around Storage, Capacity and Usage, and Pods. There are now 3 running Pods (I requested 3 replicas in my StatefulSet manifest), along with 3 Persistent Volume Claims, which we saw in the CNS view previously.

[Screenshot: cormac-ns namespace summary showing Pods and Persistent Volume Claims]

Pretty neat. OK – I hope this has given you a reasonable appreciation of some of the things that can be done in the Supervisor Cluster. Check back through my previous VCF 4.0 posts if you want to see how easy it was to deploy it in the first place. Stay tuned, and we will take a close look at the Tanzu Kubernetes Grid guest cluster, deployed in a namespace, in a future post.

