If you have been following this DevOps series you might remember that cool banana-sized kubernetes cluster we built in Part 4 with some Raspberry Pi boards. It was a great way to build a fully functional setup from scratch and also learn a lot in the process.
However, since then things have evolved. And now, instead of just k8s, we also have k3s (which, just judging by the name, must include at least 5 things less). k3s is an easy-to-install, lightweight but fully-compliant kubernetes distribution (40MB single binary and 512MB RAM) optimized for ARM architectures… like our RPi setup. It leaves out several heavy components that are not really necessary in a common setup, like legacy features, embedded plugins, and other things like… Docker. Yes, you read that right. It does not include Docker. What!?! Well, it includes a better option: a low-level component called containerd, much lighter than Docker.
Sounds like a great option for our small cluster, right? Time to get our hands dirty!
Ketchup for kubernetes
To make installation as simple and quick as possible we will use a tool called k3sup (pronounced “ketchup”). So, let’s get started by running the following steps from your workstation.
First you need to install k3sup:
curl -sLS https://get.k3sup.dev | sh
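The script downloads the k3sup binary (typically into your current directory). If that directory is not on your PATH you will probably want to move the binary there and confirm it works; a minimal sketch, assuming a Linux or macOS workstation:
sudo install k3sup /usr/local/bin/k3sup   # copy the binary somewhere on your PATH
k3sup version                             # confirm it runs and print the installed version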
Then, from your workstation, you can install k3s on your master RPi node (i.e. the one with IP 192.168.1.100):
k3sup install --ip 192.168.1.100 --user pi
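k3sup logs in to the node over SSH, so it assumes your public key is already authorized on the Pi. If your key lives somewhere non-standard you can point at it explicitly; a hedged example (check k3sup install --help for the exact flags in your version):
k3sup install --ip 192.168.1.100 --user pi --ssh-key ~/.ssh/id_rsa   # explicit key path, adjust to yours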
The kubeconfig file is saved in your local directory; point your KUBECONFIG environment variable at it to start using it (please make sure you specify the complete path to the file):
export KUBECONFIG=~/Downloads/kubeconfig
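As a quick sanity check that kubectl is now pointed at the new cluster (plain kubectl, nothing k3s-specific):
kubectl config current-context   # should show the context created by k3sup
kubectl cluster-info             # should print the API server address of your master RPi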
In (literally) less than a minute you should be able to see your kubernetes master node up and ready:
kubectl get nodes
Wow, that was quick, huh?
Let’s now configure the rest of RPi boards as worker nodes, by specifying their IP addresses (192.168.1.101-103) and the master node IP address (192.168.1.100):
k3sup join --ip 192.168.1.101 --server-ip 192.168.1.100 --user pi
k3sup join --ip 192.168.1.102 --server-ip 192.168.1.100 --user pi
k3sup join --ip 192.168.1.103 --server-ip 192.168.1.100 --user pi
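If you have more boards to join, or just dislike repeating yourself, the same thing can be scripted with a small shell loop; a sketch, assuming every node uses the same pi user:
for ip in 192.168.1.101 192.168.1.102 192.168.1.103; do
  k3sup join --ip "$ip" --server-ip 192.168.1.100 --user pi
done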
Again, in less than a minute you should see all of them up and running:
kubectl get nodes
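For a bit more detail, the wide output also shows each node’s internal IP and the k3s version it is running. And if you are curious about the containerd part mentioned earlier, you can ssh into any node and list its containers with the crictl tool bundled inside the k3s binary (a sketch; the exact subcommand may vary slightly between k3s releases):
kubectl get nodes -o wide
ssh pi@192.168.1.101 sudo k3s crictl ps   # list containers managed by containerd on that node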
That’s all! If you are a fast typist you can go from zero to a fully-configured and ready-to-use kubernetes cluster in just a few minutes.
THIS has to be the definition of “automagical”… so cool!!!
On top of that, k3s also includes traefik installed by default, so you don’t need to install a bare-metal load-balancer or an ingress controller. Everything is included and ready for you to use!
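You can see this for yourself: in a default k3s install traefik runs in the kube-system namespace and is exposed through the built-in service load balancer. A quick check (assuming the default k3s deployment, where the service is simply named traefik):
kubectl get pods -n kube-system
kubectl get svc traefik -n kube-system   # should show a LoadBalancer service reachable on your LAN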
From Heapster to Metrics Server
Something else that changed during the last year is that the solution we used for monitoring cluster resource usage (Heapster) has been deprecated. A suitable replacement is Kubernetes Metrics Server, a cluster-wide aggregator of resource usage data. It provides access to CPU & RAM usage per node and per pod, via CLI and API.
To install it please clone the required repo into your workstation:
git clone https://github.com/kubernetes-incubator/metrics-server.git
Then edit the deployment file and replace the default image name with the appropriate ARM one (k8s.gcr.io/metrics-server-arm:v0.3.2):
vi metrics-server/deploy/1.8+/metrics-server-deployment.yaml
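If you prefer a one-liner over manual editing, a sed command along these lines should do the trick; a sketch that assumes the file contains a single image: line and GNU sed (on macOS use sed -i ''):
sed -i 's|image: .*|image: k8s.gcr.io/metrics-server-arm:v0.3.2|' metrics-server/deploy/1.8+/metrics-server-deployment.yaml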
You are now ready to apply the required Metrics Server manifests:
kubectl apply -f metrics-server/deploy/1.8+ -n kube-system
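Give the deployment a few seconds and check that its pod is up before trying the commands below (the pod name will have a generated suffix):
kubectl get deployment metrics-server -n kube-system
kubectl get pods -n kube-system | grep metrics-server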
Once the pod is ready you will be able to access resource usage info via CLI:
kubectl top node
kubectl top pod
Or you can also browse its API, as you would with any other kubernetes API:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq .
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq .
(note: to get nicely formatted output you will need jq installed on your system, e.g. brew install jq on your Mac)
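jq is also handy for pulling out just the fields you care about. For example, something along these lines (a sketch based on the shape of the v1beta1 metrics objects) prints one compact entry per node:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.items[] | {node: .metadata.name, cpu: .usage.cpu, memory: .usage.memory}'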
Alternatively, if you would rather use HTTP to browse the API (e.g. with curl or wget), you can always use kubectl proxy (a reverse proxy that helps with locating the API server and with authentication):
kubectl proxy --port=8080 &
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/nodes
curl http://localhost:8080/apis/metrics.k8s.io/v1beta1/pods
Ready to rock!
With k3sup you have been able to easily install k3s and get your kubernetes cluster ready in a matter of minutes! It is now ready to get some applications deployed in it, so please feel free to try it out with our classic example microservices-based application: myhero. You can find how to do it in my previous DevOps Part 4 blog post and associated Learning Lab.
Our kubernetes cluster still looks as cool as ever, but now with k3s it has much better performance and can be fully configured from scratch in just minutes!
See you in my next post, stay tuned!
Any questions or comments, please let me know in the comments section below, on Twitter, or on LinkedIn.