Deploying the Percona Kubernetes Operator for XtraDB Cluster in Amazon (AWS)

Since Amazon is one of the most widely used cloud vendors, it is only natural to ask, "How can Kubernetes be used in AWS?" The answer: not that differently than with other cloud vendors. What one needs is two things (and this applies universally): a Kubernetes cluster plus the Percona XtraDB Cluster (PXC). Let's start by creating the K8S cluster.

Amazon EKS

Like every other major cloud vendor, Amazon has its own service that makes creating and maintaining a K8S cluster easy: Amazon Elastic Kubernetes Service (EKS). There are two ways to create the cluster: one uses a tool called eksctl (the one we are going to use), and the other uses the AWS Management Console, which is a more manual approach. Before deploying the cluster with eksctl, a few requirements need to be met:

  • Have kubectl installed
  • Have the latest AWS CLI installed
  • Have the AWS IAM Authenticator installed
  • And, of course, have eksctl installed

Installing kubectl

There is more than one way to get kubectl. We are going to install the binary hosted by Amazon (compatible with the upstream version). The following steps are for Linux:

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

Once that is done, you can verify that the installation was done properly by asking for the version: kubectl version --short --client .

[root@ip-192-168-1-239 ~]# kubectl version --short --client
Client Version: v1.14.7-eks-1861c5

All good!

Installing the AWS CLI

To get the new (experimental) AWS CLI version 2 , run:

curl "https://d1vvhvl2y92vvt.cloudfront.net/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

Verifying:

[root@ip-192-168-1-239 ~]# /usr/local/bin/aws2 --version
aws-cli/2.0.0dev3 Python/3.7.3 Linux/3.10.0-1062.1.2.el7.x86_64 botocore/2.0.0dev2

You can add /usr/local/bin to the PATH environment variable so you can use the "aws2" command directly.
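For example, for the current shell session (add the same line to your shell profile to make it permanent):

export PATH=$PATH:/usr/local/bin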

Installing AWS IAM Authenticator

Similar to the previous installations, just run the following commands as described in the AWS IAM Authenticator documentation:

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin

And validate:

[root@ip-192-168-1-239 ~]# aws-iam-authenticator help
A tool to authenticate to Kubernetes using AWS IAM credentials
 
Usage:
aws-iam-authenticator [command]
 
......

Don’t forget to configure your AWS CLI credentials, for example (not real info):

$ aws2 configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Installing eksctl

The installation instructions are similar. Follow these steps:

curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin

And verify:

[root@ip-192-168-1-239 ~]# eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.12.0"}

Now we are ready to deploy the Kubernetes cluster.

Creating the Kubernetes Cluster

And now the moment of truth. Creating the cluster takes just one command (with several parameters), and that is pretty much all. For this case, the command looks like this:

eksctl create cluster \
--name percona1 \
--version 1.14 \
--region us-east-2 \
--nodegroup-name percona-standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key /root/.ssh/id_rsa.pub \
--managed

The parameters used are just a small subset of everything available (the full list can be seen by running "eksctl create cluster --help"). In this case, we asked EKS to create a cluster named percona1 using K8S version 1.14, in the AWS region us-east-2 (Ohio), with a nodegroup named percona-standard-workers that uses t3.medium EC2 instances for the nodes, a total of three nodes (min 1, max 4), and SSH access to the nodes enabled with the SSH public key provided.

After the command is executed, cluster creation begins. This process is not fast and can take around 15 minutes to finish, so be patient. Note that all these parameters can also be passed using a config file in YAML format, as explained in the documentation.
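As a rough sketch (assuming the eksctl ClusterConfig schema; check the eksctl docs for your version), an equivalent config file for the command above could look like this, run with "eksctl create cluster -f cluster.yaml":

# cluster.yaml - a sketch of the flag-for-flag equivalent of the command above
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: percona1
  region: us-east-2
  version: "1.14"

managedNodeGroups:
  - name: percona-standard-workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
    ssh:
      allow: true
      publicKeyPath: /root/.ssh/id_rsa.pub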

The output will look like this:

[ℹ] eksctl version 0.12.0
[ℹ] using region us-east-2
[ℹ] setting availability zones to [us-east-2a us-east-2b us-east-2c]
[ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-east-2b - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] using SSH public key "/root/.ssh/id_rsa.pub" as "eksctl-percona1-nodegroup-percona-standard-workers-5e:8e:f6:14:2f:5a:f1:40:6f:33:e9:53:4a:13:c5:40"
[ℹ] using Kubernetes version 1.14
[ℹ] creating EKS cluster "percona1" in "us-east-2" region with managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=percona1'
[ℹ] CloudWatch logging will not be enabled for cluster "percona1" in "us-east-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --region=us-east-2 --cluster=percona1'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "percona1" in "us-east-2"
[ℹ] 2 sequential tasks: { create cluster control plane "percona1", create managed nodegroup "percona-standard-workers" }
[ℹ] building cluster stack "eksctl-percona1-cluster"
[ℹ] deploying stack "eksctl-percona1-cluster"
[ℹ] building managed nodegroup stack "eksctl-percona1-nodegroup-percona-standard-workers"
[ℹ] deploying stack "eksctl-percona1-nodegroup-percona-standard-workers"
[ℹ] all EKS cluster resources for "percona1" have been created
[ℹ] saved kubeconfig as "/root/.kube/config"
[ℹ] nodegroup "percona-standard-workers" has 3 node(s)
[ℹ] node "ip-192-168-17-143.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-62-135.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-86-219.us-east-2.compute.internal" is ready
[ℹ] waiting for at least 1 node(s) to become ready in "percona-standard-workers"
[ℹ] nodegroup "percona-standard-workers" has 3 node(s)
[ℹ] node "ip-192-168-17-143.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-62-135.us-east-2.compute.internal" is ready
[ℹ] node "ip-192-168-86-219.us-east-2.compute.internal" is ready
[ℹ] kubectl command should work with "/root/.kube/config", try 'kubectl get nodes'
[ℹ] EKS cluster "percona1" in "us-east-2" region is ready
[root@ip-192-168-1-239 ~]#

You’ve got yourself a K8S cluster in AWS!

[root@ip-192-168-1-239 ~]# kubectl get nodes
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-17-143.us-east-2.compute.internal   Ready    <none>   12m   v1.14.7-eks-1861c5
ip-192-168-62-135.us-east-2.compute.internal   Ready    <none>   12m   v1.14.7-eks-1861c5
ip-192-168-86-219.us-east-2.compute.internal   Ready    <none>   12m   v1.14.7-eks-1861c5
[root@ip-192-168-1-239 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   16m

Now we can install the Percona XtraDB Cluster operator.

Deploying the Percona Kubernetes Operator for Percona XtraDB Cluster

One can follow the instructions described in the document Install Percona XtraDB Cluster on Kubernetes, so let's do that.

Clone the repo and get into the dir:

git clone -b release-1.3.0 https://github.com/percona/percona-xtradb-cluster-operator
cd percona-xtradb-cluster-operator

Deploy the Custom Resource Definitions (CRDs), create the pxc namespace (and switch the context to it), then deploy the Role-Based Access Control (RBAC) rules, the Operator, the Secrets, and finally the actual cluster:

[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/crd.yaml
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusters.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterbackups.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbclusterrestores.pxc.percona.com created
customresourcedefinition.apiextensions.k8s.io/perconaxtradbbackups.pxc.percona.com created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl create namespace pxc
namespace/pxc created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl config set-context $(kubectl config current-context) --namespace=pxc
Context "dgb-iam@percona1.us-east-2.eksctl.io" modified.
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/rbac.yaml
role.rbac.authorization.k8s.io/percona-xtradb-cluster-operator created
serviceaccount/percona-xtradb-cluster-operator created
rolebinding.rbac.authorization.k8s.io/service-account-percona-xtradb-cluster-operator created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/operator.yaml
deployment.apps/percona-xtradb-cluster-operator created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/secrets.yaml
secret/my-cluster-secrets created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/ssl-secrets.yaml
secret/my-cluster-ssl created
secret/my-cluster-ssl-internal created
[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl apply -f deploy/cr.yaml
perconaxtradbcluster.pxc.percona.com/cluster1 created
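The pods take a few minutes to come up; you can watch their status in real time with kubectl's standard watch flag:

kubectl get pods -w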

Do we have PODs?

[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl get pods
NAME                                               READY   STATUS    RESTARTS   AGE
cluster1-proxysql-0                                3/3     Running   0          3m8s
cluster1-proxysql-1                                3/3     Running   0          2m45s
cluster1-proxysql-2                                3/3     Running   0          2m15s
cluster1-pxc-0                                     1/1     Running   0          3m8s
cluster1-pxc-1                                     1/1     Running   0          2m17s
cluster1-pxc-2                                     1/1     Running   0          83s
percona-xtradb-cluster-operator-745f649b97-842kd   1/1     Running   0          5m45s

Yeah, we do! Now you have yourself a PXC cluster running on K8S:

[root@ip-192-168-1-239 percona-xtradb-cluster-operator]# kubectl run -i --rm --tty percona-client --image=percona:5.7 --restart=Never -- bash -il
If you don't see a command prompt, try pressing enter.
bash-4.2$ mysql -h cluster1-proxysql -uroot -proot_password
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 294
Server version: 5.7.28 (ProxySQL)
 
Copyright (c) 2009-2019 Percona LLC and/or its affiliates
Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
 
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> show status like 'w%cluster%';
+--------------------------+--------------------------------------+
| Variable_name            | Value                                |
+--------------------------+--------------------------------------+
| wsrep_cluster_weight     | 3                                    |
| wsrep_cluster_conf_id    | 3                                    |
| wsrep_cluster_size       | 3                                    |
| wsrep_cluster_state_uuid | 293dbaa9-3935-11ea-9b85-16abbd72615e |
| wsrep_cluster_status     | Primary                              |
+--------------------------+--------------------------------------+
5 rows in set (0.02 sec)
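By the way, the root_password value used above comes from deploy/secrets.yaml, where values are stored base64-encoded. If you change it there, you can always read it back from the Secret; a quick sketch (assuming the default "root" key name used by the operator's secrets file):

kubectl get secret my-cluster-secrets -o jsonpath='{.data.root}' | base64 --decode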

Note that the Operator comes with ProxySQL; here's the design overview. Now, to delete the cluster (and avoid cost surprises), run the following command:

eksctl delete cluster --region us-east-2 --name percona1

Interested in learning more?

Be sure to get in touch with Percona's Training Department to schedule a hands-on tutorial session with our K8S Operator. Our instructors will guide you and your team through the whole setup process and teach you how to take backups, handle recovery, scale the cluster, and manage high availability with ProxySQL.

