Pipeline is Banzai Cloud’s Kubernetes container management platform, which allows enterprises to develop, deploy and securely scale container-based applications in multi- and hybrid-cloud environments.
While one of Pipeline's core features is to automate the provisioning of Kubernetes clusters across major cloud providers (Amazon, Azure, Google, Oracle, Alibaba Cloud) and on-premise environments (VMware and bare metal), we strongly believe that Kubernetes as a Service should be capable of much more.
Pipeline has been a key enabler of multi- and hybrid-cloud strategies, providing both a unified cockpit for operations and a high level of workflow and workload portability for developers across major clouds and datacenters, in four different ways.
A hybrid cloud is a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
Requirements of Kubernetes as a Service platforms
Today, we can provision Kubernetes clusters with the push of a button, a single CLI command, or a RESTful API call. Let's take a look at the bare minimum features a Kubernetes as a Service (KaaS) platform should offer out of the box:
- Flexibility
    - The ability to use your favorite cloud provider, your own datacenter, or even bring your own Kubernetes cluster, and any combination of the above
    - The option to choose the Kubernetes distribution of your preference:
        - Banzai Cloud's CNCF-certified Kubernetes distribution, PKE, anywhere (both in the cloud and in datacenters), or
        - the distributions managed by the cloud providers (Pipeline supports Alibaba ACK, Amazon EKS, Azure AKS, Google GKE, and Oracle OKE)
- Operational stability and monitoring
    - Seamless upgrading of Kubernetes clusters to newer versions while keeping the SLOs
    - Disaster recovery with periodic backups and the ability to do full cluster state restores from snapshots
    - Centralized log collection (application, host, Kubernetes, audit logs, etc.) from all the clusters
    - Federated monitoring and dashboards that give insight into your clusters and applications, with default alerts
    - Correlation between metrics and logs
    - Easy-to-use dashboards
    - A control plane to manage clusters running in multiple locations and provide a single, unified view
    - A unified storage API
- Cost-efficiency
    - Multi-dimensional autoscaling (for both clusters and applications) based on custom metrics
    - The option to save costs with spot and preemptible instances while maintaining SLAs
- Security
    - Secure storage of secrets (cloud credentials, keys, certificates, passwords, etc.) in Vault
    - Direct injection of secrets into pods, bypassing Kubernetes secrets (see the sketch after this list)
    - Security scans throughout the entire deployment lifecycle
    - DNS and certificate management for your workloads
- Customization and integration with external services
    - Integration with enterprise services such as Docker registries, Git, AAA or SIEM providers (Active Directory, LDAP, OpenID, GitLab, GitHub Enterprise, etc.)
    - Custom and configurable Kubernetes schedulers
    - Multiple ingress controllers
    - Helm repository as a service
    - A catalog of production-ready deployments of popular application frameworks and stacks such as Kafka, Istio, Spark, Zeppelin, TensorFlow, Spring, Node.js, etc.
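To make the secret injection bullet above more concrete, here is a minimal sketch of how the Bank-Vaults mutating webhook is typically used: the pod is annotated with the Vault address and role, and any environment variable whose value starts with vault: is resolved from Vault when the container starts, instead of ever being stored as a Kubernetes Secret. The annotation names follow the open source Bank-Vaults project; the address, role and secret path below are placeholders, not values from a real installation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: payments
  annotations:
    # Point the Bank-Vaults webhook at Vault; both values are placeholders
    vault.security.banzaicloud.io/vault-addr: "https://vault.default.svc:8200"
    vault.security.banzaicloud.io/vault-role: "default"
spec:
  containers:
    - name: app
      image: example/payments:1.0   # placeholder image
      env:
        # Resolved from Vault by the webhook at startup; never written to a K8s Secret
        - name: DB_PASSWORD
          value: "vault:secret/data/payments/db#password"   # placeholder path
```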
Requirements of a Kubernetes as a Service control plane
Going through the list above of what we believe are the bare minimum out-of-the-box features a Kubernetes as a Service platform should offer, we realized that there must be lots of components (roughly 40+) running on the control plane. These open source components are selected from the (in)famous CNCF landscape. Pipeline makes them work together seamlessly and provides all the necessary glue code for configuration, resiliency, security, scaling, and external integrations; what's more, it provides a rich UI, CLI, and API to manage them with ease.
While we were designing Pipeline, we envisioned customers with diverse levels of Kubernetes familiarity getting stuck in yaml hell. For both Pipeline and the Pipeline control plane (called the Pipeline Installer), our design principles were clear:
- Complexity should be kept at a Heroku-level of simplicity.
- On the other hand, we wanted clients to have the ability to overwrite any of the default settings or replace any of Pipeline's components.
- Anything in between was fair game.
The universal tool that resulted from these principles was the Pipeline Installer (part of the banzai-cli), which allows you to install and configure your own Kubernetes as a Service control plane on your favorite environment and kickstart your Kubernetes service provider experience in minutes.
Let's go into more detail about our design principles:
- It must be very easy to kickstart the experience and have a control plane with minimum requirements using the Pipeline Installer CLI. You only need Docker or containerd on the machine(s) that will run the Kubernetes as a Service control plane.
- It should run on multiple Linux distributions for production deployments, but also on macOS for quick experimentation.
- It should require no additional knowledge or tooling beyond Kubernetes. All of the configurations, easy or complex, should feel familiar to anybody who already knows Kubernetes.
- It has to be extensible. The open source version of Pipeline follows the "batteries included but replaceable" principle, but it should also allow custom, customer-specific extensions. Over the last three years of operating Kubernetes clusters for our customers, we've learned that each customer environment is different, and the control plane should allow for the easy replacement of components and configurations.
- End users of the Kubernetes as a Service platform would like to host the control plane in their preferred environment, ranging from KIND to any Kubernetes distribution or environment (PKE, EC2, EKS, GKE, etc.).
Let's take a look at how you can become the operator of your own (or someone else's) Kubernetes as a Service platform, in minutes.
Install and configure the Pipeline platform
The easiest way to kickstart your KaaS experience is to follow along with Pipeline's extensive documentation. As mentioned above, the control plane can run on multiple supported environments, so choose your preferred one from the quickstart guide. Have a quick look, but assuming you'd like to run the control plane on Amazon EC2, the installation is as simple as:
banzai pipeline up --provider=ec2
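If you just want to experiment on your laptop before touching a cloud account, the quickstart also covers local targets; a sketch, assuming the kind provider name matches the one in your banzai-cli version:

banzai pipeline up --provider=kind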
Setting aside simplicity for a moment, what's most exciting is your ability to customize the capabilities of the control plane, and thus the features of the Kubernetes clusters launched with Pipeline. Let's go through three different setups with multiple configuration examples. Note that Banzai Cloud customers receive their own generated documentation based on their requirements: for example, the selected cloud or datacenter, load balancer, certificate management option, preferred authentication/authorization provider, et cetera.
Let's assume you'd like to set up the control plane on an EC2 instance that is securely accessible to others, so they can start using the platform features. Once you have downloaded the Banzai CLI (curl https://getpipeline.sh | sh) and its prerequisites have been met (either Docker or containerd is installed on the machine where you are running the CLI), you can run:
banzai pipeline up --init --workspace=installer-ec2-test --provider=ec2
Let's see what this simple banzai CLI command does behind the scenes:
- It launches an instance on Amazon EC2.
- Once the VM is up and running, it installs our CNCF-certified Kubernetes distribution, PKE. By now you've probably guessed that the Pipeline control plane runs on Kubernetes as well.
- Using Helm and Kubernetes manifests it installs all the control plane components.
Let's go through some of the components it installs that are essential for a cloud-agnostic Kubernetes as a Service provider:
- Pipeline - does all the glue and heavy lifting
- Pipeline UI - a highly refined and intuitive UI to manage your Kubernetes clusters, deployments, and all the integrated services (logging, monitoring, security scans, authn/z, disaster recovery, DNS, storage, ingress, etc.)
- Telescopes - a cost-aware recommender system which turns resource requirements into infrastructure.
- Cloudinfo - Cloud instance type and price information as a service. Tracks the price and availability of cloud instance types and services, and provides the necessary meta information for Telescopes to recommend like for like infrastructures across clouds.
- Bank-Vaults - a Vault operator and secret injection webhook.
- Dex - an OpenID Connect Identity (OIDC) and OAuth 2.0 Provider with Pluggable Connectors, maintained by Banzai Cloud.
- Cadence - a distributed, scalable, durable, and highly available orchestration engine to execute asynchronous long-running business logic in a scalable and resilient way.
- A database for Pipeline (MySQL by default; other solutions like PostgreSQL are supported as well).
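If you are curious about what actually landed on the control plane cluster, you can inspect it with the usual Kubernetes tooling. A minimal sketch, assuming your kubeconfig points at the control plane cluster; the banzaicloud namespace name below is an assumption and may differ in your installation:

```
# List the control plane workloads (namespace name is an assumption)
kubectl get pods --namespace banzaicloud

# List the Helm releases the installer deployed, across all namespaces
helm list --all-namespaces
```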
Once the installation is complete, the CLI outputs the access and login details of the control plane (these can be customized):
```
pipeline-address = https://ec2-xx-yyy-4-zzz.us-west-1.compute.amazonaws.com/
pipeline-password = xyzackead3
pipeline-username = admin@example.com
```
Once you have logged in, you're ready to start spinning up clusters through the UI or CLI, and use all of the features that come enabled with the default installation.
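As an illustration of the CLI route, here is a hedged sketch of creating a cluster from a descriptor. The banzai cluster create subcommand comes with banzai-cli, but the descriptor fields below (name, cloud, location, secret name) are only indicative of Pipeline's cluster create request and may differ between versions; a real request also describes the node pools under a provider-specific properties section. Treat the whole thing as a placeholder and check banzai cluster create --help for the exact input format.

```
# Create a cluster from a descriptor file; field names are indicative only and
# may differ between Pipeline versions - check 'banzai cluster create --help'.
cat > cluster.json <<'EOF'
{
  "name": "demo-cluster",
  "cloud": "amazon",
  "location": "us-west-1",
  "secretName": "my-aws-secret"
}
EOF
banzai cluster create <cluster.json   # exact flag/stdin handling depends on the CLI version
```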
The Pipeline Installer (banzai-cli) supports working with multiple workspaces, as seen above. Workspaces allow you to manage multiple Pipeline installations on a per-environment or per-team basis. The Installer also lets you share your workspace through version control, so multiple administrators can work in the same workspace, and parallel executions can be prevented with built-in locks. Workspaces hold all the information required to set up a fully functional Pipeline installation, from encrypted secrets to configuration files and cloud state. You can specify a workspace path via the --workspace flag; if not otherwise specified, a default workspace is used. We at Banzai Cloud manage multiple installations of Pipeline, ranging from our free service to multiple internal development environments and customer installations.
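In practice, that just means passing a different workspace per environment to the same commands; a small sketch (the pipeline down subcommand is the tear-down counterpart of up, but double-check its name and flags against your banzai-cli version):

```
# One workspace per environment; each keeps its own config, secrets and state
banzai pipeline up --workspace=prod    --provider=ec2
banzai pipeline up --workspace=staging --provider=ec2

# Tear an environment down using the same workspace that created it
banzai pipeline down --workspace=staging
```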
In this setup, we showcase a customer installation that hosts the control plane on Amazon EC2, requires a high level of customization, and also uses the managed services of the cloud provider:
- Instead of using PKE, the control plane runs on EKS
- Instead of Pipeline’s own databases it uses Amazon Aurora as the Pipeline persistent store
- It uses AWS KMS Encryption
- For authentication and authorization it uses Google Auth (thus Dex is configured to use Google as an OAuth2 provider)
Now, let's examine the process by which this is done. The banzai CLI is highly extensible, can run the available CLI commands on extended or customer-specific Docker images (delivered as part of a commercial subscription package), and is configured with the de facto language of Kubernetes, yaml. In our case, the same CLI command will launch an EKS cluster on Amazon, configure an autoscaling or managed node pool, set and integrate the service endpoints, and so on. Let's see the yaml snippet in question:
```yaml
providerConfig:
  cluster_name: pipeline-prod
  region: us-west-2
  tags:
    banzaicloud-pipeline-controlplane-uuid: f834d6a7-9c0c-4231-96d7-d2bb57ec9aa8
  public_access_cidrs:
    - 1.2.3.4/32
  kubernetes_version: 1.15
  endpoint_private_access: true
  endpoint_public_access: true
  node_pools:
    - name: pool1
      ami_type: "AL2_x86_64"
      spot_price: "0.249"
      desired_capacity: 3
      instance_type: "c4.xlarge"
      key_name: admin-key
      max_size: 6
      min_size: 3
```
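Where does a snippet like this live? In the workspace: the installer keeps the user-editable configuration there (typically a values.yaml, though the file name and the default workspace path below are assumptions that may vary by CLI version), and re-running the up command against the same workspace applies the change:

```
# Edit the configuration kept in the workspace, then re-apply it
vi ~/.banzai/pipeline/<workspace>/values.yaml   # path and file name are assumptions
banzai pipeline up --workspace=<workspace>
```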
The following snippet allows the user to customize the Amazon RDS instance, also managed by Pipeline:
```yaml
mysql:
  deletion_protection: true
  create_db_parameter_group: true
  parameter_group_family: mysql5.7
  parameter_group_name: alpha-mysql-57
  use_parameter_group_name_prefix: false
  parameter_group_description: Parameter groups for XYZ
  parameters:
    - name: "max_allowed_packet"
      value: "1073741824"
```
You can also use and customize AWS certificates:
```yaml
traefik:
  ...
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-xxx-x:123456789012:certificate/12345678-90ab-cdef-ghij-klmnopqrstuv
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: 443
```
You can configure a central location to store all the logs, in this example in S3:
```yaml
logging:
  ...
  s3:
    enabled: true
    region: "us-east-1"
    bucket: "customer-log-bucket"
    keyAuth:
      enabled: true
      accessKey: ${AWS_ACCESS_KEY_ID}
      secretKey: ${AWS_SECRET_ACCESS_KEY}
```
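Note the ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY} placeholders above: the intent is to keep credentials out of the file itself and resolve them from the environment at install time (exactly where that substitution happens depends on your setup), for example:

```
# Export the credentials before running the installer so the placeholders can be filled in
export AWS_ACCESS_KEY_ID=AKIA...        # placeholder, use your own key
export AWS_SECRET_ACCESS_KEY=...        # placeholder, use your own secret
banzai pipeline up --workspace=<workspace>
```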
You can see how flexible and extensible the control plane is, while the CLI keeps the same simplicity: the configuration is plain yaml, which the CLI understands and manages in order to use the appropriate images (default or custom) as required by your environment.
Contact us to learn more about the customizations available in the commercial version of Pipeline, or if you have custom requirements that are not covered by the open source version.
A managed setup by Banzai Cloud
Banzai Cloud has been running a hosted and managed Pipeline environment. It is a totally free Pipeline control plane, managing Kubernetes clusters for over 2000 users across 5 clouds. The platform is mainly used for test and evaluation purposes, but we also know of several hundred users who launch their production clusters from it and use all the supported features. There is an active support community around it on Slack and GitHub.
Whether Heroku-like simplicity or deep yaml configurations are your thing, you can find both in Pipeline, the universal Kubernetes as a Service platform. Give us a try and let us know how it works!
About Banzai Cloud
Banzai Cloud is changing how private clouds are built: simplifying the development, deployment, and scaling of complex applications, and putting the power of Kubernetes and Cloud Native technologies in the hands of developers and enterprises, everywhere.
#multicloud #hybridcloud #BanzaiCloud