sidekick is a high-performance sidecar load balancer. By attaching a tiny load balancer as a sidecar to each client application process, you can eliminate the centralized load-balancer bottleneck and the need for DNS failover management. sidekick automatically avoids sending traffic to failed servers by checking their health via the readiness API and by watching for HTTP error responses.
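The routing behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not sidekick's actual code: a client-side balancer that keeps a set of backends, drops any backend whose readiness probe fails, and only routes to the remaining healthy ones.

```python
import random

class SidecarBalancer:
    """Illustrative sketch of health-aware client-side load balancing."""

    def __init__(self, endpoints):
        self.endpoints = endpoints
        self.healthy = set(endpoints)  # assume healthy until a probe fails

    def mark(self, endpoint, ok):
        # Called by the periodic health-check loop (or on HTTP 5xx responses).
        if ok:
            self.healthy.add(endpoint)
        else:
            self.healthy.discard(endpoint)

    def pick(self):
        # Route only to endpoints that passed their last readiness probe.
        if not self.healthy:
            raise RuntimeError("no healthy backends")
        return random.choice(sorted(self.healthy))

lb = SidecarBalancer([f"http://minio{i}:9000" for i in range(1, 5)])
lb.mark("http://minio2:9000", ok=False)   # probe failed -> removed from rotation
assert lb.pick() != "http://minio2:9000"  # traffic avoids the failed server
```

Because the balancer runs inside every client process, each client makes this decision locally, with no shared load-balancer hop.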
Download
Download Binary Releases for various platforms.
Usage
USAGE:
  sidekick [FLAGS] ENDPOINTs...
  sidekick [FLAGS] ENDPOINT{1...N}

FLAGS:
  --address value, -a value           listening address for sidekick (default: ":8080")
  --health-path value, -p value       health check path
  --health-duration value, -d value   health check duration in seconds (default: 5)
  --insecure, -i                      disable TLS certificate verification
  --log, -l                           enable logging
  --trace, -t                         enable HTTP tracing
  --quiet                             disable console messages
  --json                              output sidekick logs and trace in json format
  --debug                             output verbose trace
  --help, -h                          show help
  --version, -v                       print the version
Examples
- Load balance across a web service using DNS provided IPs.
$ sidekick --health-path=/ready http://myapp.myorg.dom
- Load balance across 4 MinIO Servers ( http://minio1:9000 to http://minio4:9000 )
$ sidekick --health-path=/minio/health/ready --address :8000 http://minio{1...4}:9000
- Load balance across 16 MinIO Servers ( http://minio1:9000 to http://minio16:9000 )
$ sidekick --health-path=/minio/health/ready http://minio{1...16}:9000
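The `{1...N}` ellipsis notation in the examples above expands into a full endpoint list. A small Python sketch of the assumed expansion semantics (hypothetical helper, not sidekick's implementation):

```python
import re

def expand(pattern):
    """Expand one {lo...hi} ellipsis range into a list of endpoints."""
    m = re.search(r"\{(\d+)\.\.\.(\d+)\}", pattern)
    if not m:
        return [pattern]  # no ellipsis: the pattern is a single endpoint
    lo, hi = int(m.group(1)), int(m.group(2))
    return [pattern[:m.start()] + str(i) + pattern[m.end():]
            for i in range(lo, hi + 1)]

print(expand("http://minio{1...4}:9000"))
# ['http://minio1:9000', 'http://minio2:9000', 'http://minio3:9000', 'http://minio4:9000']
```

So `http://minio{1...16}:9000` is shorthand for sixteen endpoints, `http://minio1:9000` through `http://minio16:9000`.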
Real-world Example with spark-orchestrator
sidekick runs as a sidecar to both the Spark driver and the executors. To begin with, install spark-operator and MinIO on your Kubernetes cluster.
(optional) Create a Kubernetes namespace spark-operator:
kubectl create ns spark-operator
Configure spark-orchestrator
We shall be using the Spark operator maintained by GCP at https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install spark-operator incubator/sparkoperator --namespace spark-operator --set sparkJobNamespace=spark-operator --set enableWebhook=true
Install MinIO
helm install minio-distributed stable/minio --namespace spark-operator --set accessKey=minio,secretKey=minio123,persistence.enabled=false,mode=distributed
NOTE: persistence is disabled here for testing; make sure you use persistence with PVs for production workloads. For more details, read our Helm documentation.
Once minio-distributed is up and running, configure mc and upload some data. We shall use mybucket as our bucket name.
Port-forward to access minio-cluster locally.
kubectl port-forward pod/minio-distributed-0 9000
Create a bucket named mybucket and upload some text data for the Spark word count sample.
mc config host add minio-distributed http://localhost:9000 minio minio123
mc mb minio-distributed/mybucket
mc cp /etc/hosts minio-distributed/mybucket/mydata/{1..4}.txt
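The `{1..4}` in the mc cp destination is bash brace expansion, performed by the shell before mc runs. Assuming bash semantics, a quick sketch of the object names it produces under mydata/:

```python
# Emulate bash's {1..4} brace expansion for the mc cp destination above.
keys = [f"mybucket/mydata/{i}.txt" for i in range(1, 5)]
print(keys)
# ['mybucket/mydata/1.txt', 'mybucket/mydata/2.txt', 'mybucket/mydata/3.txt', 'mybucket/mydata/4.txt']
```

These four small text objects are the input the word count job reads from `mydata/`.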
Run the spark job
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: spark-minio-app
  namespace: spark-operator
spec:
  sparkConf:
    spark.kubernetes.allocation.batch.size: "50"
  hadoopConf:
    "fs.s3a.endpoint": "http://127.0.0.1:9000"
    "fs.s3a.access.key": "minio"
    "fs.s3a.secret.key": "minio123"
    "fs.s3a.path.style.access": "true"
    "fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem"
  type: Scala
  sparkVersion: 2.4.5
  mode: cluster
  image: minio/spark:v2.4.5-hadoop-3.1
  imagePullPolicy: Always
  restartPolicy:
    type: OnFailure
    onFailureRetries: 3
    onFailureRetryInterval: 10
    onSubmissionFailureRetries: 5
    onSubmissionFailureRetryInterval: 20
  mainClass: org.apache.spark.examples.JavaWordCount
  mainApplicationFile: "local:///opt/spark/examples/target/original-spark-examples_2.11-2.4.6-SNAPSHOT.jar"
  arguments:
  - "s3a://mytestbucket/mydata"
  driver:
    cores: 1
    coreLimit: "1000m"
    memory: "512m"
    labels:
      version: 2.4.5
    sidecars:
    - name: minio-lb
      image: "minio/sidekick:v0.1.4"
      imagePullPolicy: Always
      args: ["--health-path", "/minio/health/ready", "--address", ":9000", "http://minio-distributed-{0...3}.minio-distributed-svc.spark-operator.svc.cluster.local:9000"]
      ports:
        - containerPort: 9000
  executor:
    cores: 1
    instances: 4
    memory: "512m"
    labels:
      version: 2.4.5
    sidecars:
    - name: minio-lb
      image: "minio/sidekick:v0.1.4"
      imagePullPolicy: Always
      args: ["--health-path", "/minio/health/ready", "--address", ":9000", "http://minio-distributed-{0...3}.minio-distributed-svc.spark-operator.svc.cluster.local:9000"]
      ports:
        - containerPort: 9000
kubectl create -f spark-job.yaml
kubectl logs -f --namespace spark-operator spark-minio-app-driver spark-kubernetes-driver
Roadmap
- S3 Cache: Use an S3-compatible object store for shared cache storage