Deploy machine learning models in production
Cortex is an open source platform for deploying machine learning models as production web services.
Key features
- Multi framework: Cortex supports TensorFlow, PyTorch, scikit-learn, XGBoost, and more.
- Autoscaling: Cortex automatically scales APIs to handle production workloads.
- CPU / GPU support: Cortex can run inference on CPU or GPU infrastructure.
- Spot instances: Cortex supports EC2 spot instances.
- Rolling updates: Cortex updates deployed APIs without any downtime.
- Log streaming: Cortex streams logs from deployed models to your CLI.
- Prediction monitoring: Cortex monitors network metrics and tracks predictions.
- Minimal configuration: Cortex deployments are defined in a single `cortex.yaml` file.
Spinning up a cluster
Cortex is designed to be self-hosted on any AWS account. You can spin up a cluster with a single command:
```bash
# install the CLI on your machine
$ bash -c "$(curl -sS https://raw.githubusercontent.com/cortexlabs/cortex/0.15/get-cli.sh)"

# provision infrastructure on AWS and spin up a cluster
$ cortex cluster up

aws region: us-west-2
aws instance type: g4dn.xlarge
spot instances: yes
min instances: 0
max instances: 5

aws resource                               cost per hour
1 eks cluster                              $0.10
0 - 5 g4dn.xlarge instances for your apis  $0.1578 - $0.526 each (varies based on spot price)
0 - 5 20gb ebs volumes for your apis       $0.003 each
1 t3.medium instance for the operator      $0.0416
1 20gb ebs volume for the operator         $0.003
2 elastic load balancers                   $0.025 each

your cluster will cost $0.19 - $2.84 per hour based on the cluster size and spot instance availability

○ spinning up your cluster ...

your cluster is ready!
```
Deploying a model
Implement your predictor
```python
# predictor.py
class PythonPredictor:
    def __init__(self, config):
        # download_model is your own code for fetching the trained model
        self.model = download_model()

    def predict(self, payload):
        return self.model.predict(payload["text"])
```
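In the snippet above, `download_model` stands in for whatever code fetches your trained model. As an illustration, here is a minimal sketch of a predictor that pulls a pickled scikit-learn model from S3 when the API starts; the bucket name, key, and use of `boto3` are assumptions for the example, not part of Cortex's API:

```python
# predictor.py, hypothetical sketch; bucket and key names are placeholders
import pickle

import boto3


class PythonPredictor:
    def __init__(self, config):
        # fetch the serialized model from S3 once, when the API replica starts
        s3 = boto3.client("s3")
        s3.download_file("my-models-bucket", "sentiment/model.pkl", "/tmp/model.pkl")
        with open("/tmp/model.pkl", "rb") as f:
            self.model = pickle.load(f)

    def predict(self, payload):
        # payload is the parsed JSON request body
        return self.model.predict([payload["text"]])[0]
```

Because `__init__` runs once per replica, the download cost is paid at startup rather than on every request.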
Configure your deployment
```yaml
# cortex.yaml
- name: sentiment-classifier
  predictor:
    type: python
    path: predictor.py
  tracker:
    model_type: classification
  compute:
    gpu: 1
    mem: 4G
```
Deploy to AWS
```bash
$ cortex deploy

creating sentiment-classifier
```
Serve real-time predictions
```bash
$ curl http://***.amazonaws.com/sentiment-classifier \
    -X POST -H "Content-Type: application/json" \
    -d '{"text": "the movie was amazing!"}'

positive
```
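curl works for smoke testing, but applications will usually call the endpoint from code. Below is a minimal client sketch using Python's `requests` library; the URL is a placeholder for the endpoint printed by `cortex get`:

```python
# hypothetical client sketch; replace the URL with your API's endpoint
import requests

endpoint = "http://***.amazonaws.com/sentiment-classifier"  # from `cortex get`
response = requests.post(
    endpoint,
    json={"text": "the movie was amazing!"},  # matches the predictor's payload
    timeout=10,
)
response.raise_for_status()
print(response.text)  # e.g. "positive"
```

Passing the body with `json=` serializes it and sets the `Content-Type: application/json` header automatically, matching the curl invocation above.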
Monitor your deployment
```bash
$ cortex get sentiment-classifier --watch

status   up-to-date   requested   last update   avg request   2XX
live     1            1           8s            24ms          12

class      count
positive   8
negative   4
```
What is Cortex similar to?
Cortex is an open source alternative to serving models with SageMaker, or to building your own model deployment platform on top of AWS services like Elastic Kubernetes Service (EKS), Elastic Container Service (ECS), Lambda, Fargate, and Elastic Compute Cloud (EC2), and open source projects like Docker, Kubernetes, and TensorFlow Serving.
How does Cortex work?
The CLI sends configuration and code to the cluster every time you run `cortex deploy`. Each model is loaded into a Docker container, along with any Python packages and request handling code. The model is exposed as a web service using Elastic Load Balancing (ELB), TensorFlow Serving, and ONNX Runtime. The containers are orchestrated on Elastic Kubernetes Service (EKS) while logs and metrics are streamed to CloudWatch.
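To make that request path concrete, the sketch below shows the general shape of a model-behind-HTTP service using Flask. This is a conceptual illustration only, not Cortex's actual serving code; Cortex builds and manages the equivalent containerized service for you, and the toy `predict_sentiment` function stands in for a real model:

```python
# conceptual sketch only; Cortex generates and runs the real service for you
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict_sentiment(text):
    # stand-in for a real model; in Cortex, your Predictor's predict() runs here
    return "positive" if "amazing" in text else "negative"


@app.route("/sentiment-classifier", methods=["POST"])
def handle():
    payload = request.get_json()  # parsed JSON request body
    return jsonify(predict_sentiment(payload["text"]))


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```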
Examples of Cortex deployments
- Sentiment analysis: deploy a BERT model for sentiment analysis.
- Image classification: deploy an Inception model to classify images.
- Search completion: deploy Facebook's RoBERTa model to complete search terms.
- Text generation: deploy Hugging Face's DistilGPT2 model to generate text.
- Iris classification: deploy a scikit-learn model to classify iris flowers.