Bridge the gap between traditional and deep learning with one line of code
Jul 7 · 7 min read
Motivation
Since the most recent resurgence of deep learning in 2012, a host of new ML libraries and frameworks has been created. The ones that have stood the test of time (PyTorch, TensorFlow, ONNX, etc.) are backed by massive corporations, and likely aren’t going away anytime soon.
This also presents a problem, however: the deep learning community has diverged from popular traditional ML libraries like scikit-learn, XGBoost, and LightGBM. When it comes time for a company to bring multiple models with different software and hardware assumptions into production, things get…hairy.
- How do you keep ML inference code DRY when some models are tensor-based and others are vector-based?
- How do you keep the inference runtime efficiency of your traditional models competitive, as GPU-based neural networks start to run circles around them?
In Search of A Uniform Model Serving Interface
I know, I know. Using microservices in Kubernetes can solve the design pattern issue to an extent by keeping things decoupled…if that’s even what you want?
But I think that really just ignores the problem. What if you want to seamlessly deploy either an XGBoost regressor or a fully-connected DNN as your service’s main output? Sure, you could hot-swap the hardware your service launches onto. How about the code?
Are you going to ram in a dressed-up version of an if-else switch to use one software framework vs the other, depending on the model type?
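To make that anti-pattern concrete, here is a hypothetical sketch of what such dispatch code tends to look like. The serving function and the branching logic are placeholders of my own for illustration, not code from any particular library:

```python
import numpy as np

def predict(model, features: np.ndarray) -> np.ndarray:
    """Dressed-up if-else switch: the serving path branches on the model's framework."""
    try:
        import xgboost
        if isinstance(model, (xgboost.XGBRegressor, xgboost.XGBClassifier)):
            # Vector-based, CPU-bound path.
            return model.predict(features)
    except ImportError:
        pass
    try:
        import torch
        if isinstance(model, torch.nn.Module):
            # Tensor-based path, possibly on a GPU -- entirely different data handling.
            device = next(model.parameters()).device
            with torch.no_grad():
                x = torch.from_numpy(features).float().to(device)
                return model(x).cpu().numpy()
    except ImportError:
        pass
    raise TypeError(f"No serving path for model type {type(model).__name__}")
```

Every new model family means another branch, another set of conversion rules, and another hardware assumption baked into the serving layer.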
Isn’t XGBoost/LightGBM Fast Enough?
Well, for a lot of use cases, it is. However, there’s still a huge gap between problems that require neural nets and problems that can be solved well with more traditional models. For those traditional models, don’t you still want the latest and greatest computational frameworks powering your predictions? That would let you scale a model up further before you need to resort to scaling it out via redundant instances.
Enter Hummingbird
Microsoft Research has introduced Hummingbird to bridge this gap between CPU-oriented models and tensor-oriented models. The library simply takes any of our already-trained traditional models and returns a version of that model built on tensor computations. Hummingbird aims to solve two core concerns with current ML applications:
- Traditional and deep learning software libraries have different abstractions of their basic computational unit (vector vs tensor).
- As a result of this difference, traditional ML libraries do not receive the same performance gains as hardware accelerators (read: GPUs) improve.
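Concretely, the conversion is a single call to hummingbird.ml.convert, which hands back a tensor-based container wrapping the original model. Here is a minimal sketch, assuming an already-trained scikit-learn model named skl_model, test data X_test, and the PyTorch backend (check the Hummingbird docs for the exact API of your version):

```python
from hummingbird.ml import convert

# skl_model: any already-trained traditional model (scikit-learn, XGBoost,
# LightGBM, ...). 'pytorch' selects the tensor backend to compile it onto.
hb_model = convert(skl_model, 'pytorch')

# The returned container keeps the familiar predict() interface, but every
# prediction now runs as tensor computations on the chosen backend.
predictions = hb_model.predict(X_test)
```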
With Hummingbird, your ML pipelines will start to look cleaner. You’ll know that, regardless of the algorithm, you end up with a model that makes its predictions via tensor computations. Not only that, those tensor computations will run on the same deep learning framework your organization has likely already pledged allegiance to.
All of this from one function call. Not a bad deal in my book!
Let’s see it in action.
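Below is a minimal end-to-end sketch under a few assumptions: scikit-learn and the hummingbird-ml package are installed, the toy data and variable names are placeholders of my own, and the GPU step is optional. It trains a random forest the traditional way, converts it with one call, and checks that the tensor-based version agrees with the original:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# 1. Train a traditional, vector-based model on some toy data.
X = np.random.rand(10_000, 28).astype(np.float32)
y = np.random.randint(2, size=10_000)
skl_model = RandomForestClassifier(n_estimators=100, max_depth=8)
skl_model.fit(X, y)

# 2. One function call: get back a tensor-based version of the same model.
hb_model = convert(skl_model, 'pytorch')

# 3. Optionally move it onto a GPU, just like any other PyTorch model.
# hb_model.to('cuda')

# 4. Predict through the same familiar interface and compare.
skl_preds = skl_model.predict(X)
hb_preds = hb_model.predict(X)
print("Agreement with the original model:", (skl_preds == hb_preds).mean())
```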