Supercharge Your Shallow ML Models With Hummingbird


Bridge the gap between traditional and deep learning with one line of code


Photo by Levi Jones on Unsplash

Motivation

Since the most recent resurgence of deep learning in 2012, a lion’s share of new ML libraries and frameworks have been created. The ones that have stood the test of time (PyTorch, TensorFlow, ONNX, etc.) are backed by massive corporations and likely aren’t going away anytime soon.

This also presents a problem, however, as the deep learning community has diverged from popular traditional ML software libraries like scikit-learn, XGBoost, and LightGBM. When it comes time for companies to bring multiple models with different software and hardware assumptions into production, things get…hairy.

  • How do you keep ML inference code DRY when some models are tensor-based and others are vector-based?
  • How do you keep the inference runtime efficiency of your traditional models competitive, as GPU-based neural networks start to run circles around them?

In Search of A Uniform Model Serving Interface

I know, I know. Using microservices in Kubernetes can solve the design-pattern issue to an extent by keeping things decoupled…if that’s even what you want?

But I think that really just ignores the problem. What if you want to seamlessly deploy either an XGBoost regressor or a fully-connected DNN as your service’s main output? Sure, you could hot-swap the hardware your service launches onto. How about the code?

Are you going to ram in a dressed-up version of an if-else switch to use one software framework vs the other, depending on the model type?

Isn’t XGBoost/LightGBM Fast Enough?

Well, for a lot of use cases, it is. However, there’s still a huge gap between problems requiring neural nets and problems that can be sufficiently solved with more traditional models. For the more traditional models, don’t you still want to be able to use the latest and greatest computational frameworks to power your model’s predictions? This would allow you to scale your model up more before you need to resort to scaling it out via redundant instances.

Enter Hummingbird

Microsoft Research has introduced Hummingbird to bridge this gap between CPU-oriented traditional models and tensor-oriented models. The library simply takes any of our already-trained traditional models and returns a version of that model built on tensor computations. Hummingbird aims to solve two core concerns with current ML applications:

  1. Traditional and deep learning software libraries have different abstractions of their basic computational unit (vector vs tensor).
  2. As a result of this difference, traditional ML libraries do not receive the same performance gains as hardware accelerators (read: GPUs) improve.
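To make that concrete, here is a minimal sketch of the conversion step, assuming a scikit-learn RandomForestClassifier and Hummingbird’s convert API (names and arguments may differ slightly across library versions):

```python
# Minimal sketch: train a traditional model as usual, then ask Hummingbird
# for a tensor-based (PyTorch) version of it.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

# Toy binary-classification data, for illustration only.
X = np.random.rand(10000, 20).astype(np.float32)
y = np.random.randint(2, size=10000)

skl_model = RandomForestClassifier(n_estimators=50, max_depth=8)
skl_model.fit(X, y)

# One call: the returned model now computes its predictions as tensor ops.
hb_model = convert(skl_model, "pytorch")
preds = hb_model.predict(X)
```

The converted model keeps the familiar predict interface, so downstream serving code doesn’t need to care whether it’s talking to scikit-learn or PyTorch under the hood.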

With Hummingbird, your ML pipelines will start to look cleaner. You’ll know that, regardless of the algorithm, you end up with a model that makes its predictions via tensor computations. Not only that, these tensor computations will be run by the same deep learning framework of choice that your organization has likely already committed to.

All of this from one function call. Not a bad deal in my book!
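As a hedged illustration of that one-call workflow on accelerated hardware, the sketch below assumes a CUDA-capable GPU and the same convert API as above; it moves the converted model onto the GPU and checks that its outputs still agree with the original scikit-learn model:

```python
# Hedged sketch: push the converted model onto a GPU and verify that the
# tensor-based predictions match the original model's output.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from hummingbird.ml import convert

X = np.random.rand(10000, 20).astype(np.float32)
y = np.random.randint(2, size=10000)

skl_model = RandomForestClassifier(n_estimators=50, max_depth=8).fit(X, y)
hb_model = convert(skl_model, "pytorch")

hb_model.to("cuda")                 # assumes a CUDA-capable GPU is available
gpu_probs = hb_model.predict_proba(X)

# The converted model should agree with the original up to floating-point error.
np.testing.assert_allclose(skl_model.predict_proba(X), gpu_probs,
                           rtol=1e-4, atol=1e-4)
```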

Let’s see it in action.

