Cap'n Proto 0.8: Streaming flow control, HTTP-over-RPC, fibers, etc.

Get it now »

Today I’m releasing Cap’n Proto 0.8.

What’s new?

  • Multi-stream Flow Control
  • HTTP-over-Cap’n-Proto
  • Lots and lots of minor tweaks and fixes.

Multi-stream Flow Control

It is commonly believed, wrongly, that Cap’n Proto doesn’t support “streaming”, in the way that gRPC does. In fact, Cap’n Proto’s object-capability model and promise pipelining make it much more expressive than gRPC. In Cap’n Proto, “streaming” is just a pattern, not a built-in feature.

Streaming is accomplished by introducing a temporary RPC object as part of a call. Each streamed message becomes a call to the temporary object. Think of this like providing a callback function in an object-oriented language.

For instance, server -> client streaming (“returning multiple responses”) can look like this:

# NOT NEW: Server -> client streaming example.
interface MyInterface {
  streamingCall @0 (callback :Callback) -> ();

  interface Callback {
    sendChunk @0 (chunk :Data) -> ();
  }
}

Or for client -> server streaming, the server returns a callback:

# NOT NEW: Client -> Server streaming example.
interface MyInterface {
  streamingCall @0 () -> (callback :Callback);

  interface Callback {
    sendChunk @0 (chunk :Data) -> ();
  }
}

Note that the client -> server example relies on promise pipelining: When the client invokes streamingCall(), it does NOT have to wait for the server to respond before it starts making calls to the callback. Using promise pipelining (which has been a built-in feature of Cap'n Proto RPC since its first release in 2013), the client sends messages to the server that say: “Once my call to streamingCall() is finished, take the returned callback and call this on it.”
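
To make the pipelining concrete, here is a minimal C++ sketch of the client side for the schema above. The connection setup (capnp::EzRpcClient, the address, and the generated header name) is illustrative only; the point is that sendChunk() is dispatched before streamingCall() has returned.

#include <capnp/ez-rpc.h>
#include "my-interface.capnp.h"  // hypothetical name for the generated header

int main() {
  capnp::EzRpcClient client("localhost:5923");   // address is illustrative
  kj::WaitScope& waitScope = client.getWaitScope();
  MyInterface::Client cap = client.getMain<MyInterface>();

  // Start the call, but do not wait for the server's response...
  auto promise = cap.streamingCallRequest().send();

  // ...instead, pipeline on the capability it will eventually return.
  MyInterface::Callback::Client callback = promise.getCallback();

  // This call is sent immediately, addressed to the not-yet-resolved result
  // of streamingCall(); no extra round trip is required.
  auto chunkReq = callback.sendChunkRequest();
  chunkReq.setChunk(kj::StringPtr("first chunk").asBytes());
  chunkReq.send().wait(waitScope);
  return 0;
}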

Obviously, you can also combine the two examples to create bidirectional streams. You can also introduce “callback” objects that have multiple methods, methods that themselves return values (maybe even further streaming callbacks!), etc. You can send and receive multiple new RPC objects in a single call. Etc.

But there has been one problem that arises in the context of streaming specifically: flow control. Historically, if an app wanted to stream faster than the underlying network connection would allow, then it could end up queuing messages in memory. Worse, if other RPC calls were happening on the same connection concurrently, they could end up blocked behind these queued streaming calls.

In order to avoid such problems, apps needed to implement some sort of flow control strategy. An easy strategy was to wait for each sendChunk() call to return before starting the next call, but this would incur an unnecessary network round trip for each chunk. A better strategy was for apps to allow multiple concurrent calls, but only up to some limit before waiting for in-flight calls to return. For example, an app could limit itself to four in-flight stream calls at a time, or to 64kB worth of chunks.
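
As a rough sketch of what that app-level windowing might look like in C++ (reusing the hypothetical callback and wait scope from the sketch above), the four-in-flight strategy amounts to something like this:

#include <deque>
#include <kj/async.h>
#include "my-interface.capnp.h"  // hypothetical generated header, as above

// Pre-0.8, app-level flow control: keep at most four sendChunk() calls in
// flight, blocking on the oldest one whenever the window is full.
void streamWithManualWindow(MyInterface::Callback::Client callback,
                            kj::ArrayPtr<const kj::ArrayPtr<const kj::byte>> chunks,
                            kj::WaitScope& waitScope) {
  std::deque<kj::Promise<void>> inFlight;
  for (auto chunk : chunks) {
    if (inFlight.size() >= 4) {
      inFlight.front().wait(waitScope);   // window full: wait for the oldest call
      inFlight.pop_front();
    }
    auto req = callback.sendChunkRequest();
    req.setChunk(chunk);
    inFlight.push_back(req.send().ignoreResult());
  }
  while (!inFlight.empty()) {             // drain whatever is still outstanding
    inFlight.front().wait(waitScope);
    inFlight.pop_front();
  }
}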

This sort of worked, but there were two problems. First, this logic could get pretty complicated, distracting from the app’s business logic. Second, the “N-bytes-in-flight-at-a-time” strategy only works well if the value of N is close to the bandwidth-delay product (BDP) of the connection. If N was chosen too low, the connection would be under-utilized. If too high, it would increase queuing latency for all users of the connection.

Cap’n Proto 0.8 introduces a built-in feature to manage flow control. Now, you can declare your streaming calls like this:

interface MyInterface {
  streamingCall @0 (callback :Callback) -> ();

  interface Callback {
    # NEW: This streaming call features flow control!
    sendChunk @0 (chunk :Data) -> stream;
    done @1 ();
  }
}

Methods declared with -> stream behave like methods with empty return types (-> ()), but with special behavior when the call is sent over a network connection. Instead of waiting for the remote site to respond to the call, the Cap'n Proto client library will act as if the call has “returned” as soon as it thinks the app should send the next call. So, now the app can use a simple loop that calls sendChunk(), waits for it to “complete”, then sends the next chunk. Each call will appear to “return immediately” until such a time as Cap'n Proto thinks the connection is fully utilized, and then each call will block until space frees up.

When using streaming, it is important that apps be aware that error handling works differently. Since the client side may indicate completion of the call before the call has actually executed on the server, any exceptions thrown on the server side obviously cannot propagate to the client. Instead, we introduce a new rule: If a streaming call ends up throwing an exception, then all later method invocations on the same object (streaming or not) will also throw the same exception. You’ll notice that we added a done() method to the callback interface above. After completing all streaming calls, the caller must call done() to check for errors. If any previous streaming call failed, then done() will fail too.
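
Putting those two pieces together, the client-side loop becomes something like the following sketch (same hypothetical setup as the earlier examples; with sendChunk() declared -> stream, its send() resolves when the library is ready for the next call rather than when the server has executed it):

#include <kj/async.h>
#include "my-interface-streaming.capnp.h"  // hypothetical header for the new schema

// Client-side streaming loop relying on built-in flow control.
void streamAll(MyInterface::Callback::Client callback,
               kj::ArrayPtr<const kj::ArrayPtr<const kj::byte>> chunks,
               kj::WaitScope& waitScope) {
  for (auto chunk : chunks) {
    auto req = callback.sendChunkRequest();
    req.setChunk(chunk);
    // With `-> stream`, this appears to complete as soon as the library wants
    // the next call; it only blocks once the connection is fully utilized.
    req.send().wait(waitScope);
  }
  // done() is an ordinary call: if any earlier streaming call failed on the
  // server, waiting on done() throws that same exception.
  callback.doneRequest().send().wait(waitScope);
}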

Under the hood, Cap'n Proto currently implements flow control using a simple hack: it queries the send buffer size of the underlying network socket, and sets that as the “window size” for each stream. The operating system will typically increase the socket buffer as needed to match the TCP congestion window, and Cap'n Proto's streaming window size will increase to match. This is not a very good implementation for a number of reasons. The biggest problem is that it doesn't account for proxying: with Cap'n Proto it is common to pass objects through multiple nodes, which automatically arranges for calls to the object to be proxied through the middlemen. But, the TCP socket buffer size only approximates the BDP of the first hop. A better solution would measure the end-to-end BDP using an algorithm like BBR. Expect future versions of Cap'n Proto to improve on this.

Note that this new feature does not come with any change to the underlying RPC protocol! The flow control behavior is implemented entirely on the client side. The -> stream declaration in the schema is merely a hint to the client that it should use this behavior. Methods declared with -> stream are wire-compatible with methods declared with -> (). Currently, flow control is only implemented in the C++ library. RPC implementations in other languages will treat -> stream the same as -> () until they add explicit support for it. Apps in those languages will need to continue doing their own flow control in the meantime, as they did before this feature was added.

HTTP-over-Cap’n-Proto

Cap’n Proto 0.8 defines a protocol for tunnelling HTTP calls over Cap’n Proto RPC , along with an adapter library adapting it to the KJ HTTP API . Thus, programs written to send or receive HTTP requests using KJ HTTP can easily be adapted to communicate over Cap’n Proto RPC instead. It’s also easy to build a proxy that converts regular HTTP protocol into Cap’n Proto RPC and vice versa.

In principle, http-over-capnp can achieve similar advantages to HTTP/2: Multiple calls can multiplex over the same connection with arbitrary ordering. But, unlike HTTP/2, calls can be initiated in either direction, can be addressed to multiple virtual endpoints (without relying on URL-based routing), and of course can be multiplexed with non-HTTP Cap’n Proto traffic.

In practice, however, http-over-capnp is new, and should not be expected to perform as well as mature HTTP/2 implementations today. More work is needed.

We use http-over-capnp in Cloudflare Workers to communicate HTTP requests between components of the system, especially into and out of sandboxes. Using this protocol, instead of plain HTTP or HTTP/2, allows us to communicate routing and metadata out-of-band (rather than e.g. stuffing it into private headers). It also allows us to design component APIs using an object-capability model, which turns out to be an excellent choice when code needs to be securely sandboxed.

Today, our use of this protocol is fairly experimental, but we plan to use it more heavily as the code matures.

KJ improvements

KJ is the C++ toolkit library developed together with Cap'n Proto's C++ implementation. Ironically, most of the development in the Cap'n Proto repo these days is actually improvements to KJ, in part because it is used heavily in the implementation of Cloudflare Workers.

  • The KJ Promise API now supports fibers. Fibers allow you to execute code in a synchronous style within a thread driven by an asynchronous event loop. The synchronous code runs on an alternate call stack. The code can synchronously wait on a promise, at which point the thread switches back to the main stack and runs the event loop. We generally recommend that new code be written in asynchronous style rather than using fibers, but fibers can be useful in cases where you want to call a synchronous library, and then perform asynchronous tasks in callbacks from said library. See the pull request for more details.
  • New API kj::Executor can be used to communicate directly between event loops on different threads. You can use it to execute an arbitrary lambda on a different thread's event loop. Previously, it was necessary to use some OS construct like a pipe, signal, or eventfd to wake up the receiving thread. (A minimal sketch follows this list.)
  • KJ’s mutex API now supports conditional waits, meaning you can unlock a mutex and sleep until such a time as a given lambda function, applied to the mutex’s protected state, evaluates to true.
  • The KJ HTTP library has continued to be developed actively for its use in Cloudflare Workers. This library now handles millions of requests per second worldwide, both as a client and as a server (since most Workers are proxies), for a wide variety of web sites big and small.
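
As a minimal sketch of the kj::Executor pattern mentioned above (the worker thread, the arithmetic, and the polling loop are purely illustrative; the only assumption is that the main thread runs a KJ event loop, here set up with kj::setupAsyncIo()):

#include <atomic>
#include <kj/async-io.h>
#include <kj/debug.h>
#include <kj/thread.h>

int main() {
  // Event loop for the main thread; the Executor we grab here is bound to it.
  auto io = kj::setupAsyncIo();
  const kj::Executor& mainExecutor = kj::getCurrentThreadExecutor();

  std::atomic<bool> workerDone{false};
  kj::Thread worker([&]() {
    // From another thread: run a lambda on the main thread's event loop.
    // executeSync() blocks this thread until the lambda has run over there.
    int answer = mainExecutor.executeSync([]() { return 6 * 7; });
    KJ_ASSERT(answer == 42);
    workerDone = true;
  });

  // Keep the main event loop turning so the cross-thread request can run.
  // (A real program would be driving its own loop here anyway, e.g. serving RPCs.)
  while (!workerDone) {
    io.provider->getTimer().afterDelay(10 * kj::MILLISECONDS).wait(io.waitScope);
  }
  return 0;
}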

Towards 1.0

Cap’n Proto has now been around for seven years, with many huge production users (such as Cloudflare). But, we’re still on an 0.x release? What gives?

Well, to be honest, there are still a lot of missing features that I feel are critical to Cap'n Proto's vision, the most obvious one being three-party handoff. But, so far I just haven't had a real production need to implement those features. Clearly, I should stop waiting for perfection.

Still, there are a couple smaller things I want to do for an upcoming 1.0 release:

  • AsyncOutputStream

I aim to get these done sometime this summer…

