PacketShader - GPU-accelerated Software Router


A GPU-accelerated Software Router

New: The I/O engine is now available!

We have partially released the source code used in this work. You can find the user-level packet I/O engine for Intel 82598/82599 NICs here. We do not have a definite release plan for the other parts of the PacketShader code that are not available on the web as of today.

What is PacketShader?

PacketShader is a high-performance PC-based software router platform that accelerates the core packet processing with Graphics Processing Units (GPUs). Based on our observation that the CPU is the typical performance bottleneck in high-speed software routers, we scale the computing power in a cost-effective manner with massively-parallel GPUs. PacketShader offloads computation- and memory-intensive router applications to GPUs while optimizing the packet reception and transmission path on Linux. With extensive batch processing and pipelining, PacketShader achieves an unprecedented IP packet forwarding performance of 40 Gbps on an eight-core Nehalem server, even for 64-byte packets.
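As a rough illustration of this batched offload model (a sketch only, with hypothetical function names such as receive_batch and transmit_batch, not the actual PacketShader source), the host-side control flow gathers a batch of packets, copies the relevant data to the GPU, launches a kernel, and copies the results back before transmission:

/* Illustrative sketch of a batched CPU-GPU offload loop. Function names
 * (receive_batch, transmit_batch) and the per-packet work are hypothetical
 * placeholders, not the actual PacketShader source. */
#include <cuda_runtime.h>
#include <stdint.h>

#define BATCH 4096                          /* packets handled per iteration */

__global__ void process_batch(const uint32_t *in, uint32_t *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i];                     /* placeholder for per-packet work */
}

/* Placeholder stubs standing in for the packet I/O engine calls. */
static int  receive_batch(uint32_t *buf, int max)      { (void)buf; (void)max; return 0; }
static void transmit_batch(const uint32_t *buf, int n) { (void)buf; (void)n; }

void forwarding_loop(void)
{
    uint32_t *h_in, *h_out, *d_in, *d_out;
    cudaHostAlloc((void **)&h_in,  BATCH * sizeof(uint32_t), cudaHostAllocDefault);
    cudaHostAlloc((void **)&h_out, BATCH * sizeof(uint32_t), cudaHostAllocDefault);
    cudaMalloc((void **)&d_in,  BATCH * sizeof(uint32_t));
    cudaMalloc((void **)&d_out, BATCH * sizeof(uint32_t));

    for (;;) {
        int n = receive_batch(h_in, BATCH);            /* 1. gather a batch of packets */
        if (n == 0)
            continue;
        cudaMemcpy(d_in, h_in, n * sizeof(uint32_t), cudaMemcpyHostToDevice);
        process_batch<<<(n + 255) / 256, 256>>>(d_in, d_out, n);   /* 2. GPU work */
        cudaMemcpy(h_out, d_out, n * sizeof(uint32_t), cudaMemcpyDeviceToHost);
        transmit_batch(h_out, n);                      /* 3. transmit the results */
    }
}

Batching at this granularity is what amortizes the per-packet cost of PCI-e transfers and kernel launches; the actual system also pipelines these stages so that the CPU and GPU work concurrently.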

Why GPU?

As you all know, the GPU is the central chip on your graphics card. GPUs expose a high level of processing parallelism by supporting tens of thousands of hardware threads, along with ample memory bandwidth. Beyond fast graphics rendering, recent GPUs are widely used for high-performance parallel applications whose workloads require enormous computation cycles and/or memory bandwidth. The data-parallel execution model of GPUs fits nicely with the inherent parallelism in most router applications.

Packet I/O Optimization on Linux

We implemented a high-performance packet I/O engine for user-level applications. This project is maintained separately, and its source code is publicly available now.

The Linux network stack available today is not optimized for high-performance IP packet processing, say, for multi-10G networks. For high-speed software routers and better utilization of GPUs, we optimize the packet I/O path in Linux with the following approaches.

  • Huge packet buffer: Instead of allocating metadata (sk_buff or skb) and packet data for each packet reception, PacketShader pre-allocates two circular buffers that can hold a large array of metadata and packet data. This greatly reduces the memory allocation/deallocation overhead for high-speed packet reception.
  • Batch processing: PacketShader processes a group of packets at a time in the hardware, the device driver, and even in the application layer. This amortizes the per-packet processing overhead (see the sketch after this list).
  • NUMA-aware data placement: PacketShader minimizes packet movement between local and remote memory in a Non-Uniform Memory Access (NUMA) system. Packets received by a NIC are processed by the CPU and memory local to that NIC.
  • Multi-core CPU scalability: PacketShader takes advantage of receive-side scaling (RSS) to eliminate lock contention in accessing the NIC queues. It also removes the false-sharing problem in the CPU cache by aligning the start address of each RX queue to the cacheline boundary. Finally, it removes the global NIC counter for statistics. These optimizations allow linear scalability on multi-core router systems.
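As a rough sketch of the first two ideas (the structures and the nic_rx_batch() call below are illustrative placeholders, not the released I/O engine's API), the application polls a pre-allocated ring of packet slots and hands an entire batch to the processing code at once:

/* Illustrative sketch of a pre-allocated packet ring with batched reception.
 * Structures and the nic_rx_batch() call are hypothetical, not the released
 * I/O engine's API. */
#include <stdint.h>

#define RING_SLOTS  4096
#define SLOT_BYTES  2048            /* fixed-size slot, large enough for one frame */

struct pkt_meta {                   /* compact per-packet metadata, unlike a full skb */
    uint16_t len;
    uint16_t queue;
};

struct pkt_ring {                   /* both arrays are allocated once at startup */
    struct pkt_meta meta[RING_SLOTS];
    uint8_t         data[RING_SLOTS][SLOT_BYTES];
    uint32_t        head;
};

/* Placeholder for the driver call that copies up to 'max' received frames into
 * the ring starting at 'head' and returns how many arrived. */
static int nic_rx_batch(struct pkt_ring *r, uint32_t head, int max)
{
    (void)r; (void)head; (void)max;
    return 0;
}

void rx_loop(struct pkt_ring *r)
{
    for (;;) {
        int n = nic_rx_batch(r, r->head, 64);       /* fetch up to 64 packets at once */
        for (int i = 0; i < n; i++) {
            uint32_t idx = (r->head + i) % RING_SLOTS;
            (void)r->meta[idx].len;                 /* per-packet work: no alloc/free */
        }
        r->head = (r->head + n) % RING_SLOTS;       /* recycle slots in place */
    }
}

Because the metadata and packet data live in two fixed circular buffers, reception involves no per-packet sk_buff allocation, which is where much of the stock stack's overhead comes from.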

With our packet I/O optimization, we are able to run packet processing at the user level even for multi-10G router workloads.

Performance

Figure 1 shows the performance of our optimized packet I/O engine. RX+TX bars represent the case of no-op forwarding, which forwards packets from one port to another without further processing.


Figure 1. Packet I/O throughput over various packet sizes

We have implemented four "router applications" based on the packet I/O engine: IPv4 forwarding, IPv6 forwarding, OpenFlow switch, and IPsec tunneling. The four graphs below compare the throughput of the CPU-only implementation and the GPU-accelerated implementation. The performance results clearly show the effectiveness of GPUs for packet processing.


Figure 2. IPv4 forwarding


Figure 3. IPv6 forwarding

For IP forwarding, we offload longest prefix matching to the GPU. Forwarding table lookup is highly memory-intensive, and the GPU can accelerate it with both its latency-hiding capability and its memory bandwidth.
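To illustrate how such a lookup maps onto the GPU, here is a minimal sketch assuming a flattened first-level table indexed by the top 24 bits of the destination address (a DIR-24-8-style layout used for illustration only; the paper describes the actual data structures):

/* Minimal sketch: one GPU thread performs the forwarding-table lookup for one
 * destination address, assuming a flattened 2^24-entry table indexed by the
 * top 24 bits. Prefixes longer than 24 bits would need a second-level table,
 * omitted here. */
#include <stdint.h>

__global__ void lpm_lookup(const uint32_t *dst_addrs,   /* one address per packet   */
                           const uint16_t *tbl24,       /* 2^24 next-hop entries    */
                           uint16_t *next_hops,         /* lookup result per packet */
                           int n_packets)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_packets)
        return;

    /* Each thread issues an independent memory access; with thousands of
     * threads in flight, the GPU hides the latency of these scattered reads. */
    next_hops[i] = tbl24[dst_addrs[i] >> 8];
}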


Figure 4. OpenFlow switch


Figure 5. IPsec tunneling (AES-CTR and SHA1)

OpenFlow and IPsec represent compute-intensive software-router workloads in our work. We have confirmed that compute-intensive applications can benefit from the GPU as well as memory-intensive ones.
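As a rough illustration of why such workloads parallelize well (a sketch only; encrypt_packet below is a hypothetical placeholder rather than a real cipher implementation), the natural mapping is one GPU thread per packet, since each packet can be encrypted and authenticated independently:

/* Sketch: one thread per packet for a compute-heavy task such as encryption.
 * encrypt_packet() stands in for the real per-packet AES-CTR/SHA1 work and is
 * a hypothetical placeholder, not an actual library function. */
#include <stdint.h>

__device__ void encrypt_packet(uint8_t *pkt, int len, const uint8_t *key)
{
    for (int b = 0; b < len; b++)       /* placeholder transform only */
        pkt[b] ^= key[b % 16];
}

__global__ void ipsec_batch(uint8_t *pkt_data, const int *offsets, const int *lens,
                            const uint8_t *key, int n_packets)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n_packets)
        encrypt_packet(pkt_data + offsets[i], lens[i], key);   /* one packet per thread */
}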

Current Status and Bottleneck

Our prototype implementation uses two four-core Intel Nehalem CPUs (2.66 GHz), four dual-port 10GbE Intel NICs, and two NVIDIA GTX 480 cards. Since we use many PCI-e devices, our machine adopts two IOHs (formerly called the Northbridge). Interestingly, the performance of our system is limited by the dual-IOH capacity. Specifically, we see asymmetric performance between the host-to-device and device-to-host PCI-e throughputs (more detail in our SIGCOMM paper below). Due to this problem, our current system cannot exceed 40 Gbps even when neither the CPU nor the GPU is the bottleneck.

Press Coverage

Publications

People

Students: Sangjin Han and Keon Jang

Faculty: KyoungSoo Park and Sue Moon

We can be reached collectively via our mailing list: tengig at an.kaist.ac.kr.

