ExpressJS vs Actix-Web. It is exactly what you think
Feb 25 · 7 min read
The goal of this analysis is to try to understand what kind of gains a programmer should expect by using Rust and actix-web rather than Node and Express under typical use, without custom optimizations.
Summary
- We compare the performance, stability, and running cost of a simple microservice-like workload on Node/Express versus Actix. The setup is limited to 1 CPU core and uses a common development pattern for each.
- Actix provides an 85% running cost saving in heavily loaded environments, along with a smaller memory footprint and stronger runtime safety guarantees. It can also scale across CPU cores with minimal extra memory, which opens further saving opportunities.
- Rust’s type system and concurrency checks saved development time: the code worked reliably once it compiled. The Node solution required time to debug and fix issues at runtime, as well as explicit type validation.
- Express is a fast, minimalist, and the most popular Node.js web framework. Node.js is a JavaScript runtime built on Chrome’s V8 JavaScript engine. It is among the top 10 application servers used by high-traffic sites according to w3techs research, and its share is growing.
- Actix is a small, pragmatic, and extremely fast web framework for Rust. According to the TechEmpower Round 18 report, it is the fastest web application platform in 4 out of 6 categories.
Disclaimer: this research focuses purely on quality aspects and the cost of running the apps. It should not be taken as advice to rewrite from JavaScript to Rust, and it does not account for the cost of such a rewrite.
Test scenario and setup
Presume we have a microservice that allows its clients to search for tasks, where tasks may be assigned to workers. Our simple database model has just two tables, WORKER and TASK, and TASK has an assignee relation to WORKER:
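The original schema diagram is not reproduced here; a minimal sketch of the two tables, with assumed column names and types (summary and description come from the search scenarios below), expressed as a Rust constant the initialization code could execute:

// Assumed schema; the real one ships with the benchmark sources.
const SCHEMA: &str = "
CREATE TABLE workers (
    id   BIGSERIAL PRIMARY KEY,
    name VARCHAR NOT NULL
);
CREATE TABLE tasks (
    id          BIGSERIAL PRIMARY KEY,
    summary     VARCHAR NOT NULL,  -- indexed, searched with LIKE
    description TEXT,              -- the 'fat' field, ~3-5 KB of text
    assignee_id BIGINT REFERENCES workers (id)
);
CREATE INDEX tasks_summary_idx ON tasks (summary);
";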
Let’s compare how the application servers behave and check their resource consumption under moderate to high load. Let’s assume the database is not a bottleneck for now (we’ll leave that for a different post).
The Node.js app is built with the Express web framework and the knex SQL builder library; both are widely popular. Knex claims to use prepared SQL statements under the hood and ships an embedded connection pool, so it should be comparable to the Rust setup.
The actix-web service is based on the async-pg example. It uses deadpool-postgres as an async connection pool and tokio-pg-mapper for SQL type bindings. The main route handler looks similar to the Express one, except that type checks on parameters are performed based on the type definition:
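Below is a minimal sketch of such a handler, modeled on the async-pg example; the struct fields, SQL, and error handling are assumptions, not the benchmark's exact code:

use actix_web::{error, web, Error, HttpResponse};
use deadpool_postgres::Pool;
use serde::{Deserialize, Serialize};
use tokio_pg_mapper::FromTokioPostgresRow;
use tokio_pg_mapper_derive::PostgresMapper;

// Query-string parameters: actix + serde check the types on deserialization,
// so a request like ?limit=abc is rejected before the handler runs.
#[derive(Deserialize)]
pub struct GetTasksQuery {
    pub summary: Option<String>,
    pub full: Option<bool>, // when true, the "fat" description would be selected too
    pub limit: Option<i64>,
}

// Row-to-struct mapping is handled by tokio-pg-mapper.
#[derive(Serialize, PostgresMapper)]
#[pg_mapper(table = "tasks")]
pub struct Task {
    pub id: i64,
    pub summary: String,
    pub assignee_name: String,
}

pub async fn get_tasks(
    query: web::Query<GetTasksQuery>,
    db_pool: web::Data<Pool>,
) -> Result<HttpResponse, Error> {
    let client = db_pool.get().await.map_err(error::ErrorInternalServerError)?;
    let stmt = client
        .prepare(
            "SELECT t.id, t.summary, w.name AS assignee_name \
             FROM tasks t JOIN workers w ON w.id = t.assignee_id \
             WHERE t.summary LIKE $1 LIMIT $2",
        )
        .await
        .map_err(error::ErrorInternalServerError)?;
    let pattern = format!("%{}%", query.summary.as_deref().unwrap_or(""));
    let limit = query.limit.unwrap_or(10);
    let rows = client
        .query(&stmt, &[&pattern, &limit])
        .await
        .map_err(error::ErrorInternalServerError)?;
    let tasks = rows
        .iter()
        .map(Task::from_row_ref)
        .collect::<Result<Vec<_>, _>>()
        .map_err(error::ErrorInternalServerError)?;
    Ok(HttpResponse::Ok().json(tasks))
}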
Load test
Testing was performed on Ubuntu 18 on a Xeon E5-2660 with 40 cores. The database was initialized with 100,000 randomly generated tasks assigned to 1,000 workers.
Using wrk, we will load the server with a predefined request repeated over a number of concurrent connections. This suits our case well, as the server does not cache search results.
We’ll need to run multiple tests while measuring CPU, memory, latency, and requests per second, so automation pays off. Rust has a rich toolbox for writing CLI automation, including config, structopt, and resource monitoring crates, so it was quite easy to automate. The tests that follow can be executed via:
cargo run --bin benchit --release -- -m -t 30
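The actual benchit options aren't shown in this post; a hypothetical structopt declaration mirroring the invocation above (flag meanings are assumptions) could look like:

use structopt::StructOpt;

/// Hypothetical CLI options mirroring `benchit -m -t 30`.
#[derive(StructOpt)]
#[structopt(name = "benchit")]
struct Opt {
    /// Monitor CPU and memory of the server process while wrk runs
    #[structopt(short = "m")]
    monitor: bool,
    /// Duration of each wrk run, in seconds
    #[structopt(short = "t", default_value = "30")]
    time: u64,
}

fn main() {
    let opt = Opt::from_args();
    println!("monitor = {}, duration = {}s", opt.monitor, opt.time);
}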
1. Fetch top 10 tasks
The search returns the same first 10 records without the “fat” description field; each row is under 500 bytes. This is light work for the SQL server and should yield the maximum requests per second. The test runs the following command line under the hood:
wrk -c <concurrency> -d 30s "http://<host>:<port>/tasks"
To compare actix-web with Node.js, we will initially limit actix to 1 core (i.e. 1 worker), which will execute our web processing code. Deadpool will be configured with 15 connections and will handle database communication in parallel.
WORKERS=1 cargo run --release --bin actix-bench
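For reference, a minimal sketch of how a WORKERS variable like this can drive actix's worker count; the real benchmark wiring may differ, and the placeholder handler stands in for the one sketched earlier:

use actix_web::{web, App, HttpServer, Responder};

async fn get_tasks() -> impl Responder {
    "ok" // placeholder; see the route handler sketched above
}

#[actix_web::main] // actix-web 3+; earlier versions used #[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Worker-thread count from the WORKERS env var, defaulting to one
    // per CPU core (actix's own default). Requires the num_cpus crate.
    let workers = std::env::var("WORKERS")
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or_else(num_cpus::get);

    HttpServer::new(|| App::new().route("/tasks", web::get().to(get_tasks)))
        .workers(workers) // fixed set of worker threads, one event loop each
        .bind("0.0.0.0:8080")?
        .run()
        .await
}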
Node.js executes JavaScript in a single thread and uses multiple threads for asynchronous I/O. We will configure knex to use 5 to 15 connections in the pool, which should match the deadpool configuration:
cd node-bench; npm run start
Looking at the latency and requests-per-second charts across concurrency levels: even with a small load, actix provides at least 2 times faster responses than Node, and under heavy load it is 6x faster. Looking at database utilization, actix appears to be about 5 times more efficient. As for response times, Node.js at concurrency = 4 shows a similar delay (~4 ms) to actix at concurrency = 32, which is 8 times better. The memory footprint is even more favorable: on a long run of 1 million requests, memory for the actix service stayed around 26 MB, while Node.js grew to 104 MB. Hello, GC!
Conclusion: for loaded systems we will need 6 times fewer CPU cores with actix than with Node.js, which is an 85% cost saving.
Actix is super efficient when using multiple CPU cores: a fixed set of worker threads (workers) minimizes context switches. By default, actix starts as many worker threads as there are CPU cores.
Let’s see how well actix scales on CPU with 4 workers compared with 1 worker.
Actix setup:
WORKERS=4 PG_POOL_MAX_SIZE=30 cargo run --release --bin actix-bench
With 4 workers, actix provides about 3 times better response times than with 1 worker under heavy load, along with 3 times more responses per second. Actix scales well on CPU, keeping memory consumption at almost the same level regardless of the number of CPUs occupied. Node can also scale across CPUs by running a new process per CPU, but that duplicates memory usage for every new instance.
Actix can fully utilize cheap 2+ core cloud instances, while Node would require more expensive instances with more memory.
2. Search and fetch tasks with description
This search returns 10 matches on the indexed varchar summary via LIKE, while also fetching the ~3-5 KB TEXT description field. It will definitely put pressure on the SQL server.
wrk -c <concurrency> -d 30s "http://<host>:<port>/tasks?summary=wherever&full=true&limit=10"
The test will be heavy on the DB, hence we will run with 30 connections in the pool.
Quite a surprise: Node.js loads the SQL server at a level similar to actix while delivering ~7 times worse performance. This probably has something to do with knex.js and the way it handles prepared statements, though the goal of this post was to compare typical use. This leads to the final summary:
Actix-web and the Rust ecosystem are a good fit for developing efficient web services. Requiring ~6 times less CPU power and less memory, they allow a significant 75%-95% runtime cost saving with just a basic, unoptimized setup.
The goal of this post was to compare typical use, so it does not account for deep fine-tuning such as Node’s GC settings or using low-level SQL libraries. There are opportunities to optimize the Node.js solution and probably narrow the gap, though that would require significant engineering effort, and the same effort could equally be spent optimizing the Rust solution. In the next post I am going to use node-postgres and tokio-postgres directly and see if the Node code can get closer to actix-web.
Implementation notes
I spent about the same time on the JavaScript and Rust implementations. With JavaScript, most of the time went into fixing issues at runtime. For instance, there was a race condition bug: I forgot that JavaScript const does not mean immutable, and it turned out knex was mutating the shared query object across all concurrent requests:
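// Built once at module level: knex builders are mutable, so every request's
// .where()/.limit() calls piled up on this one shared query object.
// (A likely fix, as an assumption: build the query, or .clone() it, per request.)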
const tasks = knex.from('tasks').innerJoin('workers as assignee', 'assignee.id', 'tasks.assignee_id' );
The code appeared to work nicely in the local environment, yet it led to huge SQL queries under load. Debugging took some time, but the fix was simple.
I spent significantly less time fixing the Rust part, as it provided type safety guarantees for request and response formats out of the box, type validation for database queries, and concurrency safety.
It was quite surprising that the Rust and JavaScript code turned out to be of similar complexity; the implementations of the main database handlers, for instance, are comparable in size.
The Rust implementation takes more space for type definitions, though those also provide very useful runtime checks. For instance, the GetTasksQuery type validates that all supplied GET request query parameters match their required types, which in turn match the types required by the database query. The JavaScript route handler needed 2 explicit checks in the code to provide the same safety in just this one case.
One more thing to note: the Rust server handles data initialization itself, as tokio-postgres supports the PostgreSQL COPY ... FROM BINARY statement; this allows it to generate and load 100k records in 20 seconds. Descriptions are generated randomly with a Markov chain built from the tale of the Ring of Gyges, which emulates a regular text load for the search index.
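A minimal sketch of such a binary COPY loader using tokio-postgres's binary_copy module; the column list, types, and helper function are assumptions, and the actual generator code differs:

use futures::pin_mut;
use tokio_postgres::binary_copy::BinaryCopyInWriter;
use tokio_postgres::types::Type;
use tokio_postgres::{Client, Error};

// Bulk-load generated tasks via COPY ... FROM STDIN BINARY, which is far
// faster than row-by-row INSERTs.
async fn load_tasks(
    client: &Client,
    rows: &[(i64, String, String, i64)], // (id, summary, description, assignee_id)
) -> Result<u64, Error> {
    let sink = client
        .copy_in("COPY tasks (id, summary, description, assignee_id) FROM STDIN BINARY")
        .await?;
    let writer = BinaryCopyInWriter::new(
        sink,
        &[Type::INT8, Type::VARCHAR, Type::TEXT, Type::INT8],
    );
    pin_mut!(writer);
    for (id, summary, description, assignee_id) in rows {
        writer
            .as_mut()
            .write(&[id, summary, description, assignee_id])
            .await?;
    }
    writer.finish().await // returns the number of rows copied
}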
P.S. Some generated descriptions are really funny, like this one:
“no man can be imagined to be a great proof that a man would ever submit to such an iron nature that he would be mad if he were no longer present”
Thanks to Darin (https://github.com/dowwie/), Michael (https://github.com/bikeshedder), and Andrey (https://github.com/andre1sk) for advice on making this post cleaner.
If you have found any problem, please feel free to drop a comment or DM me on Twitter or Telegram @dunnock.