TCP server app performance

From: Honda Michio <micchie_at_sfc.wide.ad.jp>
Date: Sun, 12 Aug 2018 18:50:17 +0200
Hi,

I'm measuring TCP server app performance using my toy web server.
It just accepts TCP connections and responds with HTTP OK to the clients.
It monitors sockets using kqueue, and processes each ready descriptor using
a pair of read() and write(). (in more detail, it's
https://github.com/micchie/netmap/tree/paste/apps/phttpd)
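For reference, the event loop described above (accept on the listening socket, then a read()/write() pair per ready descriptor) looks roughly like the sketch below. This is a minimal illustration, not the phttpd code itself; the port, buffer size, and response body are made-up placeholders.

```c
/* Minimal kqueue echo-HTTP server sketch (illustrative, not phttpd).
 * Assumptions: port 8000, blocking sockets, one request per read(). */
#include <sys/types.h>
#include <sys/event.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>

static const char resp[] =
    "HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: keep-alive\r\n\r\n";

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(8000);               /* placeholder port */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    int one = 1;
    setsockopt(lfd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    if (bind(lfd, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
        listen(lfd, SOMAXCONN) < 0) {
        perror("bind/listen");
        return 1;
    }

    int kq = kqueue();
    struct kevent ev;
    EV_SET(&ev, lfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
    kevent(kq, &ev, 1, NULL, 0, NULL);

    struct kevent evs[64];
    char buf[4096];
    for (;;) {
        int n = kevent(kq, NULL, 0, evs, 64, NULL);
        for (int i = 0; i < n; i++) {
            int fd = (int)evs[i].ident;
            if (fd == lfd) {
                /* new connection: register it with the kqueue */
                int cfd = accept(lfd, NULL, NULL);
                if (cfd < 0)
                    continue;
                EV_SET(&ev, cfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
                kevent(kq, &ev, 1, NULL, 0, NULL);
            } else {
                /* ready client: one read()/write() pair per event */
                ssize_t r = read(fd, buf, sizeof(buf));
                if (r <= 0)
                    close(fd);      /* EOF or error; kqueue drops it */
                else
                    write(fd, resp, sizeof(resp) - 1);
            }
        }
    }
}
```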

Using 100 persistent TCP connections (the client sends 44 B HTTP GET and
the server responds with 151 B of HTTP OK) and a single CPU core, I only
get 152K requests per second, which is 2.5x slower than Linux running the
same app (except that it uses epoll instead of kqueue).
I cannot explain this gap myself. Does anybody have intuition about how
many requests per second FreeBSD should achieve with such a workload?
I tried disabling TCP delayed ack and changing interrupt rates, but no
significant difference was observed.

I use FreeBSD-CURRENT with GENERIC-NODEBUG (git commit hash: 3015145c3aa4b).
For hardware, the server has a Xeon Silver 4110 and an Intel X540 NIC (only
a single queue is active, as I test with a single CPU core). All the
offloads are disabled.

Cheers,
- Michio
Received on Sun Aug 12 2018 - 14:50:25 UTC
