On 8/12/18 9:50 AM, Honda Michio wrote:
> Hi,
>
> I'm measuring TCP server app performance using my toy web server.
> It just accepts TCP connections and responds with HTTP OK to the clients.
> It monitors sockets using kqueue, and processes each ready descriptor using
> a pair of read() and write(). (In more detail, it's
> https://github.com/micchie/netmap/tree/paste/apps/phttpd)
>
> Using 100 persistent TCP connections (the client sends a 44 B HTTP GET and
> the server responds with 151 B of HTTP OK) and a single CPU core, I only
> get 152K requests per second, which is 2.5x slower than Linux running the
> same app (except that it uses epoll instead of kqueue).
> I cannot explain this by myself. Does anybody have some intuition about how
> much FreeBSD should get with such workloads?
> I tried disabling TCP delayed ACK and changing interrupt rates, but no
> significant difference was observed.
>
> I use FreeBSD-CURRENT with GENERIC-NODEBUG (git commit hash: 3015145c3aa4b).
> For hardware, the server has a Xeon Silver 4110 and an Intel X540 NIC (with
> only a single queue active, as I test with a single CPU core). All the
> offloads are disabled.

I hope hw L3/L4 checksumming is still on?

Are your results similar to what you get with 100 netperf instances (the
same number as your test clients) doing TCP_RR on this setup, or wildly
different?

Regards,
Navdeep
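
(For concreteness, the event loop Michio describes, kqueue monitoring plus a
read()/write() pair per ready descriptor, might look roughly like the minimal
sketch below. This is illustrative only and not the actual phttpd code; the
port number, buffer size, and response body are placeholders.)

    /* Minimal sketch of a kqueue accept/read/write loop.  Illustrative
     * only; not the actual phttpd code. */
    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <err.h>
    #include <string.h>
    #include <unistd.h>

    static const char resp[] =
        "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK";

    int
    main(void)
    {
            struct sockaddr_in sin;
            struct kevent ev, evs[64];
            char buf[4096];
            int kq, lfd, fd, i, n;

            lfd = socket(AF_INET, SOCK_STREAM, 0);
            if (lfd < 0)
                    err(1, "socket");
            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_port = htons(8000);         /* placeholder port */
            sin.sin_addr.s_addr = htonl(INADDR_ANY);
            if (bind(lfd, (struct sockaddr *)&sin, sizeof(sin)) < 0)
                    err(1, "bind");
            if (listen(lfd, SOMAXCONN) < 0)
                    err(1, "listen");

            kq = kqueue();
            if (kq < 0)
                    err(1, "kqueue");
            EV_SET(&ev, lfd, EVFILT_READ, EV_ADD, 0, 0, NULL);
            if (kevent(kq, &ev, 1, NULL, 0, NULL) < 0)
                    err(1, "kevent");

            for (;;) {
                    n = kevent(kq, NULL, 0, evs, 64, NULL);
                    if (n < 0)
                            err(1, "kevent");
                    for (i = 0; i < n; i++) {
                            fd = (int)evs[i].ident;
                            if (fd == lfd) {
                                    /* New connection: register it. */
                                    fd = accept(lfd, NULL, NULL);
                                    if (fd < 0)
                                            continue;
                                    EV_SET(&ev, fd, EVFILT_READ,
                                        EV_ADD, 0, 0, NULL);
                                    kevent(kq, &ev, 1, NULL, 0, NULL);
                            } else if (read(fd, buf, sizeof(buf)) <= 0) {
                                    /* EOF or error; kqueue drops the fd
                                     * automatically on close. */
                                    close(fd);
                            } else {
                                    /* One response per request, as in
                                     * the test described above. */
                                    write(fd, resp, sizeof(resp) - 1);
                            }
                    }
            }
    }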
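
(A netperf comparison along the lines Navdeep suggests, 100 concurrent TCP_RR
instances with the same 44 B request and 151 B response sizes, could be run
roughly as below; the host name and test duration are placeholders.)

    # On the server:
    netserver

    # On the client: 100 parallel TCP_RR flows, 44 B request / 151 B response.
    for i in $(seq 1 100); do
            netperf -H server -t TCP_RR -l 30 -- -r 44,151 &
    done
    wait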