Re: mlx5(4) jumbo receive

From: Ben RUBSON <ben.rubson_at_gmail.com>
Date: Thu, 26 Apr 2018 08:43:01 +0200
On 26 Apr 2018, Rick Macklem wrote:

> Ryan Stone wrote:
>> On Tue, Apr 24, 2018 at 4:55 AM, Konstantin Belousov
>> <kostikbel_at_gmail.com> wrote:
>>> +#ifndef MLX5E_MAX_RX_BYTES
>>> +#define        MLX5E_MAX_RX_BYTES MCLBYTES
>>> +#endif
>>
>> Why do you use a 2KB buffer rather than a PAGE_SIZE'd buffer?
>> MJUMPAGESIZE should offer significantly better performance for jumbo
>> frames without increasing the risk of memory fragmentation.
> Actually, when I was playing with using jumbo mbuf clusters for NFS, I was
> able to get it to fragment to the point where allocations failed when
> mixing 2K and 4K mbuf clusters.
> Admittedly I was using a 256Mbyte i386 and it wasn't easily reproduced, but
> it was possible.
> --> Using a mix of 2K and 4K mbuf clusters can result in fragmentation,
>       although I suspect that it isn't nearly as serious as what can happen
>       when using 9K mbuf clusters.
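
For what it's worth, the #ifndef guard in the patch above should allow trying
Ryan's suggestion as a build-time override. A minimal sketch (untested, and
assuming MJUMPAGESIZE from <sys/param.h> is the PAGE_SIZE-backed cluster size
you want here):

#ifndef MLX5E_MAX_RX_BYTES
#define	MLX5E_MAX_RX_BYTES	MJUMPAGESIZE	/* 4K page cluster instead of MCLBYTES */
#endif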

I used to hit the fragmentation issue easily with the MTU set to 9000
(x86_64 / 64GB / Connect-X/3).
I then decreased the MTU until 9K mbufs were no longer used, which landed
at 4072 bytes.
The other interface of this 2-port card is set to a 1500 MTU.
Since then (about a year now), no more issues.
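
For reference, that just means setting the MTU on the mlx5 port, e.g.
(mce0 is only an example interface name; the idea is that 4072 plus the
Ethernet framing presumably still fits in a single 4K page cluster):

# ifconfig mce0 mtu 4072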

# vmstat -z | grep mbuf
ITEM             SIZE     LIMIT   USED   FREE          REQ FAIL SLEEP
mbuf_packet:      256, 26080155, 16400,  9652,   999757417,   0,    0
mbuf:             256, 26080155, 16419, 11349, 85322301444,   0,    0
mbuf_cluster:    2048,  4075022, 26052,   550,     1059817,   0,    0
mbuf_jumbo_page: 4096,  2037511, 16400,  9522, 45863599682,   0,    0
mbuf_jumbo_9k:   9216,   603707,     0,     0,           0,   0,    0
mbuf_jumbo_16k: 16384,   339585,     0,     0,           0,   0,    0

Here's my experience :)

Ben