On Wed, 1 Nov 2017 00:27:50 +0000, Rick Macklem wrote:
> Rodney W. Grimes wrote:
> [stuff snipped]
>> I wrote:
>>> Btw, NFS often causes this because...
>>> - Typically TSO is limited to a 64K packet (including TCP/IP and MAC headers).
>>> - When NFS does reading/writing, it will do 64K + NFS, TCP/IP and MAC headers
>>>   for an RPC (or a multiple of 64K, like 128K).
>>>   --> This results in tcp_output() generating a 64K TSO segment followed by a
>>>       small TCP segment (since another RPC message doesn't usually end up
>>>       queued quickly enough to fill in the rest of the second TCP segment).
>>> - Also, at the end of file, you can get an RPC which is just under 64K including
>>>   NFS and TCP/IP headers. (The drivers often broke when adding the MAC
>>>   header bumped this case to > 64K.)
>>>
>>> Thanks go to Yuri for diagnosing this, rick
>>
>> Just a thought, not asking anyone to write one :-)
>>
>> It would be handy to have some sh(1) scripts that can exercise this bug
>> case and have it readily available to network driver authors for testing
>> the TSO (or other large-segment) code.
>
> You can't easily reproduce this from userland. It depends on the way NFS fills in
> the mbuf chain for I/O RPCs. (iSCSI does something similar.)
>
> However, if your shell script does an NFS mount and then writes/reads a
> file just under 64K in size on the mount...

Yes, I should be able to test this; it's not a production machine in any case.

And just in case: it's not related to NFS after all, sorry for jumping to
guesses, Rick. scp behaves the same, giving a mere 10 kbps transfer rate,
and 10 MBps with that change backed out.
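[Editor's note: as a rough illustration of the test Rick describes above, here is
a minimal sh(1) sketch that mounts an NFS export and writes/reads files sized
around the 64K boundary. The server name, export path, mount point, and size
list are hypothetical placeholders; the exact size that lands an RPC just under
64K depends on NFS/TCP/IP header overhead, so the loop probes a small range
rather than claiming one known-bad value.]

#!/bin/sh
# Sketch of a TSO-boundary exerciser over NFS. SERVER, EXPORT, and MNT
# are placeholders -- adjust for your test environment.
SERVER=nfs-server.example.com
EXPORT=/export/test
MNT=/mnt/tsotest

mkdir -p "$MNT"
mount -t nfs -o rsize=65536,wsize=65536 "$SERVER:$EXPORT" "$MNT" || exit 1

# Probe sizes just under, at, and just over 64K (65536 bytes). The case
# described above is an RPC just under 64K before the MAC header is added.
for sz in 65000 65400 65536 66000 131072; do
    # Write a file of exactly $sz bytes through the mount.
    dd if=/dev/urandom of="$MNT/f.$sz" bs="$sz" count=1 2>/dev/null
    # Read it back. To force the read over the wire rather than from the
    # client cache, unmount and remount between the write and the read.
    dd if="$MNT/f.$sz" of=/dev/null bs="$sz" 2>/dev/null
    echo "size $sz done"
done

umount "$MNT"

[If the driver mishandles the >64K or just-under-64K TSO cases, the symptom
would be stalled or very slow transfers at particular sizes in the loop.]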