Allan Jude wrote this message on Sat, Nov 14, 2015 at 11:53 -0500:
> On 2015-11-14 02:47, John-Mark Gurney wrote:
> > Allan Jude wrote this message on Thu, Nov 12, 2015 at 17:57 -0500:
> >> On 2015-11-12 12:56, John-Mark Gurney wrote:
> >>> Allan Jude wrote this message on Thu, Nov 12, 2015 at 12:15 -0500:
> >>>> On 2015-11-11 19:06, Slawa Olhovchenkov wrote:
> >>>>> On Wed, Nov 11, 2015 at 01:32:27PM -0800, Bryan Drewery wrote:
> >>>>>
> >>>>>> On 11/10/2015 1:42 AM, Dag-Erling Smørgrav wrote:
> >>>>>>> I would also like to remove the NONE cipher patch, which is
> >>>>>>> also available in the port (off by default, just like in base).
> >>>>>>
> >>>>>> Fun fact: it has been broken in the port for several months with
> >>>>>> no complaints. It was just reported and fixed upstream in the
> >>>>>> last day, and I put a similar fix into the port. That says a lot
> >>>>>> about its current usage in the port.
> >>>>>
> >>>>> I tried using HPN/NONE with base ssh and was confused: I don't
> >>>>> see a performance rise, it is too complex to enable, and too
> >>>>> complex to use.
> >>>>
> >>>> I did a few quick (and dirty) benchmarks, and they show that the
> >>>> NONE cipher definitely makes a difference. The version of OpenSSL
> >>>> also seems to make a difference, as one might expect.
> >>>>
> >>>> Note: openssh from ports seems to link against both the base and
> >>>> ports libcrypto; I am still trying to make sure this isn't
> >>>> corrupting my benchmark results.
> >>>
> >>> You don't need the aesni.ko module loaded for OpenSSL (which is how
> >>> OpenSSH uses most crypto algorithms) to use AES-NI.
> >>>
> >>> Also, do you set any sysctls to play with the buffer sizes or
> >>> anything?
> >>>
> >>>> I am still debugging my dummynet setup to be able to prove that
> >>>> HPN makes a difference (but it does).
> >>>
> >>> Does my example on the page not work for you?
> >>>
> >>>> https://wiki.freebsd.org/SSHPerf
> >>
> >> I found that when I set even 5ms of delay with dummynet, bandwidth
> >> over the LAN drops more than it should. Dummynet is limiting the
> >> rate rather than just adding the delay. I am investigating.
> >>
> >> I found this document:
> >> http://www.cs.unc.edu/~jeffay/dirt/FAQ/hstcp-howto.pdf
> >>
> >> which is from the 6.x era, but suggests:
> >>
> >> "One subtle bug exists in the stock Dummynet implementation that
> >> should be corrected for experiments. When a packet arrives in
> >> dummynet it is shoved into a queue which limits the bandwidth a TCP
> >> flow may use. Upon exit from the queue, the packet is transferred
> >> to a pipe where it sits for any configured amount of delay time and
> >> might possibly be dropped depending on the loss probability. Once
> >> the delay time has passed, the packet is released to ip output."
> >>
> >> This may be the cause of my problem.
> >
> > Ahhh, you probably need to adjust:
> > net.inet.ip.dummynet.pipe_byte_limit: 1048576
> > net.inet.ip.dummynet.pipe_slot_limit: 100
> >
> > But even with the above limits and 5ms, you should still be able to
> > push 200MB/sec...

> I worked with Hiren and some of his dtrace magic and figured out that
> dummynet was not my issue. I didn't end up needing to change the
> dummynet pipe slot/byte limits in order to get the full 10gig/sec,
> even with 100ms of delay from dummynet.
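(As an aside, for anyone reproducing this kind of test: a minimal
dummynet setup for delay-only emulation can be sketched as below. The
test address, rule numbers, and limit values are illustrative, not
taken from this thread.)

    # load dummynet (pulls in ipfw) and create a pipe that only adds delay
    kldload dummynet
    ipfw pipe 1 config delay 50ms

    # raise the per-pipe queue limits discussed above so the pipe's own
    # queue does not throttle a high bandwidth*delay flow, then give the
    # pipe a deeper queue (1000 slots)
    sysctl net.inet.ip.dummynet.pipe_slot_limit=1000
    sysctl net.inet.ip.dummynet.pipe_byte_limit=16777216
    ipfw pipe 1 config delay 50ms queue 1000

    # push traffic to and from the test host through the pipe
    ipfw add 100 pipe 1 ip from any to 192.0.2.10
    ipfw add 110 pipe 1 ip from 192.0.2.10 to any

    # ipfw.ko's default rule is deny, so allow everything else
    ipfw add 65000 allow ip from any to any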
> You just need to adjust:
>
> net.inet.tcp.sendbuf_max=BDP
> net.inet.tcp.recvbuf_max=BDP
>
> kern.ipc.maxsockbuf = ( BDP * (2048+256) ) / 2048
>
> For a 50ms RTT:
>
> BDP = 10gbps * .05 = ~60MB

I forgot to include the _max adjustments on the page (the maxsockbuf
was there), but in all of my tests, I can't get close to that. In my
case, I can demonstrate 20MB/sec+ over the link, and with a 100ms RTT
that would be a 2MB buffer size. Yet even when I increase these to 8MB,
and increase kern.ipc.maxsockbuf to 8MB too (otherwise _max is
meaningless), I still get 1.5MB/sec, not even close... I do notice with
nc that it will slowly increase, then suddenly back off, just to slowly
increase again... I wonder if there is some issue with TSO or tap that
is causing problems... (The sizing above is worked through with
concrete numbers at the end of this message.)

> It can also greatly help to increase:
> net.inet.tcp.abc_l_var
>
> which is how many additional segments the CWND may be incremented by
> per ACK during slow-start.
>
> I am still working on my set of benchmarks to show what difference
> HPN makes at different RTT values, as well as what might be required
> to achieve maximum throughput for SSH over both the LAN and the
> internet.
>
> (For my company, we regularly transmit 500GB ZFS datasets over the
> public internet on 1gbps or 2x1gbps connections.)

-- 
John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
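(Worked example of the sizing Allan describes above: for a 10gbps path
with a 50ms RTT, and assuming the usual mbuf cluster and header sizes
of 2048 and 256 bytes, the numbers come out as below; the exact values
are illustrative.)

    # BDP = 10 Gbit/s * 0.05 s = 500 Mbit ~= 62.5 MB
    sysctl net.inet.tcp.sendbuf_max=62500000
    sysctl net.inet.tcp.recvbuf_max=62500000

    # maxsockbuf = (BDP * (2048+256)) / 2048, i.e. the BDP plus the
    # per-cluster mbuf header overhead: 62500000 * 2304 / 2048 = 70312500
    sysctl kern.ipc.maxsockbuf=70312500

    # optionally let slow-start grow the congestion window by more than
    # the default 2 segments per ACK (44 here is only an example value)
    sysctl net.inet.tcp.abc_l_var=44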