CTF: geom gate network patch

From: John-Mark Gurney <jmg_at_funkthat.com>
Date: Mon, 3 Nov 2014 10:38:15 -0800
John-Mark Gurney wrote this message on Fri, Oct 17, 2014 at 09:58 -0700:
> Sourish Mazumder wrote this message on Fri, Oct 17, 2014 at 17:34 +0530:
> > I am planning to use geom gate networking for accessing remote disks. I set
> > up geom gate as per the FreeBSD Handbook. I am using FreeBSD 9.2.
> > I am noticing a heavy disk I/O performance impact when using geom gate. I
> > am using the dd command to write directly to the SSD to test performance.
> > IOPS drop to 1/3 when accessing the SSD remotely over a geom gate network
> > connection, compared to the IOPS achieved when writing to the SSD directly
> > on the system where the SSD is attached.
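For reference, a dd-style synchronous-write IOPS test like the one described above can be sketched in Python; the file path, block size, and count below are illustrative, not the original test parameters:

```python
import os
import time

def iops_test(path, block_size=4096, count=1000):
    """Issue `count` synchronous writes of `block_size` bytes and report IOPS."""
    buf = b"\0" * block_size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.monotonic()
    for _ in range(count):
        os.write(fd, buf)
        os.fsync(fd)  # force each write to stable storage before the next one
    elapsed = time.monotonic() - start
    os.close(fd)
    return count / elapsed

print("%.0f IOPS" % iops_test("/tmp/ggate_iops_test.bin"))
```

Running the same test against the raw SSD device node and against the ggate device built on top of it is what exposes the gap the poster describes.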
> > I thought there might be some problem with the network, so I decided to
> > create a geom gate disk on the same system where the SSD is attached. This
> > way the I/O does not go over the network. However, even in this case I
> > noticed IOPS drop to 2/3 of what writing to the SSD directly achieves.
> > 
> > So, I have an SSD and its geom gate network disk on the same node, and the
> > same dd-based IOPS test achieves only 2/3 of the IOPS on the geom gate disk
> > compared to running the test directly on the SSD.
> > 
> > This points to some performance issue with geom gate itself.
> 
> Not necessarily...  Yes, it's slower, but you now have to run a lot of
> network and TCP code in addition to the disk I/O, for each and every
> I/O...
> 
> > Is anyone aware of any such performance issues when using geom gate network
> > disks? If so, what is the reason for such IO performance drop and are there
> > any solutions or tuning parameters to rectify the performance drop?
> > 
> > Any information regarding this will be highly appreciated.
> 
> I did some work on this a while back... and if you're interested in
> improving performance and willing to do some testing, I can send you
> some patches..
> 
> There are a couple of issues that I know about..
> 
> First, ggate specifically sets the socket buffer sizes, which disables
> TCP's window autosizing.. This means that on a high-latency,
> high-bandwidth link, you'll be limited to 128k / RTT of bandwidth.
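To put numbers on that limit: with a fixed 128 KiB window, TCP throughput is bounded by window / RTT regardless of link capacity. A quick back-of-the-envelope calculation (the RTT values below are just examples):

```python
WINDOW = 128 * 1024  # fixed socket buffer size in bytes, as ggate sets it

def max_throughput(rtt_seconds):
    """Upper bound on TCP throughput with a fixed window: window / RTT (bytes/sec)."""
    return WINDOW / rtt_seconds

for rtt_ms in (1, 10, 50):
    bps = max_throughput(rtt_ms / 1000.0)
    print("RTT %3d ms -> at most %6.2f MB/s" % (rtt_ms, bps / 1e6))
```

So on a LAN with sub-millisecond RTT the cap is rarely noticed, but even 10 ms of latency limits a 128 KiB window to roughly 13 MB/s, no matter how fast the link or the disk is.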
> 
> Second, ggate isn't issuing multiple I/Os at a time.  This means that
> NCQ or command tagging can't be used, whereas when running natively
> it can be...
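This is not ggate's actual code, just a minimal sketch of the difference between one outstanding I/O and a deeper queue: a thread pool keeps several writes in flight at once, which is what gives the device's NCQ/tagged queueing something to reorder. The path and sizes are hypothetical:

```python
import os
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096

def write_block(fd, offset):
    # pwrite takes an explicit offset, so independent blocks can be
    # issued concurrently without racing on the shared file position
    os.pwrite(fd, b"\0" * BLOCK, offset)

def write_with_depth(path, nblocks=64, depth=8):
    """Keep up to `depth` writes outstanding instead of one at a time."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with ThreadPoolExecutor(max_workers=depth) as pool:
        list(pool.map(lambda off: write_block(fd, off),
                      range(0, nblocks * BLOCK, BLOCK)))
    os.fsync(fd)
    os.close(fd)

write_with_depth("/tmp/ggate_depth_test.bin")
```

With depth=1 this degenerates to the serialized behavior described above; raising the depth is what lets a queueing-capable device overlap and reorder the requests.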

I've attached a patch; I would like other ggate users to test it and
verify that it introduces no bugs or performance regressions.

The patch is also available at:
https://www.funkthat.com/~jmg/patches/ggate.patch

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."
Received on Mon Nov 03 2014 - 17:38:24 UTC