On 01/20/12 13:08, Nikolay Denev wrote:
> On 20.01.2012, at 12:51, Alexander Motin <mav_at_freebsd.org> wrote:
>
>> On 01/20/12 10:09, Nikolay Denev wrote:
>>> Another thing I've observed is that active/active probably only
>>> makes sense if you are accessing a single LUN. In my tests, where I
>>> have 24 LUNs that form 4 vdevs in a single zpool, the highest
>>> performance was achieved when I split the active paths among the
>>> controllers installed in the server importing the pool (basically
>>> "gmultipath rotate $LUN" in rc.local for half of the paths). Using
>>> active/active in this situation resulted in fluctuating performance.
>>
>> How big was the fluctuation? Between the speed of one path and of
>> all paths?
>>
>> Several active/active devices without knowledge of each other will,
>> with some probability, send part of their requests via the same
>> links, while ZFS itself already does some balancing between vdevs.
>>
>> --
>> Alexander Motin
>
> I will test in a bit and post results.
>
> P.S.: Is there a way to enable/disable active-active on the fly? I'm
> currently re-labeling to achieve that.

No, there is not right now. But for experiments you may achieve the same
result by manually marking all paths except one as failed. It is not
dangerous: if the remaining link fails, all the others will resurrect
automatically.

--
Alexander Motin
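For concreteness, here is a minimal sketch of both tricks discussed
above, using only documented gmultipath(8) verbs (fail, restore,
rotate, status). The device name "mp0", the path names da0/da12, and
the LUN list are illustrative assumptions, not taken from the thread:

    # Emulate active/passive on the fly: mark every path except one as
    # failed; I/O then flows over the single remaining path (da0 here).
    # Device and path names below are hypothetical examples.
    gmultipath fail mp0 da12

    # This is safe: if da0 itself later fails, the manually failed
    # path is resurrected automatically and takes over.

    # Undo the experiment and return to full active/active operation:
    gmultipath restore mp0 da12

    # Inspect the current state of each path:
    gmultipath status mp0

    # Nikolay's balancing variant: in /etc/rc.local, rotate the active
    # path for half of the LUNs so that the active paths end up split
    # between the controllers (LUN names are hypothetical):
    for lun in mp0 mp1 mp2 mp3; do
            gmultipath rotate "$lun"
    done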