Re: Any success stories for HAST + ZFS?

From: Freddie Cash <fjwcash@gmail.com>
Date: Fri, 1 Apr 2011 07:18:01 -0700
On Fri, Apr 1, 2011 at 4:22 AM, Pete French <petefrench@ingresso.co.uk> wrote:
>> The other 5% of the time, the hastd crashes occurred either when
>> importing the ZFS pool, or when running multiple parallel rsyncs to
>> the pool.  hastd was always shown as the last running process in the
>> backtrace onscreen.
>
> This is what I am seeing - did you manage to reproduce this with the patch,
> or does it fix the issue for you? I'm doing more tests now, with only a single
> hast device, to see if it is stable. I'm OK running without mirroring across
> hast devices for now, but wouldn't like to do so long term!

I have not been able to crash or hang the box since applying Mikolaj's patch.

I've tried the following (a rough command sketch follows the list):
  - destroy pool
  - create pool
  - destroy hast providers
  - create hast providers
  - switch from master to slave via hastctl using "role secondary all"
  - switch from slave to master via hastctl using "role primary all"
  - switch roles via hast-carp-switch which does one provider per second
  - import/export pool
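
For reference, those steps boil down to roughly the commands below; the
"disk0"/"disk1" provider names and the "tank" pool name are just
placeholders, not my actual hast.conf setup:

  # initialise HAST provider metadata on the local disks
  hastctl create disk0
  hastctl create disk1

  # flip roles on this node (the other node does the opposite)
  hastctl role primary all
  hastctl role secondary all

  # the pool lives on the /dev/hast/* devices, only on the primary
  zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1
  zpool export tank
  zpool import tank
  zpool destroy tank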

I've been running 6 parallel rsyncs for the past 48 hours, sustaining a
consistent 200 Mbps of transfer throughput with just under 2 TB of deduped
data in the pool, and have not hit any lockups.
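
The load itself is nothing fancy, just six rsyncs kicked off in parallel,
along these lines (the source hosts and destination paths here are made up):

  for n in 1 2 3 4 5 6; do
      rsync -aH --numeric-ids server${n}:/data/ /tank/backups/server${n}/ &
  done
  wait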

So far, so good.
-- 
Freddie Cash
fjwcash@gmail.com