Re: zfs send/recv: STILL invalid Backup Stream

From: Allan Jude <allanjude@freebsd.org>
Date: Thu, 24 Jul 2014 16:07:40 -0400
On 2014-07-24 15:57, Larry Rosenman wrote:
> On 2014-07-24 14:53, Mark Martinec wrote:
>> 2014-07-24 21:31, Larry Rosenman wrote:
>>> borg.lerctr.org /home/ler # zxfer -dFkPvs -g 376 -O
>>> root@tbh.lerctr.org -R zroot  zroot/backups/TBH
>>> Creating recursive snapshot zroot@zxfer_26699_20140724135840.
>>> Checking grandfather status of all snapshots marked for deletion...
>>> Grandfather check passed.
>>> Sending zroot@zxfer_26699_20140724135840 to zroot/backups/TBH/zroot.
>>> Sending zroot/ROOT@zxfer_26699_20140724135840 to
>>> zroot/backups/TBH/zroot/ROOT.
>>> Sending zroot/ROOT/default@zxfer_23699_20140724134435 to
>>> zroot/backups/TBH/zroot/ROOT/default.
>>> Sending zroot/ROOT/default@zxfer_26699_20140724135840 to
>>> zroot/backups/TBH/zroot/ROOT/default.
>>>   (incremental to zroot/ROOT/default@zxfer_23699_20140724134435.)
>>> Sending zroot/home@zxfer_26699_20140724135840 to
>>> zroot/backups/TBH/zroot/home.
>>
>>> Write failed: Cannot allocate memory
>>   ====================================
>>
>>> cannot receive new filesystem stream: invalid backup stream
>>> Error when zfs send/receiving.
>>> borg.lerctr.org /home/ler #
>>>
>>> well that's different.......
>>
>> Sounds familiar, check my posting of today and links therein:
>>
>>   http://lists.freebsd.org/pipermail/freebsd-net/2014-July/039347.html
>>
>> Mark
> I'm not using netgraph to the best of my knowledge....
> and the only fails on the SENDING host are:
> ITEM                   SIZE,  LIMIT,    USED,    FREE,     REQ, FAIL, SLEEP
> 8 Bucket:                64,      0,      41,    3555,  257774,  11,   0
> 12 Bucket:               96,      0,      96,    2569,  123653,   0,   0
> 16 Bucket:              128,      0,   17195,     506,  215573,   0,   0
> 32 Bucket:              256,      0,     340,    4670,  900638,  50,   0
> 64 Bucket:              512,      0,   10691,     365,  546888,185232,   0
> 128 Bucket:            1024,      0,    3563,     905,  348419,   0,   0
> 256 Bucket:            2048,      0,    2872,     162,  249995,59834,   0
> vmem btag:               56,      0,  192811,   51500,  502264,1723,   0

I regularly use zxfer to transfer 500+ GiB datasets over the internet.
This week I actually replicated a 2.1 TiB dataset with zxfer without issue.

I wonder what exactly is running out of memory here. Is there a delay
before the 'Cannot allocate memory' error appears, or does it fail
immediately? Does running top in another terminal while the transfer is
failing reveal anything?
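For example, something like this from a second terminal while the send is
running (a rough sketch; the awk field number is a guess based on the
comma-separated vmstat -z output you pasted, where FAIL is the 6th field):

  top -o res                      # sort by resident size; watch zfs/ssh/zxfer grow
  vmstat -z | awk -F, '$6+0 > 0'  # show only UMA zones with a nonzero FAIL count

If one process balloons right before the error, that narrows it down a lot.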

I would expect the send to use a lot of memory if the stream were being
deduplicated, but not otherwise.
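(For context: a deduplicated stream is only produced if something explicitly
asks for one, roughly along the lines of

  zfs send -D -R zroot@some_snapshot | ssh root@tbh.lerctr.org \
      zfs receive -duF zroot/backups/TBH

where -D makes the sender keep a table of block checksums in memory. The
snapshot name above is just a placeholder, and as far as I know zxfer does
not request a dedup stream on its own.)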

Note: I most often use openssh-portable (built with the HPN patches) rather
than the base system ssh for replication, as I enable the None cipher to
reduce CPU usage and raise TcpRcvBuf enough to actually saturate a gigabit
link over the internet.
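Roughly, the client-side bits look like this (a sketch only; the option
names come from the HPN patch set in the openssh-portable port, and the
buffer size is just an example value):

  # ~/.ssh/config on the sending host
  Host tbh.lerctr.org
      NoneEnabled yes    # permit the unencrypted None cipher for bulk data
      NoneSwitch  yes    # actually switch to it once authentication is done
      TcpRcvBuf   4096   # TCP receive buffer in KB; raise it until the link fills

Authentication still happens encrypted; the None cipher only applies to the
bulk transfer, and only for non-interactive sessions.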

-- 
Allan Jude

