Re: newfs limits? 10TB filesystem max?

From: Eric Anderson <anderson_at_centtech.com>
Date: Fri, 18 Feb 2005 09:09:57 -0600
Don Lewis wrote:
[..snip..]
>>>23436833440, 23437209760, 23437586080, 23437962400, 23438338720,newfs: 
>>>wtfs: 65536 bytes at sector 23438715040: Cannot allocate memory
>>>
>>>But:
>>>newfs -U -s 23438338720 /dev/vinum/plex/raid.p0 
>>>works.. So I'm losing the last part of my partition..
>>
>>I'm guessing you are hitting the process datasize limit with newfs.  You
>>should be able to raise it a bit from the default.  Be warned that fsck
>>has much higher memory requirements, so recovery may be difficult if not
>>impossible without a 64-bit machine.
> 
> 
> I don't know of any reason that newfs would need a lot of memory.  I
> would think that its memory usage would be independent of file system
> size.
> 
> I just looked at the code, and the error message seems to be triggered
> by bwrite() in libufs failing.  There is a potential pair of
> malloc()/free() calls in bwrite(), but I think the more likely problem
> is that pwrite() is failing.
> 
> I seem to recall seeing a recent kernel commit that changed an ENOMEM
> error return to something else like EFBIG or ENOSPC.


Anything I can do to help debug this?

Eric


-- 
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
I have seen the future and it is just like the present, only longer.
------------------------------------------------------------------------
Received on Fri Feb 18 2005 - 14:10:12 UTC

This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:28 UTC