Claus Guttesen wrote:
>> I've just built an enormous 10TB filesystem. When
>> trying to newfs the disk, it bombed with something
>> like "cannot allocate memory" after something like
>> 23xxxxxxxxx sectors. I noticed disklabel complains
>> about disks with more than 2^32-1 sectors not being
>> supported.
>
> Aren't you supposed to use gpt(8) to define partitions
> larger than 2 TB?

No idea - this is the first I've heard of gpt, really.

>> Is newfs supposed to be able to work? I've used the
>> -s option to newfs to limit my filesystem size to
>> the max it would allow, which ends up being
>> 11350482546 1K blocks. That means I'm only losing a
>> couple of GB, which is no sweat for me right now, but
>> if someone wanted a 20TB filesystem, they'd be
>> hosed.
>
> Then the question is whether newfs reads
> gpt-partitioned disks? From newfs(8):
>
>     Before running newfs the disk must be labeled using
>     bsdlabel(8).
>
> How did you create such a huge partition? Your
> question is quite interesting; I'm looking at a
> storage solution which supports LUNs larger than
> 2.2 TB.

I used vinum to stripe six 2TB partitions connected to two fiber channel disk arrays. Vinum automatically does the bsdlabel part. I merely wanted to see what bsdlabel had to say about the vinum disk (if anything). Using newfs on it worked as long as I specified a smaller sector count.

Eric

--
------------------------------------------------------------------------
Eric Anderson        Sr. Systems Administrator        Centaur Technology
I have seen the future and it is just like the present, only longer.
------------------------------------------------------------------------

Received on Wed Feb 16 2005 - 21:00:38 UTC
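[For anyone hitting the same wall: the gpt(8) route Claus mentions would look roughly like the sketch below on FreeBSD 5.x. This is an illustration only - the device name da0 is a placeholder, the commands are destructive to existing partition data, and you newfs the resulting GPT partition (da0p1) rather than the raw disk:]

```shell
# Sketch: GPT-label a >2TB disk instead of using bsdlabel(8).
# "da0" is a hypothetical device name; run as root.
gpt create da0        # write a fresh GUID partition table to the disk
gpt add -t ufs da0    # add one UFS partition covering the free space
gpt show da0          # verify the resulting layout
newfs /dev/da0p1      # newfs the GPT partition, not the whole disk
```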
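[Since the question "how did you create such a huge partition?" came up: a vinum striped volume over multiple drives is described in a configuration file fed to `vinum create`. The fragment below is a sketch only - the drive names, device paths, and 512k stripe size are made-up examples, not the actual configuration used here:]

```
# Hypothetical vinum config: stripe a large volume over several drives.
drive fc0 device /dev/da0s1e
drive fc1 device /dev/da1s1e
volume bigvol
  plex org striped 512k
    sd length 0 drive fc0
    sd length 0 drive fc1
```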
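[For context: the 2^32-1 sector limit that disklabel complains about works out to roughly 2.2 TB at the conventional 512-byte sector size, which is why both bsdlabel-style partition tables and pre-GPT tooling top out around there. A quick check of the arithmetic (assumes 512-byte sectors):]

```shell
# Largest byte count addressable with a 32-bit sector number,
# assuming the conventional 512-byte sector size.
max_sectors=$(( 2**32 - 1 ))
max_bytes=$(( max_sectors * 512 ))
echo "$max_bytes"    # 2199023255040, i.e. just under 2.2 TB
```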
This archive was generated by hypermail 2.4.0 : Wed May 19 2021 - 11:38:28 UTC