Re: Logical volume management

From: Peter Jeremy <PeterJeremy_at_optushome.com.au>
Date: Sat, 19 Nov 2005 16:02:16 +1100
On Fri, 2005-Nov-18 12:59:22 +0000, Brian Candler wrote:
>On Fri, Nov 18, 2005 at 06:39:09AM -0600, Eric Anderson wrote:
>> - volume migration (online)
>
>Perhaps there are two primitive operations:
>
>1. move an individual chunk from device A to device B
>   (LVM calls these chunks "extents" BTW, which is probably a better name;
>   it has a long history going back to mainframe storage systems)
>
>2. move an entire volume

Keep in mind that these primitives need to be atomic as seen by the LVM
consumer (FS):  If the system crashes partway through, the volume still
needs to be in a consistent state.

>I think that requires interaction with the underlying filesystem if it is to
>happen automatically. I'd be quite happy with a two-step process: increase
>the size of a volume manually, then run growfs to make the filesystem fit
>the new space.

Note that growfs won't work on a mounted filesystem.  The result is also
not as well laid out as a newfs'd filesystem of the new size.

>> It would be nice to be able to create an arbitrarily large volume, which 
>>  only uses these volume blocks (you call them chunks) as the volume 
>> gets filled.  This way, you could create a 2Tb volume, with only a 
>> single 200Gb drive, then as you neared the 200Gb used mark, you could 
>> add another disk, and grow on to it

I think DEC/Compaq/HP AdvFS is a much nicer approach: You have
"domains" which are effectively RAID-0 logical volumes (you can add or
remove the underlying disks at will).  Within each domain, you create
a number of filesets which can have optional soft and/or hard quota
limits.  In theory, you can add or remove disks and migrate data
online.  (In practice, I've found that this process isn't as robust
as one would like.)

Given the limitations on growfs and the lack of a shrinkfs, Eric's
approach is probably a fairly easy way to make UFS more flexible.

>Maybe. However I don't see the advantage of this compared to creating
>a 200GB volume, and then as it nears getting full expand it to 400GB,
>and so on.

A filesystem newfs'd to 400GB will be more efficient than a filesystem
that was newfs'd to 200GB and growfs'd to 400GB.

>- df will lie. You may think your filesystem is only 10% full, when in fact
>it is about to fail due to lack of space.

This is fairly critical.  [f]statfs(2) would need to be expanded to report
both physical and logical allocations.

>- partitioning has an important administrative use, to enforce limits.

This can be handled easily enough with filesystem quotas.  You could
newfs /var to 2TB and apply a quota limiting it to 200MB.  If you get
more space later you can raise the quota.

The other disadvantages I see are:
- All the cylinder group headers are pre-allocated.  AFAIK, UFS2 avoids
  pre-allocating the inodes but still reserves the space for them.  A
  large virtual filesystem will have significant overheads even if it
  only has a small amount of real data.
- UFS assumes that all the space is physically available and allocates
  blocks so as to (hopefully) maximize performance.  This is likely to
  lead to significant fragmentation with lots of partially utilised
  data chunks.

-- 
Peter Jeremy
Received on Sat Nov 19 2005 - 04:03:25 UTC
