Pawel,

Quick question: is it typical for ZFS to run over 100 kthreads? I see a
lot of spa_* threads in ps output. Other bits are:

bland@nest:~$ zpool list
NAME         SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
tank        4,19G   295M  3,90G    6%  ONLINE  -

bland@nest:~$ zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
tank         295M  3,83G    18K  /tank
tank/ports   294M  3,83G   294M  /usr/ports
tank/tmp     535K  3,83G   535K  /tmp

Thanks,
Alexander.

Pawel Jakub Dawidek wrote:
> OK, ZFS is now in the tree. What now? Below you'll find some
> instructions on how to quickly get it up and running.
>
> First of all you need some disks. Let's assume you have three spare
> SCSI disks: da0, da1, da2.
>
> Add a line to your /etc/rc.conf to start ZFS automatically on boot:
>
> # echo 'zfs_enable="YES"' >> /etc/rc.conf
>
> Load the ZFS kernel module, the first time by hand:
>
> # kldload zfs.ko
>
> Now, set up one pool using RAIDZ:
>
> # zpool create tank raidz da0 da1 da2
>
> It should automatically mount /tank/ for you.
>
> OK, now let's put /usr/ on ZFS and propose a file system layout. I
> know you probably have some files there already, so we will work on
> the /tank/usr directory and, once we're ready, just change the
> mountpoint to /usr.
>
> # zfs create tank/usr
>
> Create the ports/ file system and enable gzip compression on it,
> because most likely we will have only text files there. On the other
> hand, we don't want to compress ports/distfiles/, because the stuff
> we keep in there is already compressed:
>
> # zfs create tank/usr/ports
> # zfs set compression=gzip tank/usr/ports
> # zfs create tank/usr/ports/distfiles
> # zfs set compression=off tank/usr/ports/distfiles
>
> (You do see how your life is changing, don't you?:))
>
> Let's create the home file system, and my own home/pjd/ file system.
> I know we use RAIDZ, but I want to have a directory where I put
> extremely important stuff, so I'll define that each block has to be
> stored in three copies:
>
> # zfs create tank/usr/home
> # zfs create tank/usr/home/pjd
> # zfs create tank/usr/home/pjd/important
> # zfs set copies=3 tank/usr/home/pjd/important
>
> I'd like to have a directory with music, etc. that I share over NFS.
> I don't really care about this stuff and my computer is not very
> fast, so I'll just turn off checksumming (this is only for example
> purposes! please benchmark before doing it, because it's most likely
> not worth it!):
>
> # zfs create tank/music
> # zfs set checksum=off tank/music
> # zfs set sharenfs=on tank/music
>
> Oh, I almost forgot. Who cares about access time updates?
>
> # zfs set atime=off tank
>
> Yes, we set it only on tank and it will be automatically inherited by
> the others.
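>
> A quick way to confirm the inheritance is zfs get -r, which also
> shows where each property value comes from. A sketch of what to
> expect (output abbreviated; your list of file systems will differ):
>
> # zfs get -r atime tank
> NAME        PROPERTY  VALUE  SOURCE
> tank        atime     off    local
> tank/music  atime     off    inherited from tank
> tank/usr    atime     off    inherited from tank
> [...]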
>
> It will also be good to be informed whether everything is fine with
> our pool:
>
> # echo 'daily_status_zfs_enable="YES"' >> /etc/periodic.conf
>
> Maybe for some reason you still need a UFS file system, for example
> because you use ACLs or extended attributes, which are not yet
> supported by our ZFS. If so, why not just use ZFS to provide the
> storage? This way we gain cheap UFS snapshots, UFS clones, etc. by
> simply using ZVOLs:
>
> # zfs create -V 10g tank/ufs
> # newfs /dev/zvol/tank/ufs
> # mount /dev/zvol/tank/ufs /ufs
>
> # zfs snapshot tank/ufs@20070406
> # mount -r /dev/zvol/tank/ufs@20070406 /ufs20070406
>
> # zfs clone tank/ufs@20070406 tank/ufsok
> # fsck_ffs -p /dev/zvol/tank/ufsok
> # mount /dev/zvol/tank/ufsok /ufsok
>
> Want to encrypt your swap and still use ZFS? Nothing could be
> simpler:
>
> # zfs create -V 4g tank/swap
> # geli onetime -s 4096 /dev/zvol/tank/swap
> # swapon /dev/zvol/tank/swap.eli
>
> Trying to do something risky with your home? Snapshot it first!
>
> # zfs snapshot tank/usr/home/pjd@justincase
>
> Turned out it was more stupid than risky? Roll back your snapshot!
>
> # zfs rollback tank/usr/home/pjd@justincase
> # zfs destroy tank/usr/home/pjd@justincase
>
> OK, everything works, so we may set tank/usr as our real /usr:
>
> # zfs set mountpoint=/usr tank/usr
>
> Don't forget to read the zfs(8) and zpool(8) manual pages and Sun's
> ZFS administration guide:
>
> http://www.opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
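>
> P.S. To have the encrypted swap from above recreated automatically at
> boot, an /etc/fstab entry with the .eli suffix should be enough; the
> encswap rc script attaches such onetime providers by itself. Check
> that the zvol is available by the time swap is configured on your
> system, and see geli_swap_flags in rc.conf if you want to tune the
> onetime flags (e.g. the sector size used above):
>
> # echo '/dev/zvol/tank/swap.eli none swap sw 0 0' >> /etc/fstab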