Hi Wojciech,

Let me start by pointing out that ZFS is still an experimental feature. Secondly, this is the wrong list, because ZFS is a feature of FreeBSD-CURRENT. Adding freebsd-current to CC; maybe one of the ZFS developers can give their $0.02.

On 11/08/07, Wojciech Puchar <wojtek_at_wojtek.tensor.gdynia.pl> wrote:
> Just finished testing.
>
> After having all my data (test system, fortunately) on ZFS including root,
> I lost the /boot partition, which was on a pendrive to make testing easier.
>
> Well - no problem - I started my normal 6.2 system, got the bootonly CD,
> removed mfsroot, added (as on the ZFS system)
> vfs.root.mountfrom="zfs:tank/root", put it on the pendrive, bsdlabel -B, etc.

If I understand you correctly, you tried to use ZFS with FreeBSD 6.2. That won't work.

> On the other hand it's faster than UFS on small files, but not that much faster.
> It uses a HUGE amount of RAM.

ZFS probably isn't for the average user. No offense meant with this, but while ZFS does have a nice set of features, not many of them are needed for everyday work. At least if you're on a workstation.

> Its "set copies=n" is a joke.
> You have no guarantee where the copies are (often on the same disk).

Yes, because copies=n isn't there to protect against an entire disk failing. It protects data against block failures. So if an individual block on the disk goes bad, which is most often the case when a disk starts to die, you have some more copies with the correct checksum.

> zpool scrub DOES NOT move copies from one disk to another when another disk is
> made available!!

No, because it's not designed to do that. AFAIK ZFS does this automagically every time you attach a disk and add it to a zpool. Scrubbing is for cases where disk faults have been found: ZFS compares the data against its checksums, thus locating any trouble and fixing it.

> raidz can't be expanded.
>
> Cache flushing CAN NOT be disabled for a selected device, only for
> everything.
>
> My USB-IDE converter doesn't allow it, but my 2.5" one does!
> I use the USB-IDE converter with a disk as a backup. With ZFS that's impossible
> unless I turn off flushing for everything - losing an important
> advantage.
>
> A plain-disk pool (no mirror/raidz) won't start AT ALL with one element
> unattached!!! EVEN if everything has copies>1!!!

I don't know your setup, but for me it works fine. I'm currently at the Chaos Communication Camp; next to me is an AthlonXP box with FreeBSD-CURRENT installed: 3x400GB HDDs using ZFS, giving 732GB net capacity, and 2GB RAM. We had a few issues with the hardware. One of the disks is unstable, eventually leading to crashes. I started the system without that disk, changed the disks' positions, moved them from a PCI controller to the onboard controller, and so on. ZFS came up fine.

I'm really impressed with ZFS and its features. The system is pretty busy; we have at most 15 users, each allowed 1MBit/sec. The firewall reports a throughput of 90MBit/sec upstream, saturating the 100MBit/sec NIC.

With ZFS you're making use of all your HW: CPU, RAM, PCI bus etc. So if something's wrong with your HW, you'll notice. But that doesn't necessarily mean it's related to ZFS. For example, I encountered poor system performance with lots of interrupts. I tried to tweak ZFS a bit, which didn't make a difference. Then I took the "primary slave" disk off the PCI controller, attached it as onboard primary master - and everything worked fine.
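Coming back to the root-on-ZFS part: as far as I know the loader still can't read ZFS, so /boot has to stay on UFS (your pendrive), and /boot/loader.conf needs roughly the lines below. This is only a sketch: the tank/root name is just taken from your mail, and the kmem values are example numbers from the tuning threads, not something I've verified on your hardware.

# load the ZFS module at boot and mount root from the pool
zfs_load="YES"
vfs.root.mountfrom="zfs:tank/root"

# optional: the kind of kmem sizing people use to tame ZFS's memory appetite;
# pick values that fit your machine
vm.kmem_size="512M"
vm.kmem_size_max="512M"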
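And regarding copies=n and scrubbing, this is roughly how I use them. The pool and dataset names here are made up, and keep in mind that copies= only affects data written after you set the property.

# keep two copies of every block written to this dataset from now on
zfs set copies=2 tank/home
zfs get copies tank/home

# read back all data in the pool, verify the checksums and repair
# whatever can be repaired from the redundant copies
zpool scrub tank
zpool status -v tank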
Just as a side note: if your dmesg reports something like this:

atapci0: <ITE IT8212F UDMA133 controller> port 0xdc00-0xdc07,0xd800-0xd803,0xd400-0xd407,0xd000-0xd003,0xcc00-0xcc0f irq 10 at device 6.0 on pci0

...remove it. ;-)

Christian

Received on Sat Aug 11 2007 - 13:10:09 UTC