On 31/03/2008, Scott Long <scottl_at_samsco.org> wrote:
> Ivan Voras wrote:
> > Most new hardware RAID controllers offer stripe sizes of 128K, 256K,
> > and some also have 512K and 1M stripes. In the simplest case of RAID0
> > across two drives, knowing that the data is striped across the drives
> > and that FreeBSD issues IO requests of at most 64K, is it useful to
> > set stripe sizes to anything larger than 32K? I suppose something
> > like TCQ would help the situation, but does anyone know how this
> > situation is usually handled on the RAID controllers?
>
> Large I/O sizes and large stripe sizes only benefit benchmarks and a
> narrow class of real-world applications.

Like file servers on gigabit networks serving large files? :)

> Large stripes have the potential to actually hurt RAID-5 performance
> since they make it much harder for the card to do a full stripe
> replacement instead of a read-modify-xor-write.

This is logical - with a 256 kB stripe and 64 kB requests, the card will
almost never get a full stripe's worth of new data in a single request,
so unless its write cache can coalesce enough of them it has to fall
back to read-modify-xor-write.

> I hate to be all preachy and linux-like and tell you what you need or
> don't need, but in all honesty, large I/O's and stripes usually don't
> help typical filesystem-based mail/squid/mysql/apache server apps. I
> do have proof-of-concept patches to allow larger I/O's for selected
> controllers on 64-bit FreeBSD platforms, and I intend to clean up and
> commit those patches in the next few weeks (no, I'm not ready for nor
> looking for testers at this time, sorry).

I'm not (currently) nagging for large IO request patches :) I just want
to understand what happens currently when the stripe size is 256 kB
(which is the default at least on the IBM ServeRAID 8k, and I think
recent CISS controllers default to 128 kB) and the OS chops I/O into
64 kB blocks. I have compared Linux and FreeBSD performance and I can't
draw a conclusion from that - on FreeBSD it's not as if all requests
(e.g. 4 consecutive 64 kB requests) go to a single drive at a time, but
it's also not as if they always get split.
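
To make my mental model of the RAID0 case concrete, here's a minimal
sketch (my own simplification, not how any particular firmware actually
works: it assumes plain round-robin chunk placement and ignores the
controller's cache and command queueing):

/*
 * Sketch: which drive a request offset lands on in a simple RAID0
 * layout with round-robin chunk placement.  Real controllers may
 * lay data out differently.
 */
#include <stdio.h>

static unsigned int
raid0_drive(unsigned long long offset, unsigned int stripe_size,
    unsigned int ndrives)
{
	return (unsigned int)((offset / stripe_size) % ndrives);
}

int
main(void)
{
	unsigned int stripe = 256 * 1024;	/* 256 kB stripe */
	unsigned int ndrives = 2;		/* two-drive RAID0 */
	unsigned int io = 64 * 1024;		/* 64 kB OS requests */
	unsigned long long off;

	/*
	 * Four consecutive 64 kB requests starting at offset 0: with a
	 * 256 kB stripe they all map to drive 0, so a purely sequential
	 * reader keeps only one spindle busy unless enough requests are
	 * queued ahead to reach the next stripe boundary.
	 */
	for (off = 0; off < 4ULL * io; off += io)
		printf("offset %8llu -> drive %u\n", off,
		    raid0_drive(off, stripe, ndrives));
	return (0);
}

If that simplification holds, sequential 64 kB requests stay on one
drive until the 256 kB boundary is crossed, which is the behaviour I'm
trying to confirm or rule out on the real controllers.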