Hi. I'd like to present the network VirtIO driver I've been working on for
the last few months in my spare time. It is not based on the NetBSD code
that is floating around. The attached patch should apply to both recent
-current and -stable. Early development was done on VirtualBox, but most
has been done on KVM/QEMU.

While still a work in progress, the network driver is mostly feature
complete with checksum offloading, TSO, etc., with a couple of caveats:

- The VirtIO checksum offload interface doesn't map well to FreeBSD's
  offloading interface. The network driver is forced to peek inside the
  incoming/outgoing frames to set the appropriate flags. This could be
  made more robust with regard to IPv6. (A rough sketch of the Rx-side
  idea is at the end of this message.)

- Per-queue Rx/Tx locking needs to be implemented before LRO can happen.

- Tx completion interrupts could be almost entirely eliminated when the
  host supports a certain virtqueue feature. I have a patch that does
  just that, but haven't had a chance to really test it. (A ring-level
  sketch of the general mechanism is at the end of this message.)

I haven't done any real performance testing yet, but here are some
informal iperf results between the emulated e1000 device and the VirtIO
device on FreeBSD. The host is Fedora 14 running
2.6.35.10-72.fc14.x86_64 / qemu-kvm-0.13.0-1.fc14.x86_64. The Linux guest
is Debian testing (2.6.32-5-amd64) with a VirtIO network device. The
FreeBSD guest is amd64 8-STABLE with both e1000 and VirtIO network
interfaces. Both guests were running on the same host, and the MTU of all
interfaces was 1500. Measurements are in Mbits/sec.

FreeBSD --> Linux
x e1000
+ vtnet
    N           Min           Max        Median           Avg        Stddev
x   6           340           358           348     347.66667     6.2822501
+   6          1529          1540          1538     1535.3333     4.3665394
Difference at 95.0% confidence
        1187.67 +/- 6.95891
        341.611% +/- 2.0016%
        (Student's t, pooled s = 5.40987)

Linux --> FreeBSD
x e1000
+ vtnet
    N           Min           Max        Median           Avg        Stddev
x  11           437           456           449     447.36364     6.4385204
+  11           669           679           671     673.27273     3.5802488
Difference at 95.0% confidence
        225.909 +/- 4.6335
        50.4979% +/- 1.03573%
        (Student's t, pooled s = 5.20926)

I imagine the lack of LRO makes the FreeBSD receiving results less
impressive. Performance is not yet on par with Linux <--> Linux.

To use, after applying the patch and compiling, you'll need to load the
virtio, virtio_pci, and if_vtnet kernel modules (example commands at the
end of this message). The virtio module contains the virtqueue transport
and a bit of glue code. The virtio_pci module implements the VirtIO PCI
interface for device probing and communication with the host. The
if_vtnet module contains the network driver.

I have a partially complete VirtIO block driver. Hopefully it will be
ready for wider testing in a couple of weeks.

I'm going to be away from my computer for the next couple of days; I'll
get to any email after that.
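For the curious, here is the rough sketch mentioned in the checksum
caveat above. VirtIO describes a partial checksum with a (csum_start,
csum_offset) pair, while FreeBSD's stack wants per-protocol mbuf flags,
so the driver ends up looking at the frame itself. This is only an
illustration of the Rx-side idea for IPv4 TCP/UDP, not the actual driver
code; the function name is made up, and it assumes the Ethernet and IP
headers are contiguous in the first mbuf:

    /*
     * Illustrative only: map VirtIO's "checksum already taken care of"
     * Rx hint onto FreeBSD's mbuf checksum flags. The real driver also
     * handles the Tx direction and more protocol cases.
     */
    #include <sys/param.h>
    #include <sys/mbuf.h>
    #include <net/ethernet.h>
    #include <netinet/in.h>
    #include <netinet/ip.h>

    static void
    vtnet_rx_csum_sketch(struct mbuf *m)    /* hypothetical helper */
    {
            struct ether_header *eh;
            struct ip *ip;

            eh = mtod(m, struct ether_header *);
            if (ntohs(eh->ether_type) != ETHERTYPE_IP)
                    return;     /* IPv6 et al. need more work; see caveat */

            ip = (struct ip *)((caddr_t)eh + sizeof(struct ether_header));
            switch (ip->ip_p) {
            case IPPROTO_TCP:
            case IPPROTO_UDP:
                    /*
                     * Tell the stack the TCP/UDP checksum is good so it
                     * doesn't recompute it.
                     */
                    m->m_pkthdr.csum_flags |=
                        CSUM_DATA_VALID | CSUM_PSEUDO_HDR;
                    m->m_pkthdr.csum_data = 0xffff;
                    break;
            }
    }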
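On the Tx interrupt caveat: the general shape of ring-level interrupt
suppression looks like the sketch below. With the event-index style of
suppression negotiated, the guest publishes the used-ring index at which
it next wants an interrupt, so one interrupt can cover a whole batch of
Tx completions. This follows the ring layout from the VirtIO spec rather
than any in-kernel API, and the helper names are illustrative:

    #include <stdint.h>

    #define VRING_AVAIL_F_NO_INTERRUPT  1   /* legacy "don't bother" hint */

    /* Classic VirtIO available-ring layout, per the spec. */
    struct vring_avail {
            uint16_t flags;
            uint16_t idx;
            uint16_t ring[];    /* num entries; with the event-index
                                   feature, a uint16_t "used_event"
                                   follows the ring */
    };

    /* used_event sits just past ring[num] when the feature is set. */
    static inline uint16_t *
    vring_used_event(struct vring_avail *avail, unsigned int num)
    {
            return (&avail->ring[num]);
    }

    /*
     * Sketch: ask the host not to raise a Tx completion interrupt until
     * it has consumed ndesc more buffers past what we've already seen
     * (last_used_idx), instead of one interrupt per completion.
     */
    static inline void
    postpone_tx_intr(struct vring_avail *avail, unsigned int num,
        uint16_t last_used_idx, uint16_t ndesc)
    {
            *vring_used_event(avail, num) = last_used_idx + ndesc - 1;
    }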
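And the module loading commands, for completeness. At runtime:

    # kldload virtio
    # kldload virtio_pci
    # kldload if_vtnet

or at boot, via /boot/loader.conf:

    virtio_load="YES"
    virtio_pci_load="YES"
    if_vtnet_load="YES"

(Depending on how the module dependencies end up, kldload if_vtnet may
pull in the other two automatically.)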