Re: vlans + bridging is "interesting"

From: John-Mark Gurney <jmg@funkthat.com>
Date: Fri, 24 Nov 2017 22:59:25 -0800
Rodney W. Grimes wrote this message on Fri, Nov 24, 2017 at 18:26 -0800:
> > I decided to try to run some bhyve VMs on my machine and bridge
> > them to a guest vlan on my main interface.  I also want to support
> > running bhyve VMs on the untagged part of the interface as well
> > (this is the key problem, as I'll describe later).
> > 
> > I configured it as you'd expect.  Bridge the main interface em0, and
> > put the local IPs on bridge0.  Then I added an interface em0.14
> > that untags packets from em0, and added it to bridge1 along w/ a tap0
> > for the VM.  This does not work.  The packet goes out and comes back
> > and is observed on em0, but never appears on either em0.14 or bridge1.
> > 
> > After seeing: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=139268
> > 
> > I decided to look on bridge0, and saw the tagged vlan packet on that
> > interface.  I attempted to add bridge0 as the vlandev for em0.14, but
> > that doesn't work:
> > #ifconfig em0.14 vlan 14 vlandev bridge0
> > ifconfig: SIOCSETVLAN: Protocol not supported
> > 
> > So, I did finally get things working by using epair.  I added an epair
> > to the bridge, and that allows me to untag the packet, and pass on to
> > bridge1.
> > 
> > I have not attempted to use the patch in 139268, but if people think
> > it is an acceptable solution (with the patch, if I set LINK0, it
> > should work w/ the original configuration), I'll test and commit it.
> > 
> > Otherwise, please submit another fix.
> 
> I am also experiencing difficulties with vlan + bridging + bhyve.  It
> seems the host, which can talk just fine out a trunked em0 interface
> using vlan32 and vlan34 to all my other hardware, can NOT talk to
> my bhyve guests.  Those bhyve guests can also talk out that
> same interface to other hardware, but they are being passed the
> trunked interface, i.e. a tap directly on the bridge of em0, and the
> vlan tagging/untagging is being done inside the guest.
> 
> All the guests can talk to each other, and they can all talk
> to real hardware reached via em0; the same is true for the
> host.  But the host can not talk to the guests, nor the guests
> to the host.

This is probably related.  I'm going to take a stab at your config,
so correct me if I'm wrong:
bridge0 w/ em0 & tapX
vlan32 on em0 vlan 32
vlan34 on em0 vlan 34
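
In ifconfig terms, that would be roughly (just a sketch; interface
names assumed, addresses omitted):
#ifconfig bridge0 create
#ifconfig bridge0 addm em0 addm tap0 up
#ifconfig vlan32 create vlan 32 vlandev em0
#ifconfig vlan34 create vlan 34 vlandev em0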

If this is the case, you're running into the same issue that I'm running
into...  The issue is that when a tagged packet comes in on em0, it is
forwarded directly to bridge0, bypassing the vlan interfaces...

If you do what I did above, which is to add an epair interface to the
bridge, and then add the vlans off the epair interface, it will work...
With the epair you are effectively simulating what your VM does, and
doing the encap/decap as the "VM" in the host...
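
A rough sketch of that for your vlan32 (assuming the epair comes up as
epair0a/epair0b; addresses omitted):
#ifconfig epair0 create
#ifconfig bridge0 addm epair0a up
#ifconfig epair0a up
#ifconfig epair0b up
#ifconfig vlan32 create vlan 32 vlandev epair0b
Then move the host's vlan32/vlan34 addresses onto the vlans created
off epair0b.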

The issue is that packets make it out em0 properly tagged, but the
replies never make it back to the vlan32 interface: em0 forwards them
directly to bridge0, bypassing the vlan32 interface, and bridge0 has
nowhere to deliver them, so they get dropped...

The reason the VMs work is that, since you're doing the decap in the
VM, the encapsulated packets make it to them successfully, and their
encapsulated replies make it safely back to the switch...

> My guess is that the ARPs are not being seen by the bridge
> because they are wrapped in vlan tags, thus the bridge
> never learns all the mac addresses, but this is just a
> guess.

I finally figured this out w/ tcpdump, as tcpdump was showing the
packets going out em0.14 (in my case), but the reply was never making
it back to em0.14.  I saw it on em0 w/ "tcpdump -i em0 vlan 14",
and then, when I ran "tcpdump -i bridge0 vlan 14", I saw the missing
packet, which is how I came up w/ the epair "solution"...
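
For anyone retracing the diagnosis, the sequence was roughly (my
vlan 14 case; -n just skips name resolution):
#tcpdump -ni em0.14           (request goes out; the reply never appears)
#tcpdump -ni em0 vlan 14      (the reply is arriving on em0, still tagged)
#tcpdump -ni bridge0 vlan 14  (the tagged reply shows up on bridge0 instead)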

-- 
  John-Mark Gurney				Voice: +1 415 225 5579

     "All that I will do, has been done, All that I have, has not."