Proper network configuration for a KVM guest to be on the same networks as the host

Steve Madsen asked:

I am running a Debian Lenny server. Within it, I am running another Lenny instance using KVM. Both servers are externally reachable with public IPs, and each also has a second interface with private IPs for the LAN. Everything works fine, except that the VM sees all network traffic as originating from the host. I suspect this has something to do with the iptables-based firewall I’m running on the host.

What I’d like to figure out is: how do I properly configure the host’s networking so that all of these requirements are met?

  1. Both host and VMs have 2 network interfaces (public and private).
  2. Both host and VMs can be independently firewalled.
  3. Ideally, VM traffic does not have to traverse the host firewall.
  4. VMs see real remote IP addresses, not the host’s.

Currently, the host’s network interfaces are configured as bridges. eth0 and eth1 do not have IP addresses assigned to them, but br0 and br1 do.

/etc/network/interfaces on the host:

# The primary network interface
auto br1
iface br1 inet static
    address 24.123.138.34
    netmask 255.255.255.248
    network 24.123.138.32
    broadcast 24.123.138.39
    gateway 24.123.138.33
    bridge_ports eth1
    bridge_stp off

auto br1:0
iface br1:0 inet static
    address 24.123.138.36
    netmask 255.255.255.248
    network 24.123.138.32
    broadcast 24.123.138.39

# Internal network
auto br0
iface br0 inet static
    address 192.168.1.1
    netmask 255.255.255.0
    network 192.168.1.0
    broadcast 192.168.1.255
    bridge_ports eth0
    bridge_stp off
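
With this configuration, each physical NIC should appear as the sole port of its bridge, and the IP addresses should live on the bridges rather than on eth0/eth1. A quick way to verify (a sketch assuming the bridge-utils package is installed; the bridge IDs shown are placeholders):

```shell
# Confirm each physical NIC is enslaved to the right bridge.
brctl show
# bridge name  bridge id           STP enabled  interfaces
# br0          8000.xxxxxxxxxxxx   no           eth0
# br1          8000.xxxxxxxxxxxx   no           eth1

# Confirm the addresses sit on the bridge, not the underlying NIC.
ip addr show br1
ip addr show eth1   # should carry no IP address of its own
```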

This is the libvirt/qemu configuration file for the VM:

<domain type='kvm'>
  <name>apps</name>
  <uuid>636b6620-0949-bc88-3197-37153b88772e</uuid>
  <memory>393216</memory>
  <currentMemory>393216</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='i686' machine='pc'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='cdrom'>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <source file='/raid/kvm-images/apps.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <mac address='54:52:00:27:5e:02'/>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
    <interface type='bridge'>
      <mac address='54:52:00:40:cc:7f'/>
      <source bridge='br1'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' keymap='en-us'/>
  </devices>
</domain>

Along with the rest of my firewall rules, the firewalling script includes this command to pass packets destined for a KVM guest:

# Allow bridged packets to pass (for KVM guests).
iptables -A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
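
(For reference: an alternative to accepting bridged packets in FORWARD is to keep them out of iptables entirely, which also satisfies requirement 3 above. A sketch, assuming the kernel exposes the bridge-netfilter sysctls — module naming varies between kernel versions:)

```shell
# /etc/sysctl.conf (or /etc/sysctl.d/bridge.conf)
# Do not pass bridged frames to iptables/ip6tables/arptables at all.
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0
```

Apply with `sysctl -p`; with these set to 0, the physdev ACCEPT rule becomes unnecessary.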

(Not applicable to this question, but a side-effect of my bridging configuration appears to be that I can’t ever shut down cleanly. The kernel eventually tells me “unregister_netdevice: waiting for br1 to become free” and I have to hard reset the system. Maybe a sign I’ve done something dumb?)

My answer:

You bridged your VMs to the wrong interface. They should be bridged to the network interface that connects to the outside world (br1 in your case).

Keep in mind that each VM should also have its IP address set in the guest, not on the host.
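
For example, a guest /etc/network/interfaces might look like the following sketch. The guest’s eth0 and eth1 correspond to the br0 and br1 interfaces in the domain XML above; 24.123.138.35 and 192.168.1.2 are hypothetical free addresses in the asker’s subnets, chosen only for illustration:

```shell
# Guest /etc/network/interfaces -- a sketch; the addresses shown
# are assumed free, pick unused ones from your own ranges.

# Private LAN side (attached to br0 on the host)
auto eth0
iface eth0 inet static
    address 192.168.1.2
    netmask 255.255.255.0

# Public side (attached to br1 on the host)
auto eth1
iface eth1 inet static
    address 24.123.138.35
    netmask 255.255.255.248
    gateway 24.123.138.33
```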


View the full question and answer on Server Fault.

This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.