I’m researching setting up a dedicated server to run some virtual private servers on (for my friends, hosting some websites, etc.). However, I am stuck as to how to maintain uptime of the guest operating systems in the face of required downtime of the host machine.
By required downtime I am mainly thinking of security updates to the Linux kernel, which require a reboot to take effect.
I have a couple of ideas:
1. Migrate all of the guests to a second server for the duration of the reboot.
This is possible, but seems clumsy and would dump twice as many VMs on the second server for a while. If I could automate it, it would be more feasible. Downside: it requires two servers and a shared storage solution (read: a third server).
2. Reboot quickly and hope no one notices.
This isn’t the best idea, but it’s simple. If the server failed to come back up, or took more than a minute or so, it could be problematic.
3. As with #2, but somehow save the state of the guests (can virtualization software such as KVM or Xen do that?) so they can pick up where they left off. This is almost a requirement for #2.
4. Maybe if I’m running something very stable like Debian stable or RHEL, the required downtime would be minimal, since fewer kernel security updates would be needed?
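On idea #3: KVM (via libvirt) can indeed dump a running guest's RAM and device state to disk and resume it later; Xen has an equivalent `xl save`/`xl restore`. A minimal sketch using libvirt's `virsh`, assuming a guest named `myguest` (a hypothetical name):

```shell
# Save the guest's RAM and device state to a file; the guest stops running.
virsh save myguest /var/lib/libvirt/save/myguest.state

# ... reboot the host here ...

# Resume the guest exactly where it left off.
virsh restore /var/lib/libvirt/save/myguest.state

# Alternatively, "managedsave" keeps the state in libvirt's own directory,
# and the guest resumes from it automatically on the next "virsh start":
virsh managedsave myguest
```

One caveat: the guest's clock jumps forward by the length of the outage when it resumes, which some services (clustering, Kerberos, etc.) handle poorly.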
I’m cringing as I write this; surely there is a better way (one that doesn’t require maintenance windows). I’ve used VPSs with uptimes measured in the hundreds of days.
I think that the migration option is the best, but it’s only feasible on a regular basis if I have more than one or two servers.
Enterprise-class virtualization solutions will almost always implement #1 (live migration). If you must have 99.9% uptime (“three nines”) or better, this is your best choice; if you need more nines than that, it’s pretty much your only choice.
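As a concrete sketch of #1 with KVM/libvirt: a guest can be live-migrated between two hosts that share its disk image, so only the RAM contents move over the wire and the guest stays up throughout. Guest and host names below are hypothetical:

```shell
# On host1: live-migrate "myguest" to host2 over SSH.
# Both hosts must see the same disk image (e.g. on NFS or iSCSI).
virsh migrate --live myguest qemu+ssh://host2/system

# Patch and reboot host1, then (on host2) migrate the guest back:
virsh migrate --live myguest qemu+ssh://host1/system
```

This is exactly why the shared-storage "third server" in your option #1 tends to be unavoidable: without shared storage the disk would have to be copied too, and the migration stops being quick or transparent.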
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.