Up until now we’ve had FreePBX servers (primary and backup) on our LAN, connected to our VoIP providers’ gateways, and things have been fine.
Now we’re adding a second office in a different location, so we need to start thinking about colocating our FreePBX servers with our ISPs, rather than having them in a particular office. We can’t have a situation where both offices go down just because one office has a connectivity problem.
As I don’t really trust either of the two ISPs to run a very reliable colocation service (it’s a duopoly here), I would ideally like to have one server colocated with each ISP, and configure them with High Availability. I believe I can make it work from a networking standpoint: I would have VPN already set up before the FreePBX install such that both servers could see each other on the same network segment (the FreePBX instances would be virtualized).
But I have no idea about the latency and bandwidth requirements for HA. I guess connectivity between the servers would be in the 40-60 ms latency range, with 10-20 Mbps of bandwidth. Is this viable, or am I better off going with a primary and backup server (one on each ISP) and backing up two or three times a day?
You cannot run HA over the public Internet, and I’m not 100% sure the HA module supports multiple geographic locations, due to how it handles monitoring. Preston’s suggestion of the warm spare is the best option for what you are looking to do.
That being said, you will have two PBXes with two IPs. You will need to register both IPs with your providers for inbound/outbound calling. Your end users will need “failover” accounts on their phones so they can register to the backup when needed. And since it’s a warm spare, A backs up to B and B restores that backup immediately, so it becomes the running configuration for B. When A goes down and you get voicemail or other traffic on B, that is going to be outside the normal reporting and voicemail storage that A would normally handle. Ergo, you will have voicemail (if any are left) on B during A’s downtime, and you will need to move it back to A (and hope it doesn’t overwrite any new voicemail with the same file name) so the users can have access to it when A comes back up.
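One way to reduce the overwrite risk when moving voicemail from B back to A is to copy only files that don’t already exist at the destination, and to rename collisions instead of clobbering them. A minimal sketch of that idea — the paths, the `.fromB` suffix, and the directory layout are my own placeholders, not how FreePBX/Asterisk necessarily stores voicemail:

```python
import shutil
from pathlib import Path

def merge_voicemail(src: Path, dst: Path) -> list[str]:
    """Copy voicemail files from src (server B) into dst (server A),
    never overwriting anything on A; a file name that exists on both
    servers is kept on A under a '.fromB' suffix instead."""
    copied = []
    for f in sorted(src.rglob("*")):
        if not f.is_file():
            continue
        rel = f.relative_to(src)
        target = dst / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.exists():
            # Same file name on both servers: keep both copies.
            target = target.with_name(target.name + ".fromB")
        if not target.exists():  # if even the suffixed name exists, skip it
            shutil.copy2(f, target)
            copied.append(str(rel))
    return copied
```

The same never-overwrite behavior can be had with `rsync --ignore-existing`, but that silently drops the colliding files rather than keeping both.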
Just some things to consider. Be aware, that’s not an exhaustive list.
This is still on the wish list.
OVH has vRack, but it’s not available between datacenters yet; I would expect a beta version in the next few months. With vRack you get a private network, like having a network switch connecting all your devices together in a remote cloud environment.
You’re confusing this. A private network is fine, but you have to get your public IP between the two locations. You said you want to colocate with each ISP. If that is the case, you will need your own BGP space so each ISP can route your ASN, and thus your IP can float between them.
In the case of the private network with vRack, there’s no guarantee that the connection between the DCs is point-to-point; your traffic could take a few hops in between. Again, you will have to route your public IP between the two locations. vRack might offer you a private ASN within their network, but that also means the two DCs have to be set up with BGP routes between them as well.
Generally, colos will do private ASNs, but only within that one DC. While the private-network side works fine state to state, the DC in Texas has its own public space and the DC in New York has its own public space, because the two connect to different backbone nodes, and they’d need their upstreams to do all of this for them as well.
I think the idea behind HA is to move the IP address that the telephones connect to between the two servers. The servers need two network cards: one for HA to communicate over (i.e., the vRack) and one for the Internet, carrying the floating IP address you move between the servers.
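The floating-IP mechanic described above boils down to: monitor the peer over the private link, then claim or release the shared address accordingly. A rough sketch of just the decision logic — the interface name, addresses, and ping health check are made-up placeholders, and real deployments use keepalived/VRRP or Pacemaker rather than a hand-rolled script. Note the point made elsewhere in this thread still stands: this only helps if the floating address can actually be routed to whichever DC currently holds it.

```python
import subprocess

FLOATING_IP = "203.0.113.10/24"   # placeholder: public address the phones register to
IFACE = "eth0"                    # placeholder: Internet-facing interface

def peer_alive(peer_private_ip: str) -> bool:
    """Health check over the private (vRack-style) link: a single ping."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", "1", peer_private_ip],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def decide(i_am_primary: bool, peer_ok: bool, i_hold_vip: bool) -> str:
    """What should this node do with the floating IP right now?"""
    if i_am_primary or not peer_ok:
        return "hold" if i_hold_vip else "claim"   # take (or keep) the address
    return "release" if i_hold_vip else "none"     # primary is healthy; stand down

def apply(action: str) -> None:
    # Claiming/releasing is just adding or deleting the address (needs root).
    # A real setup would also send gratuitous ARP / update BGP so upstream
    # routers learn where the address now lives.
    if action == "claim":
        subprocess.run(["ip", "addr", "add", FLOATING_IP, "dev", IFACE], check=True)
    elif action == "release":
        subprocess.run(["ip", "addr", "del", FLOATING_IP, "dev", IFACE], check=True)
```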
If you use VMs on CentOS you have HA there too, very similar to the way FreePBX and the SBC use this feature.
vRack is an open SDN project that will soon allow you to move IP addresses between datacenters. I’m waiting for the Virginia datacenter to open so I can test this. When I hosted with CyberLink, they had no way to support this between Milwaukee and Phoenix.
My solution is to keep all my VMs raw, make a copy of each VM once a day to a location shared by all my servers, and use DNS to address my servers with a low TTL.
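The raw-VM-plus-DNS approach can be as simple as a nightly copy job. A minimal sketch of that job — the file names, dated-filename convention, and retention count are my own assumptions, not from this thread:

```python
import shutil
from datetime import date
from pathlib import Path

def nightly_copy(vm_image: Path, share: Path, keep: int = 3) -> Path:
    """Copy a raw VM image to the shared location under a dated name,
    then prune all but the newest `keep` copies."""
    share.mkdir(parents=True, exist_ok=True)
    dest = share / f"{vm_image.stem}-{date.today():%Y%m%d}{vm_image.suffix}"
    tmp = dest.with_suffix(dest.suffix + ".part")  # copy to a temp name first,
    shutil.copy2(vm_image, tmp)                    # so a half-finished copy is obvious
    tmp.rename(dest)
    # Dated names sort chronologically, so pruning is just "drop the oldest".
    backups = sorted(share.glob(f"{vm_image.stem}-*{vm_image.suffix}"))
    for old in backups[:-keep]:
        old.unlink()
    return dest
```

For the DNS side, a short TTL (say 60 seconds) on the record the phones resolve is what keeps the manual cutover fast; with the default long TTLs, clients could keep resolving the dead server for hours.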
I have tested many things, but found nothing I like for failing over across datacenters.
Many thanks for all the excellent replies! I was definitely oversimplifying the network implications in my mind - VPN wouldn’t be able to do what we need for HA to work, all the other latency / connectivity issues aside.
So while HA is a no-go, there are obviously many ways to skin the failover cat (primary/backup, warm spare, etc.), so I’ll have to figure out what works best for us. I like the raw VM idea, where the whole image is just copied once or twice per day to a backup site; then we can manually re-route as needed via DNS on the LAN.
We’ve used our backup server on our LAN a couple of times in the last few years (warm spare, just change the IP), and the biggest pain is that when we switched back to the main server, we lost everything that happened while we were running on the backup. I was too nervous about restoring from the backup server back to the main server, so I took the data loss instead. Is it safe to restore from the backup server to the main server once the main server is up and running again?