To VM or not?

Hi guys,

I’m currently looking at rolling out FreePBX; it seems like a great system from what I’ve managed to do so far. I’ve got a newish, low-powered rig with a 30 GB SSD. It draws about 30 watts, so it’s perfect for this. Now, do I run this within XenServer (or something else like that) and virtualise it, or do I just run it on the rig directly and have it back up through FreePBX? Also, once it’s set up, can I more or less leave it to it?
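(For what it’s worth, “30 watts” is already a rate of draw, and it really is cheap to leave running 24/7. A quick back-of-envelope, with a purely illustrative $0.15/kWh tariff:)

```python
# Rough running cost of a ~30 W always-on box.
# The $0.15/kWh tariff is illustrative, not from the thread.
watts = 30
hours_per_year = 24 * 365            # 8760 hours
kwh_per_year = watts * hours_per_year / 1000
cost_per_kwh = 0.15
annual_cost = kwh_per_year * cost_per_kwh
print(f"{kwh_per_year:.1f} kWh/year, ~${annual_cost:.2f}/year")
```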


For the sake of simplicity, I’d load it on bare metal. Adding virtualization can introduce problems (like memory usage, hard drive issues, and Ethernet port oddness) that can make the system seem unstable.

Having said that, there are advantages in using VMs. You can run a “hot spare” of your VM in some virtualization systems, as well as allowing you to utilize your resources in a manageable way if you are already using virtualization.

Personally (with biases exposed), I think that virtualization is an unnecessary level of complication that seldom provides a real advantage. Most of the advantages people cite are subjective, not backed by objective data. If, on the other hand, you want to virtualize, knock yourself out.

A better solution (if you are intent on virtualization) would be to go with an installation “in the cloud” (pronounced “other people’s hardware”) like Sangoma’s hosted solution. This way, you get the disadvantages of virtualization without even the consolation of having control over them.

There are near-zero reasons for any workload not to be virtualized today, PBX or otherwise.

If you have to ask the question, the answer is, it should be virtualized.

Places with requirements that actually mean virtualization cannot happen for technical reasons do not need to ask the question. They already have the technical knowledge.

IMHO, why virtualize a single system? If you are going to run a hot spare and other systems on this single piece of server hardware, then virtualization can be the way to go.

Keep It Simple.

Well, based on the comment of “low-powered-ish”, the fact that no CPU or RAM details were given, and (unless there was a typo) a 30 GB SSD: while I’m 100% on the VM train, why is anyone even saying “yeah, VM it” with this setup? What kind of VM host is this going to be?

A box with a 30 GB SSD probably comes paired with a 2–4 core CPU and maybe 4–8 GB of RAM. Come on, those aren’t the resources for a VM host. Those are the resources you would give a single VM instance on a VM host.

@baldog Your best option is to run this on bare metal, because this system, based on the information so far, isn’t worth making a VM host: most of the needed resources would go to the host itself, and you would be left with a pretty thinly resourced VM instance for the PBX.

I mean, the standard OSes take a few GB of space to begin with. Once the host OS and the single VM guest OS are installed, you’re left with almost no disk space for anything of substance.

To Virtualize or not to virtualize… that is the question.
Whether 'tis nobler in the mind to suffer
the slings and arrows of virtualization
Or to take arms against a sea of troubles…
(Sorry… got carried away)
Here are things to consider in answer to your question:

  1. How are you connecting? I know that in our situation, virtualization wouldn’t be possible because our PRI connects into a TwinStar box, which then connects to two physical boxes for automatic failover.
  2. Are you virtualizing already? If you’re thinking of building a complete server just to virtualize one box, that would be a waste of money and resources.
  3. Networking: I guess if I were using SIP trunks for my external connection and wanted to virtualize, I’d make sure that my VM had a dedicated physical network adapter for the SIP trunks, just so there wasn’t an issue with network traffic and load.
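On a KVM/libvirt host, point 3 can be done with a “direct” (macvtap) interface so the guest effectively owns one physical NIC. A sketch, where `eth1` is a placeholder for whichever adapter you reserve for SIP traffic:

```
<!-- Fragment of a libvirt domain XML: dedicate a physical NIC to the guest.
     "eth1" is a placeholder for the adapter reserved for SIP trunks. -->
<interface type='direct'>
  <source dev='eth1' mode='passthrough'/>
  <model type='virtio'/>
</interface>
```

Other hypervisors have equivalents (a dedicated vSwitch with one uplink on ESXi, for instance); the point is just to keep SIP traffic off the shared adapter.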

The VM idea was that, if we went down that route, we’d place it on our Windows server. I do lean towards running it as is on bare metal, and I think that may be the way to go for now.

Thanks for the replies you’ve helped make my mind up.

Why would you give up all the benefits of virtualization just to make life harder? Now you have no way to back things up easily, or restore easily, or modify the specs easily, etc.

I was serious when I stated near zero reasons not to. Going physical when you have an existing virtualization platform in place is actively sabotaging your infrastructure.

2 vCPU and some RAM based on your needs and you are done.
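For scale, a PBX guest of that size on something like Proxmox is a handful of lines. The VMID, storage name, and MAC address below are illustrative placeholders, not values from the thread:

```
# /etc/pve/qemu-server/100.conf -- illustrative Proxmox guest definition
name: freepbx
ostype: l26
cores: 2
memory: 4096
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
scsi0: local-lvm:vm-100-disk-0,size=30G
```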



We have a heck of a VM environment here…
3 hosts, 1TB of RAM each… 48 processors each…
That being said, there’s still no clean and easy way that I’m aware of to interface with a PRI from a VM.

If you have any information or links on how to do this, I’d love it; the potential of vMotion and HA that VMware offers is appealing.

Oh… and our phone network is a completely different physical network from our computer network… so I would have to take that into consideration as well… although our VM hosts have 8 NICs…

Just install DAHDI on the host and the VMs and use “dynamic ethernet” (TDMoE) between them.


But I have a physical PRI coming in… I know that there are PRI cards, but that doesn’t work in a VM environment…

I’m just trying to visualize the whole connection… unless there is a TDMoE to PRI box somewhere?

And how do you think that would work with my virtual modems?

It looks like maybe something like this would work?

Sangoma has a line of Vega gateways that can interface with PRI as well as most other PSTN circuits.


Incorrect. TDMoE has always been ‘baked into’ DAHDI, and it works very well in virtualization environments. Install as many PRI cards as you want on the host and set up layer 2 (Ethernet spans).
On the VMs, install DAHDI and just match the MAC address of the host and the span number. That’s all that is needed; treat the PRIs on the VMs exactly as you would if you had a hardware device on the PCI bus.

If it’s very busy, you might want to add separate virtual networks to isolate the layer 2 traffic.
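A minimal sketch of the VM side in `/etc/dahdi/system.conf` (the interface name, host MAC address, and channel numbers are placeholders, not from the thread):

```
# VM-side /etc/dahdi/system.conf sketch for a TDMoE (dynamic ethernet) span.
# "eth0" and the MAC (the host's) are placeholders.
# Format: dynamic=<driver>,<dev>/<MAC>[/<subaddr>],<numchans>,<timing>
dynamic=eth,eth0/00:50:56:aa:bb:cc/0,24,0

# Then treat the span like a local T1 PRI: 23 bearer channels plus a D-channel.
bchan=1-23
dchan=24
```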


As an additional advantage, if you use Sangoma PRI cards you can set them to “high impedance”, enabling parallel connections to the telco and thus redundancy: if the host hardware fails, the backup host can pick up the traffic. Think corosync here.



I need to talk to you. lol.

OK, not “need”, not at this moment — we still have a couple of years of support left on our Xorcom system — but I’m liking the thought of moving my PBX to our VM hosts. With the vMotion that’s in place, and the fact that it would be snapshotted and backed up, it’s becoming something that I’d really like to have.

My 2 node solution was

2 x BFMachines
8-port Sangoma PRI cards
Proxmox as the virtualization environment
GlusterFS over ZFS as the shared storage
Lots of PBXs as VMs
(Kamailio cluster for SIP trunks)

It worked fine for years, but for the sake of far fewer headaches I moved everything to the cloud. There was a changeover period where the host machines with their PRIs acted as SIP gateways to keep the unported numbers active.

I don’t believe you can even get PRIs over copper any more in my locale.


I’m just curious, but what is the problem with your PRI on a VM?
I run 100% of my systems on VM (ESXi), including Asterisk, including with Digium cards.

It is sweet. Makes the overall system much more reliable and less prone to hardware failure, easier to maintain and upgrade, countless good points.

They were very nice, but unfortunately RedFone are no longer around.

How do you handle vMotion? Or are you not doing that?

PCI passthrough is a variable feast, very dependent on both the VM implementation and the physical hardware. The DAHDI/TDMoE approach always works without problems, as it is just layer 2 traffic.