Migrating 400+ D Series Digium Phones from "Digium Phone Module" to EPM

We are in the process of researching and preparing a migration from FreePBX 15 (distro) to FreePBX 17. We have (excluding ATA devices) 400+ D-Series Digium phones that have been configured/provisioned via the no-longer-supported “Digium Phone Module”. While researching what it will take to migrate to EPM instead, I came across this question/answer in the EPM FAQ section of the documentation (see attached screenshot, URL https://sangomakb.atlassian.net/wiki/spaces/PG/pages/32669947/EPM-DPMA+for+Digium+Phones).

Is this still true? Is there really no way to migrate over to EPM from the “Digium Phone Module” without “re-configuring the same extension freshly”?

I see there is an “import” option in EPM; if we use that to provide the MAC address, extension, etc., will we still need to re-configure each phone manually by touching it?

Our phones are scattered across 40 different locations in 5 different states. If we have to physically touch each phone to configure it, that is going to be a major undertaking.

Lastly, would you recommend doing the migration to EPM while still on FreePBX 15, before the move to FreePBX 17, or after?

There is a lot to process on that wiki page, and it does seem geared toward v15. One general initial recommendation is to work things out in a testing setup on a separate network as much as possible, using as close a mirror of your current systems as you can get, then start small in production, e.g. one phone. If that goes well for all your phones on v15, then install a new v17 on new hardware. I have not tried this particular path myself, however, and YMMV.

Hello,
I am migrating from FreePBX 14 to 17, and have older Digium D50 and D70 phones to bring along with me.

The first thing I did was create a virtual machine on a separate network and make a FreePBX 17 installation from scratch. I activated it, but did not purchase updates to my commercial modules, as I want to test the basics and make sure they work properly before moving onwards. I suggest you do the same, but if you wish to use the same IP addresses as your live network, be sure there is an air gap so that you do not introduce duplicates on the live network. With careful virtual machine planning, you should be able to simulate all the local networks or VLANs.

Next, I took one of my backups from FreePBX 14 and restored it into FreePBX 17. This is going to be a messy step, as I am also going from CHAN_SIP to PJSIP with this update, and the restore scripts will translate the extensions to PJSIP.
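If it helps, the restore can also be kicked off from the command line on the new box; something along these lines should do it (a sketch only - it assumes the FreePBX 15+/17 Backup & Restore module CLI and a made-up path for the copied FreePBX 14 backup file, and the exact flags can differ between module versions):

  # restore the legacy backup file copied onto the FreePBX 17 server (path is illustrative)
  fwconsole backup --restore=/var/spool/asterisk/backup/fpbx14-full.tar.gz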

Depending on your backup/restore combination, you might bring along your trunks to the external world. We use SIPSTATION, and you will need to prevent a dual registration on those trunks. You can either make sure there is no route from the trunks to the outside world, or do what I did and define trunk1.freepbx.com and trunk2.freepbx.com as 127.0.0.1 so that the test server doesn’t mingle with the real-world trunk registrations. You will see errors in the logs about these registrations… ignore them. You don’t need SIPSTATION to test all your “inner” configurations below your INBOUND ROUTES.
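That override is just a couple of /etc/hosts entries on the test server (the hostnames are the SIPSTATION ones mentioned above; substitute whatever your own trunks register to):

  # /etc/hosts on the isolated test box - blackhole the trunk registrations
  127.0.0.1   trunk1.freepbx.com
  127.0.0.1   trunk2.freepbx.com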

In my tests so far on FreePBX 17, I have found:

  • PJSIP works with D70 and D50 phones ← my largest concern
  • My EPM templates came over 14 → 17, but if I remove certain phone types, I can get a Whoops error. I still need to figure out how to get past the Whoops
  • The dial plan seems intact. I have not performed exhaustive testing yet
  • Not related to FreePBX, but the older CentOS-based ISO from Sangoma used to set up soft RAID dynamically. I’ll need to explore how Debian handles RAID. My ultimate destination will be a physical box whose BIOS doesn’t support RAID, so I’m looking at soft RAID
  • Sangoma S500 and S700 series: I was able to go to the web interface and AutoProvision the phones to easily move them to the new box. I did not have Sangoma’s “one touch” setup configured, where the phone hits the cloud looking for provisioning information.
  • Digium phones were not as easy - I had to touch each phone to get them to play nice. Then again, in the past I was able to do things without DHCP Option 66; this time around I needed to use Option 66 for the Digiums to latch on (see the DHCP sketch after this list)
  • Digium has a special area to configure – EndPoint Manager → Global Settings – where I had to enable DPMA and set it up
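On the Option 66 piece, the DHCP side is only a couple of lines; a rough sketch assuming ISC dhcpd (the subnet and server IP are placeholders, and it is worth verifying which address/URL format your Digium firmware expects here):

  # dhcpd.conf fragment - option 66 is the TFTP/provisioning server name
  subnet 10.10.10.0 netmask 255.255.255.0 {
    range 10.10.10.100 10.10.10.200;
    option tftp-server-name "10.10.10.5";   # the new FreePBX/DPMA server
  }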

This effort, however, has my Sangoma and Digium phones talking to each other. My next steps are to clean up the templates and then exhaustively test the dial plan. All this work in the virtual world builds confidence that the final physical machine setup will be easier and more productive.

I would never recommend using the same hardware to upgrade a system. Phones are often mission critical, and it is a nice feeling to know that if the new server has issues, I can remove it and put the legacy one back in. I also agree that you should try to migrate your phones in small batches. This might require you to set up an IAX connection bridging the two systems together.
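If you do bridge them, it is just a matching pair of IAX2 trunks, one on each box; the PEER details end up looking roughly like this (host and credentials are placeholders, and the other server gets the mirror image plus outbound routes for the extension ranges that live on the far side):

  ; IAX2 trunk PEER details pointing at the other FreePBX server
  type=friend
  host=192.0.2.10
  username=bridge
  secret=CHANGEME
  qualify=yes
  trunk=yes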

Good hunting, and test test test.

Christian


Thank you for your very detailed response. I do have a VM set up for testing FreePBX 17, currently on a completely separate network. I haven’t done a “restore” to it yet; I have just been setting up test extensions, etc. to get a feel for how well (or not) the phones and migration will work.

I have successfully gotten a D60 and a D50 provisioned on this new test server, but I did those as completely new extensions and configured them manually by factory resetting each phone and manually typing in the server IP address on them.

So it sounds like you were already using EPM for your Digium phones on FreePBX 14? If that’s the case, did you ever do a migration from the “Digium Phones Module” to EPM? Do you remember how you accomplished that? Just set them up like new extensions in EPM, maybe?

Based on your response, it’s feeling like we are going to have to touch every one of our phones across our five-state footprint in order to migrate them over. Short of our phones not being supported at all, that’s pretty close to the worst-case scenario for us.

Do you want to boot off a soft RAID, or do you want to boot the OS off something like a flash disk, then mount a soft RAID and install the system on it?

If you want to boot off a soft RAID, then Debian 12’s installer is not going to work for you. The best installer out there for doing this is Ubuntu Server’s. The character-mode Ubuntu Server installer allows you to configure your boot disk into a RAID mirror, and in fact you can even configure it into a mirror with a hot spare (assuming you have 3 identically sized disks).

With older Ubuntu desktop (and this probably worked with Debian too) there used to be a hack where you could boot the desktop GUI installer, format each disk EXT, abort the install, then boot with a “boot-repair” USB stick

Boot-Repair - Community Help Wiki (ubuntu.com)

and go in and set the disks up in a mirrored config. Then during the GUI install you would drop to the command line, load the md modules, and run some mdadm commands to insert the mirror into GRUB and into the installed OS.

But in recent versions this hack no longer works; boot-repair goes ballistic if you have an EFI boot BIOS and won’t allow you to update the partitions - it locks them.

The only Linux/Debian-based installer I have found that still sets up a bootable mirror properly is the Ubuntu Server one.

Also, I have NOT had any luck with dmraid anymore on the latest Debian releases. That was for “RAID BIOSes” where you could configure the RAID in the BIOS, but the latest installers don’t seem to even probe for or support it anymore.

To clarify, software RAID for /boot/efi partition is a known problem in UEFI architecture, but there may be workarounds besides booting into legacy mode.
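One common workaround is to give each disk its own ESP, keep the contents in sync by hand (or from a hook script), and register a firmware boot entry for each copy. A rough sketch with purely illustrative device names and mount points:

  # copy the live ESP onto the second disk's ESP (mounted at /boot/efi2 here)
  rsync -a --delete /boot/efi/ /boot/efi2/
  # add an NVRAM boot entry pointing at the second disk's copy of GRUB
  efibootmgr -c -d /dev/sdb -p 1 -L "debian (disk 2)" -l '\EFI\debian\grubx64.efi'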

Use ZFS and RTFM

https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

Well, first of all, it’s:

Debian Bookworm Root on ZFS — OpenZFS documentation

not the link you put up.

Secondly, that probably isn’t going to work on most machines out there with UEFI, because it’s not enough to just create a UEFI partition with

sgdisk -n2:1M:+512M -t2:EF00 $DISK

the machine’s firmware must also be updated to see the UEFI partition. The Ubuntu installer does this, at least it has on all of the machines I’ve run it on. I don’t see anything in the Debian instructions that does. It MAY work if you are willing to futz with it enough - but the litmus test of any of these setups is: Can You Shut Down The System, Remove A Hard Disk, And Turn On The System And Have It Boot?
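For what it’s worth, “updating the firmware” here usually boils down to an efibootmgr call that writes an NVRAM boot entry for the new ESP, something like the line below (the partition number matches the sgdisk line above; the label and loader path are illustrative and depend on the distro):

  efibootmgr -c -d $DISK -p 2 -L "debian" -l '\EFI\debian\grubx64.efi'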

I will note that the Ubuntu mdadm setup allows for a hot spare in the mirror. So, if you set up a hot spare, AND a disk fails but does NOT take out the SATA bus, in theory the md software will switch over to the hot spare disk and the server keeps running. I didn’t see anything like that in those ZFS instructions.
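Outside the installer, the equivalent mdadm incantation is short (device names are illustrative, and this is not necessarily the exact layout the Ubuntu installer builds):

  # two-disk RAID1 with a third disk as hot spare
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1
  cat /proc/mdstat    # the spare shows up flagged (S); md pulls it in on a failure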

There is also something to be said for having the “poor man’s RAID mirror” boot setup INTEGRATED WITH THE LINUX INSTALLER, so that all it takes is 5 minutes in the advanced disk partitioning of the character-mode, curses-based installer (which you can just tab around to the different selections in), instead of that mess of instructions for forklifting ZFS into the install.

In reality, most of the time I’ve dealt with these kinds of “cheap RAID” systems, one of two things happens:

  1. A disk fails, shorts internally on the power or SATA bus, and the system now won’t see EITHER disk. You then have to unplug each successive disk until you find the bad one. Although USUALLY it’s disk 0, because in these mirrors, while WRITES go to both disks, READS generally only go to the FIRST disk, so that disk gets more use. The remaining disk has a copy of the system, but you almost never have a duplicate hard disk on hand, so you have to boot from the good disk - either by moving it from SATA position 1 to 0, or by changing the BIOS to boot off the second disk.

  2. A disk fails with bad sectors or crashes, and the mirror then unmirrors, but the failing disk does not take out the SATA bus. If the server does NOT reboot, you are OK. If it does, it may not come back up if the failing disk is the boot disk.

ALL of these “cheap raid” setups really have to be regarded as the following:

This is a last-ditch, emergency, cover-your-ass thing. 90% of the time the disk is buried in the server chassis, so you have to shut the server down to replace the failed disk. Even if it’s in a hot-swap tray, because it’s connected to the same SATA bus, a removal and replacement will throw electronic garbage onto the SATA bus and the OS will panic.

The value of this setup is that if a disk takes a dump and goes offline at 3am, the server won’t shut down. Obviously you need some kind of notification telling you that the mirror is having a problem, but as long as the server stays up running on the surviving disk, you can take your time: order 2 brand-new disks, schedule downtime, make backups, then shut the server off and replace ALL the disks - because by then, the disk that hasn’t died is very close to dying.
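For the notification piece, mdadm itself has a monitor mode that will mail you when an array degrades, which may be all you need (the address is illustrative and it assumes a working local MTA):

  # /etc/mdadm/mdadm.conf
  MAILADDR alerts@example.com
  # then run (or let the distro's mdadm service run) the monitor:
  mdadm --monitor --scan --daemonise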

I used to do this very successfully with the FreeBSD IDE/SATA wd driver, which had a poor man’s cheap RAID integrated into it. It works OK with the md and dmraid drivers too, once more because the mirror function is separated from the filesystem driver. From the filesystem’s POV it’s all just one big disk; the md driver handles the ickiness of syncing one disk to the other. This is why EFI on that kind of a setup ISN’T a single point of failure - because there’s only one disk, the logical mirror.

In general, a poor man’s cheap RAID mirror is really only usable in a low-cost setup like a lab, or a small business that can afford a couple of days’ downtime if the server dies. For a production system chasing nine 9’s, you really need a hardware RAID card in there, so that if a disk fails you can eject and replace it and have the hardware RAID card rebuild everything without the operating system even being aware of what’s going on.

Well, we are way off the OP’s questions, but regarding the ZFS RAID EFI “solution”, I had to stop RTFM’ing right here:

LUKS encrypts almost everything. The only unencrypted data is the bootloader, kernel, and initrd. The system cannot boot without the passphrase being entered at the console. Performance is good, but LUKS sits underneath ZFS, so if multiple disks (mirror or raidz topologies) are used, the data has to be encrypted once per disk.

Which is out of date for multiple reasons:

  1. the key material for LUKS can be stored on a thumb drive and read automatically for unattended boots - there are multiple slots for different passwords - and this has worked for at least ten (10) years with an unencrypted /boot and an encrypted / root (and some scripting; see the sketch after this list)
  2. since Debian 8 (Jessie) there are enough tools in the GRUB box for FDE including /boot (not tried it myself though): Full disk encryption, including /boot: Unlocking LUKS devices from GRUB
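A rough sketch of item 1 on Debian (device names, the label, and paths are purely illustrative, and the keyscript path is Debian’s stock passdev helper - double-check it on your release):

  # enroll a second key, stored as a file on a USB stick, into the LUKS header
  cryptsetup luksAddKey /dev/sda3 /media/usbkey/root.key
  # /etc/crypttab - read the key from the labeled thumb drive at boot
  sda3_crypt  /dev/sda3  /dev/disk/by-label/USBKEY:/root.key  luks,keyscript=/lib/cryptsetup/scripts/passdev
  # rebuild the initramfs so the keyscript and crypttab change take effect
  update-initramfs -u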
