FreePBX installation stuck

Hi,
I downloaded SNG7-PBX16-64bit-2302-1 and wrote the ISO image to a USB stick. I have two bare-metal computers: PC-1 has a SATA SSD, and PC-2 has an M.2 SSD. I have successfully installed FPBX on PC-1. However, PC-2 is stuck on the first blue graphical screen with the "Copyright 2022 Sangoma Technologies, All rights reserved" notice. It has been stuck on that screen for 30+ minutes.
My best guess is that the installer cannot find the M.2 SSD.

Any suggestions about how to proceed?

Thank you,

M.2 is just a physical form factor. M.2 devices are commonly PCIe or SATA, and PCIe devices may be NVMe.

If you can't see the SSD in the "BIOS", you are out of luck with this drive, unless you can find a BIOS update that supports it. Note that many low-end mini PCs, such as Beelink, support only SATA in their M.2 slot.

If the drive does show up in the BIOS, try the non-graphical install for FreePBX and see whether it shows in lspci. If not, you might check whether Debian 12 sees the drive and if so, consider installing FreePBX 17.
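From that shell (or any live Linux environment), a few generic commands will show whether the kernel sees the drive at all; nothing here is FreePBX-specific:

```
lspci | grep -iE 'nvme|sata|ahci'     # is an NVMe or SATA controller visible on the PCI bus?
lsblk -o NAME,SIZE,MODEL,TRAN         # does the SSD appear as a block device (nvme0n1, sda, ...)?
dmesg | grep -i nvme                  # any kernel messages about NVMe probing?
```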

Stewart,
I did not understand, until your suggestion, that Debian 12/FPBX 17 was a separate installation. I downloaded and installed Debian 12 on the PC-2 hardware with no problem. I then followed the FPBX 17 instructions and successfully installed 17. I am able to get to the FPBX GUI to continue the installation.
Thank you,

More info about the attempted FPBX 16 installation: I let the installer run for over an hour while it seemed to be stuck on the Sangoma screen. When I rebooted PC-2, I pressed F12 to select a device to boot from, and the M.2 NVMe drive said it had CentOS installed. I selected the NVMe drive to boot from, and it stopped at GRUB. That leads me to believe that the problem was video-driver related. I don't think the installer had a compatible DisplayPort/VGA video driver; the installer was trying to do its job, but I couldn't see any questions it may have been asking.
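If I ever retry the FPBX 16 installer, I would probably edit the boot line first to rule the video driver in or out. This is just a guess based on the standard CentOS 7/Anaconda boot options, which I assume the SNG7 image uses:

```
# At the installer boot menu, press Tab (legacy BIOS) or 'e' (UEFI) on the
# install entry and append to the kernel line:
#   nomodeset   - skip kernel mode setting / problematic video drivers
#   inst.text   - force the text-mode installer instead of the GUI
vmlinuz ... quiet nomodeset inst.text
```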
Just FYI

It's probably NOT a great idea to install on an SSD, due to the large number of writes that happen with voicemails being saved on a phone system.

Yeah, I am sure a ton of people will say they've done it.

I just know that after 3 years, every single Untangle router system I've installed that came straight from the factory with a SATA SSD in it had blown up. Replacing the SSD with an actual SATA drive - well, many of those are still in operation a decade later.

Voicemails are insignificant on most systems. Some FreePBX systems, IMO not configured properly, generate huge log files. There are posts on this board complaining of logs filling the device and causing a crash.

In any case, reputable SSD vendors (Samsung, Sandisk, Crucial, etc.) have a reliable TBW (terabytes written) rating for each device. Fewer than 5% (probably less than 1%) will fail before the TBW limit is reached. smartctl and other tools can check how much your system has actually written, so you can estimate whether SSD wearout would be a problem in the expected lifetime of the system. Sorry, I know nothing about Untangle or how much it writes to disk.
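For example (the exact attribute names vary by vendor and by whether the drive is NVMe or SATA):

```
# NVMe: "Data Units Written" is counted in units of 512,000 bytes
sudo smartctl -a /dev/nvme0 | grep -i 'data units written'

# SATA SSD: look for Total_LBAs_Written (multiply by the sector size) or a
# vendor-specific wear/TBW attribute
sudo smartctl -A /dev/sda | grep -iE 'total_lbas_written|wear'
```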

I've read all of the technical specs. All I can report is what I have seen. The reality I have seen is that approximately 75-80% of higher-quality hard drives pressed into use as server drives (a typical Western Digital Black or other 7200 rpm 3.5" disk), if installed in a chassis with adequate airflow over the disk drive, will VASTLY outlast their warranty period. I have 15-year-old desktop drives used as server drives, for example, that show no ill effects.

"Standard" lower-quality 3.5" drives, the 5400 rpm stuff, used in cases with inadequate airflow, typically show a 50% failure rate within 3 years of the end of the warranty.

Don't even speak of the 2.5" laptop mag media drives. You are lucky to make it to the end of the warranty with those, even if they are in cases with plenty of airflow. However, most hardware gear designed to use those drives does NOT have adequate airflow cooling.

And my experience with SSDs is that they CLOSELY track their manufacturer's warranty period. Computer gear makers have gotten very, very good at engineering cheaper products to fail right after the warranty, and not everyone has $500 per drive for enterprise-quality drives.

If you understand the construction of storage media, you will understand why this is. Mag media is subject to increased wear at higher temps, so when the makers design it to fail right after the warranty period, they have to design for the max temp the drive is certified at. That means you can get an enormously longer life if you run the drive cool.

SSDs, on the other hand, are not as affected by heat, so they can be designed to fail right after the warranty expires even at room temp.

But I'm not a drive authority; I'm just reporting what I've seen over 30 years of professional computer management. You do you.

Ted,
Just some comments to fuel the SSD vs. rotating-media conversation. I take the position that with current SSD technology, failures are less frequent than rotating-media failures.
We only have one user with VM enabled, the main receptionist, and due to the nature of the business, callers seldom leave VM. The other sources of writes would be backups and FPBX log files. Backups are directed to an FTP server (which has rotating media). Log files could be a concern, but they are only kept for X number of days and then deleted, so the SSD space would eventually be reused.
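If log growth were ever a worry, retention can be capped with logrotate. This is only a rough sketch with made-up values; the distro likely already ships its own logrotate rules for Asterisk under /etc/logrotate.d/, so check there before adding anything like this:

```
# hypothetical drop-in, e.g. /etc/logrotate.d/asterisk-custom
/var/log/asterisk/full /var/log/asterisk/messages {
    daily
    rotate 7              # keep roughly a week of logs
    compress
    missingok
    notifempty
    postrotate
        /usr/sbin/asterisk -rx 'logger reload' > /dev/null 2>&1 || true
    endscript
}
```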
I have other Linux installations (not related to the PBX) where the PC has an SSD and a rotating HDD. /boot, / (root), and other OS-related directories are on the SSD; /home, /var, and other app-related directories are on the HDD. Maybe a split system can get the best of both worlds.
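A minimal sketch of what that split might look like in /etc/fstab (the UUIDs are placeholders; a real setup would use the values blkid reports):

```
# SSD: OS
UUID=aaaa-1111   /       ext4   defaults,noatime   0 1
UUID=bbbb-2222   /boot   ext4   defaults           0 2
# HDD: write-heavy data
UUID=cccc-3333   /var    ext4   defaults           0 2
UUID=dddd-4444   /home   ext4   defaults           0 2
```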
Thank you,

So do I, notably a few Raspberry Pis around here. But the OS is specifically configured not to swap, and on those systems I always use an external hard disk for anything that writes out log files.

SSDs and EEPROMs have been used in embedded systems for many years; I'm sure every microwave oven in the country has one, not to mention the millions of automobiles out there. But those are all purpose-built installations that do their utmost to avoid frequent writes to the media. A general-purpose Windows or Linux OS does not.

Of course, I extensively use SSDs in laptops (M.2 modules) like everyone else and run Windows on them like everyone else. But I've "fixed" more than a few laptops with lockup issues and other weird issues simply by reformatting them and reinstalling Windows, which, of course, issues writes to all sectors - which allows the internal sector-remapping logic to work. If you lose a sector on a read, you get into a deadly embrace where the OS will never issue a write to a bad sector it has read, and the SSD's internal sector remap will never reallocate the sector until it gets a write.
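For what it's worth, a rough Linux equivalent of that reformat trick is to force a write to every LBA so the controller gets a chance to remap sectors that only fail on read. This is destructive and wipes the drive, and /dev/sdX is a placeholder:

```
# DESTRUCTIVE: overwrites the entire device
sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress conv=fsync
# Afterwards, check whether anything was reallocated or is still pending
sudo smartctl -A /dev/sdX | grep -iE 'realloc|pending'
```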

I just won't do this for a system that has multiple users dependent on it. For those, RAID on mag media is king in my book. It's just not worth the trouble and cost to deal with SSDs and restructuring the filesystem to do a hybrid, even though I know many go this route.

Been using SSDs for many years now. Never had a problem. If you have had lots of problems maybe the equipment used really crappy SSD drives or something. With wear leveling, their lifespan is very predictable in terms of total data written, which you can monitor in the SMART drive stats.

Maybe you don’t understand what I am saying?

“I’ve read all of the technical specs. All I can report is what I have seen.”

I can create pretty reports on how wear leveling is supposed to work, but those do not match up with real life in my experience. I don't select parts for my servers based on whatever hot air the parts vendor wants to blow; I select parts based on what I've seen work in the field, and that is what I recommend other people do as well.

But like I said, anyone is free to do whatever they want. All PBXes generate disk activity for call detail records, logs, and so on. If you have built multiple FreePBXes on SSDs that have hundreds of extensions on them and that have been in service for years without SSD failures, then by all means keep doing it. Of course, it would sure be nice of you to actually state real part numbers for these magical storage devices that I've never seen do this in real life, but whatever. I've already stated what I use - Redundant Arrays of Inexpensive (desktop) Drives. The go-to these days in that market is the Western Digital Red and the Seagate IronWolf NAS drives, as everyone knows. And the secondary market is chock full of used LSI HARDWARE RAID cards, since Dell used to ship those as standard hardware.

Note that if you are really dead set on using SSDs, you definitely shouldn't be using ext4; you should be using F2FS. The process of installing Debian 12 on F2FS is pretty easy: run the Debian installer and let it create an ext4 root partition on the disk. Before the installer starts copying system files to the partition, manually format the root partition with F2FS by invoking the mkfs.f2fs command from the installer's shell, and update the root partition information in the fstab file. Then resume the installation and continue as usual. Samsung developed F2FS specifically for flash-based Linux installs, and it will extend the life of an SSD.
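A rough sketch of those steps from the installer's shell (Ctrl+Alt+F2 usually gets a console), assuming f2fs-tools is available in the installer environment; the partition name is a placeholder:

```
mkfs.f2fs -f /dev/nvme0n1p2      # reformat the root partition the installer created
blkid /dev/nvme0n1p2             # note the new UUID
# Before the first boot, edit /target/etc/fstab so the root entry uses the
# new UUID and fstype "f2fs" instead of "ext4", e.g.:
#   UUID=<new-uuid>  /  f2fs  defaults,noatime  0  1
```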

That just isn't what most of the rest of us are seeing. Wear leveling happens in the firmware of the device's controller, well before the BIOS or the OS ever needs to access the block device. SD cards on Pis will age; M.2 drives on Pis, not so much. NVMe drives are more efficient than plain PCIe ones because of that firmware and, commonly, a local DRAM cache; the M.2 interface is identical.

Maybe convert your 'anecdotal evidence' (otherwise just hearsay?) into a set of reproducible facts and start a class action?

This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.