I am following these instructions EXACTLY as presented:
At the end of the partitioning step, after the disks are written, I hit this problem:
"No EFI partition was found. Go back to the menu and resume partitioning?"
No clue what to do at this point.
Also these instructions do not cover making BOTH drives bootable if one fails.
This is a 400-UCS server.
I have been instructed by Sangoma support to install Debian 12, then FreePBX 17, then upgrade to PBXact.
I was also instructed to install software RAID per the instructions linked above.
While waiting on support to respond to this, I am trying to figure out what needs to be done.
Having an answer worked out here would likely be useful.
NVMe is a known problem on the FreePBX 17 ISO, but this topic is not that problem: per your screenshot, it looks like the stock Debian 12 installer is in use. That stock installer may struggle with RAID1 on NVMe, not sure.
Are you able to try with dual SATA drives during the installation instead? Then connect the NVMe later (post-install) if you need additional storage space.
Not easily able to try this at the moment.
This is a recently shipped to us 400 UCS “appliance”/server.
It shipped with PBXact/FreePBX 16 with dual NVMe software RAID1 configured.
We are trying to get it upgraded to v17 on the hardware as it shipped.
Not sure if there is space/mounting for SATA drives.
We of course desire/expect it to work with the dual NVMe drives as it was shipped.
We also need to move forward on up to date software before putting it into real service.
OK, this is partly my screw-up, in a way.
We had the server open a couple weeks ago to install DAHDI hardware.
And I thought I saw two NVMe connectors on the board.
Didn't give it much mind at the time, but put it all back together, racked it, and all.
I never saw the two SATA drives because they are under a big plate held by 5-6 screws;
you have to remove the screws, then the plate, and THEN you can see the drives and cables.
Grrrr…
Coupled with the small-capacity drives (256 GB) that came on the system (Sangoma 400 UCS),
I had been assuming these were NVMe drives.
So my bad there sorta. Should have looked closer.
So now, back to the same problem, and it's not because of NVMe drives.
Those instructions are BUM as far as I can tell.
I have no clue why it instructs to setup LVM either…
Seems more complicated than it should be, on top of the fact that it does not work.
But that could also just be my lack of LVM experience and an immediate bias to avoid it.
I have never needed it.
Why does this software RAID setup need it, or instruct all of this LVM configuration?
Software RAID can do what software RAID does without LVM just fine, no?
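For what it's worth, plain mdadm RAID1 without any LVM layer is straightforward. A minimal sketch, assuming /dev/sda2 and /dev/sdb2 are the two RAID member partitions (the device names are my assumption, not from the actual installer, so check yours first):

```shell
# Sketch: software RAID1 with no LVM (device names are assumptions).
# Create a RAID1 array from two matching partitions:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put a filesystem directly on the array -- no LVM needed:
mkfs.ext4 /dev/md0

# Record the array so it assembles automatically at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

The Debian installer's manual partitioner can do the equivalent through its "Configure software RAID" menu, formatting the md device directly without creating volume groups on top.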
And apparently they are totally missing setting up EFI partitions for this somehow.
The motherboard's UEFI firmware can't see or boot from a Linux RAID partition directly.
I don’t see where they cover this at all in those instructions.
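For a UEFI boot, each disk needs its own small EFI System Partition that sits outside the RAID, since the firmware only reads plain FAT32. A rough sketch of the layout I'd expect, with device names and sizes as assumptions rather than anything from those instructions:

```shell
# Sketch: partition both disks identically -- a small ESP plus a RAID member
# (device names and sizes are assumptions; this wipes both disks).
for disk in /dev/sda /dev/sdb; do
    parted --script "$disk" \
        mklabel gpt \
        mkpart ESP fat32 1MiB 513MiB \
        set 1 esp on \
        mkpart raid 513MiB 100% \
        set 2 raid on
done

# The ESPs are plain FAT32, one per disk, not mirrored by mdadm:
mkfs.vfat -F 32 /dev/sda1
mkfs.vfat -F 32 /dev/sdb1
```

In the Debian installer's manual partitioner the same thing is done by marking a small partition on each disk as "EFI System Partition" and only using the second partitions as RAID members.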
As mentioned: this is SATA, not NVMe; I was incorrect.
Still fails badly.
Any thoughts on this?
If I click Continue, the installation fails with a fatal error and cannot continue.
Followed the linked instructions above perfectly.
I'm working on figuring out the software RAID and the UEFI ESP partition duplicated on each drive
myself.
So far so good.
It now boots from UEFI with root on the MD device with no problem at all.
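In case it helps anyone else hitting this: the general approach of an unmirrored ESP on each drive, kept in sync so either disk can boot, can be sketched like this. The device names and the shim loader path are assumptions on my part, not taken from the actual install:

```shell
# Sketch: duplicate the EFI System Partition to the second drive so either
# disk can boot if the other fails (device names are assumptions;
# partition 1 is assumed to be the ESP on each disk).

# Copy the installed ESP from the first drive to the second:
dd if=/dev/sda1 of=/dev/sdb1 bs=1M

# Register a firmware boot entry for the second drive's ESP
# (loader path assumes a standard Debian secure-boot shim):
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "debian (disk 2)" --loader '\EFI\debian\shimx64.efi'
```

One caveat: after any GRUB update that rewrites the first ESP, the copy needs refreshing. Some people instead mirror the ESPs with an mdadm array using metadata version 1.0 (metadata at the end, so the firmware still sees plain FAT32), but that approach has its own failure modes.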
I think those instructions are absolute garbage and do not at all apply to our 400 UCS.
It looks like somebody wrote that on a virtual machine and used LVM out of personal preference or familiarity (you can see it in the screenshots).