FreePBX 14 not booting

Sorvani, I tried to restore from backup. This is what I got: “Invalid backup for or undefined error”. Frustrating that I’ve been backing up regularly (I set up automatic storage to a FreeNAS share every week) and the system still fails me. Therefore, I hope Dicko’s suggestion of resyncing the RAID1 works.

Systems like this should be set and forget.

UPDATE: I fixed this problem by:

I ran chmod 777 on all the backup files and it worked. Now let’s see if the restore is successful.
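(For reference, a minimal sketch of that permission fix, assuming the backups sit in FreePBX’s default local path /var/spool/asterisk/backup; something more restrictive than 777, owned by the asterisk user, is usually enough:)

    # assumed default FreePBX backup location; adjust to where your backups actually live
    chown -R asterisk:asterisk /var/spool/asterisk/backup
    find /var/spool/asterisk/backup -type f -exec chmod 644 {} +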

Disaster recovery can never be set and forget.

An untested recovery process is worse than no recovery process, because it gives people a false sense of security.

Sorvani,

Do you have a suggestion besides setting up a mirror/spare to test on, and how regularly should I be testing?

Dicko, I’m sorry if I pissed you off. I’m a logical person, so if I broke it, so be it. No one touched it besides me. I suggest you not read too much into it, as I’m not dumber than you.

I agree, “dumb is as dumb does”, and in my life I have probably done more dumb things than most others, but that is likely because I’m older than most :wink:


Thank you for your steps. I followed them but they didn’t work; for each of the --add commands, I get “mdadm: stat failed for /dev/sdb3: no such file or directory”.
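(That error means the kernel doesn’t see a /dev/sdb3 partition at all, so there is nothing for mdadm to add yet. A quick, hedged check, assuming the device names used in this thread:)

    # does sdb have any partitions, and what do the arrays currently look like?
    lsblk /dev/sdb
    sfdisk -l /dev/sdb
    cat /proc/mdstat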

Sounds good, wiser and older :grinning:! Cheers.

I was able to restore into a fresh install as per your suggestion, and all the files were there. What I would still love is to get my old RAID1 configuration back, though. But the backup and restore worked. The only thing I had to do was use the Wizard to recreate the dialing patterns.

Then I guess your

sfdisk -d /dev/sda | sfdisk /dev/sdb

didn’t work either; if it had, and /dev/sdb were functional, the --add commands likely would have worked.
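(For completeness, a sketch of that clone-and-re-add sequence with the corrected command name; the md device numbers and the partition-to-array mapping below are assumptions, so check cat /proc/mdstat for the real ones first:)

    # copy the partition table from the healthy disk to sdb (this wipes sdb's partition table)
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    partprobe /dev/sdb
    # re-add each member partition to its corresponding array (example mapping only)
    mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm --manage /dev/md1 --add /dev/sdb2
    mdadm --manage /dev/md2 --add /dev/sdb3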

Since I’ve confirmed that the backup works, the best way forward would be to follow Sorvani’s advice. I will rebuild a new server with a new RAID1, then restore a backup. That will solve everything.
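(The FreePBX Distro installer normally builds the boot-time RAID1 for you, so this is only an illustrative sketch of what creating a fresh mdadm mirror looks like; the device and partition names are assumptions:)

    # create a new two-disk RAID1 array from two matching partitions (names assumed)
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    # watch the initial sync finish before restoring anything onto it
    watch cat /proc/mdstat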

Lessons:

  1. FreePBX isn’t like FreeNAS, in the sense that the order of the drives matters.
  2. Backups in V14 work differently than they did in V13.
  3. RAID1 arrays may get corrupted even on a system that’s cared for, so regular checks are needed (see the sketch after this list).
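(A hedged example of what a regular check could look like, assuming a single array at /dev/md0; many distros already ship a similar periodic scrub with the mdadm/mdmonitor packages:)

    # report array health and any failed or missing members
    mdadm --detail /dev/md0
    # kick off a consistency check (a "scrub"); progress shows up in /proc/mdstat
    echo check > /sys/block/md0/md/sync_action
    # have mdadm mail you when an array degrades (daemon mode)
    mdadm --monitor --scan --daemonise --mail=root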

Thanks, Dicko and Sorvani, for sharing your knowledge.

All three of your lessons are the wrong takeaways from this experience.

Additionally, redoing exactly what just failed will not “solve everything”.

What would you suggest I do differently, Sorvani?
By that I mean that the system worked well before the shutdown. I suspect the RAID array had been having issues for a while. Yet it’s really weird that when I ran cat /proc/mdstat, it didn’t show the RAID1 as degraded (see prior screenshot), even though sdb’s partitions were clearly gone!
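(One way to see more than /proc/mdstat shows is to ask mdadm for per-member state directly; a hedged sketch, assuming the array and member names from this thread:)

    # per-array view: shows each member as active, faulty, or removed
    mdadm --detail /dev/md2
    # per-disk view: reads the RAID superblock straight off the partition (fails if it's gone)
    mdadm --examine /dev/sdb3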

This isn’t intended to sound snobbish or condescending, but after 30 years of managing servers I don’t think I’ve ever seen a case where saving money by using RAID on anything other than a “real” RAID controller (no software assist) wound up saving money in the long run, unless your time isn’t worth anything. There are plenty of inexpensive … not great, but serviceable SATA RAID cards that work well with stock Linux kernels. Avago/LSI has one (the 9266-4i) that’s about $130 street price and supports four SATA/SAS drives. You’ve probably already spent more than $130 worth of time messing with this.

So anyway, my $.02 is use hardware raid for anything that’s not strictly a test implementation. Software RAID tends to be a mess unless you really know how the OS deals with it, and even then it still consumes enough time to where going hardware makes sense.


ecarlseen,

I learn a lot of things on this forum. I am not a professional sysadmin. Just a weekend warrior who likes to learn and tinker with advanced server software.

I have heard that software RAID can be hard to repair and can take 1-2 hours to fix even for a true professional. I agree that in that time frame one could have saved $300+ in consulting fees by not using software RAID at all. But who keeps spare parts lying around as insurance for emergencies? If one doesn’t, the server(s) may be down for days, which could be even more expensive on a production system, right?

Thanks for sharing. I learned a lot.

okynnor1,

No problem! We all started out somewhere, and these forums are great places for those of us who have been around the block a few times to try to pay it forward for all the advice we were given back in the day.

Honestly, when I started that post I was thinking that “real” RAID cards still started at around $400 - even then they made sense. I did a quick check, found that Avago/LSI (now Broadcom - that company has changed hands a few times) model for $130, did a double-take, and then carefully checked the specs. It seems to be legit… low-end, but legit. And from a premium brand. To me $130 is a no-brainer. Buy a spare if you feel the need.

Do watch out for the “cheeze-tech” brands like Highpoint and some of the others that are $30-$100. These are basically the same software nonsense that’s on the motherboard, baked into a PCI card. They don’t really add any value. $130 for a legit Avago/LSI/Broadcom card is an amazing price point, at least to a grey-beard like me.

It’s pretty straightforward to use a spare drive and migrate your existing installation to hardware RAID if you’re so inclined.
