I have a Sangoma OpenBox FreePBX 500 appliance purchased in 2016. It’s my understanding that it’s running RAID 1 with mirrored HDDs. My question is: how can I check on the health of those HDDs? If one of them failed, would I get a notice on the FreePBX Dashboard, or would I simply not know until the second one also failed and I was left with a non-working system? Is there a command I can run to check?
`cat /proc/mdstat` to start.
Thank you. Looks like things are good so far.
```
Personalities : [raid1]
md0 : active raid1 sda1 sdb1
      307136 blocks super 1.0 [2/2] [UU]
md1 : active raid1 sdb2 sda2
      1047552 blocks super 1.1 [2/2] [UU]
md2 : active raid1 sdb3 sda3
      311083008 blocks super 1.1 [2/2] [UU]
      bitmap: 3/3 pages [12KB], 65536KB chunk
```
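If I’m reading that output right, the [2/2] [UU] on each array means both members of the mirror are active; a failed member would show up as something like [2/1] [U_]. It also looks like `mdadm --detail` gives a fuller per-array picture, e.g.:

```
# per-array state; "State : clean" with 2 active devices is healthy
mdadm --detail /dev/md2

# one-line summary of every array
mdadm --detail --scan
```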
Making sure that both disks have GRUB installed is non-disruptive.
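If you want to confirm whether GRUB is already in each disk’s MBR before touching anything, something like this should work as a read-only check (the MBR boot code embeds the string “GRUB”):

```
# read the first sector of each disk and look for the GRUB signature
dd if=/dev/sda bs=512 count=1 2>/dev/null | strings | grep -i grub
dd if=/dev/sdb bs=512 count=1 2>/dev/null | strings | grep -i grub
```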
I see in the instructions that I need to specify the location to install GRUB. They list some examples, but I can’t help but notice that none of their examples have an md0 drive structure. They list sda and hd0.
Am I correct that I’d use the command `grub-install /dev/md0` in the case of my FreePBX system? And then again on the second disk, `grub-install /dev/md1`? Obviously my Linux is very weak, and I sure don’t want to screw up my system. I’ll probably wait and work on this next week, as this is a busy weekend for me and I don’t want to break anything this late on a Friday.
I’ve never done this myself, but this thread seems consistent with my expectation that the GRUB booting environment knows nothing about software RAID: linux - How to correctly install GRUB on a soft RAID 1? - Unix & Linux Stack Exchange
OK, so based on that article it looks like I would run `grub-install /dev/sda1` and `grub-install /dev/sda2`. I see there’s an sda3 listed as well, but I’m not sure what that is since I’ve never opened the appliance. Would I need to install GRUB on that as well?
Well, let’s see a bit more first…
```
# what are we working with?
cat /proc/partitions
# are we using efi?
df
# how are things partitioned?
fdisk -l /dev/sdx
fdisk -l /dev/mdx
```
Had to stop at the office to pick up some things and saw your post. Thank you. Here are the results.
```
# cat /proc/mdstat to start.
Personalities : [raid1]
md0 : active raid1 sda1 sdb1
      307136 blocks super 1.0 [2/2] [UU]
md1 : active raid1 sdb2 sda2
      1047552 blocks super 1.1 [2/2] [UU]
md2 : active raid1 sdb3 sda3
      311083008 blocks super 1.1 [2/2] [UU]
      bitmap: 3/3 pages [12KB], 65536KB chunk
unused devices: <none>
cat: to: No such file or directory
cat: start.: No such file or directory

# cat /proc/partitions
major minor  #blocks  name
   8       16  312571224 sdb
   8       17     307200 sdb1
   8       18    1048576 sdb2
   8       19  311214080 sdb3
   8        0  312571224 sda
   8        1     307200 sda1
   8        2    1048576 sda2
   8        3  311214080 sda3
   9        2  311083008 md2
   9        1    1047552 md1
   9        0     307136 md0

# df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         8067424        0   8067424   0% /dev
tmpfs            8076024        4   8076020   1% /dev/shm
tmpfs            8076024   852608   7223416  11% /run
tmpfs            8076024        0   8076024   0% /sys/fs/cgroup
/dev/md2       306069712 98829816 191669364  35% /
/dev/md0          289229   175179     94598  65% /boot

# fdisk -l /dev/sdx
fdisk: cannot open /dev/sdx: No such file or directory
# fdisk -l /dev/mdx
fdisk: cannot open /dev/mdx: No such file or directory
```
In `cat /proc/mdstat` to start, the words “to” and “start” were not enclosed in back-ticks; they are not part of the command. The x in /dev/[ms]dx is a placeholder for the devices shown in /proc/partitions. In your case:

```
fdisk -l /dev/sda
fdisk -l /dev/sdb
fdisk -l /dev/md0   # not needed now
fdisk -l /dev/md2   # not needed now
```

The RAID wiki is a more knowledgeable source than I.
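For what it’s worth, your df output shows no /boot/efi mount, which suggests legacy BIOS boot; in that case the usual target is each whole disk, not the md devices or the individual partitions. A sketch only, not verified on this appliance:

```
# put the boot loader on the MBR of both physical disks so the
# box can still boot if either member of the mirror dies
grub-install /dev/sda
grub-install /dev/sdb

# on a CentOS 7 based FreePBX distro the binary is instead:
# grub2-install /dev/sda
# grub2-install /dev/sdb
```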
I try to learn more of it every day but I use Linux so infrequently that I forget most of what I learn before I need it again. I’m not sure if that’s a blessing or a curse. It’s good that I don’t have too many problems.
I’ll read over the wiki link today. It’s describing the exact thing I’m trying to avoid: a disaster because I wasn’t checking the health of the drives.
```
# cat /etc/mdadm.conf
MAILFROM [email protected]
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=190c0886:45c79b1f:94454deb:70874817
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=dd210b54:b9ea35f5:11407d10:cf75df2e
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=d6dfe04c:3a1cc093:8f1396c2:680b666f
```
Emails are sent to the root account; you can change that to a real email address either there or in /etc/aliases.
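If you want to be sure the alert path actually works end to end, mdadm can generate a test message for each array defined in the config:

```
# send a TestMessage alert for every array to MAILADDR, then exit
mdadm --monitor --scan --test --oneshot
```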
It actually is a real account. I changed the text before posting here.
Installing smartmontools can give you a deeper audit of the drives.
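For example (assuming plain SATA drives; adjust device names to your box):

```
yum install smartmontools

# quick overall verdict per drive: PASSED or FAILED
smartctl -H /dev/sda
smartctl -H /dev/sdb

# full attribute dump; watch Reallocated_Sector_Ct and pending sectors
smartctl -a /dev/sda
```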
I’m satisfied that the drives are working currently. I’ve made myself a reminder to run `cat /proc/mdstat` so I can check on them. As my system ages, that becomes more important than ever.
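Maybe rather than trusting my memory I’ll cron it. Something like this looks like it should work (the file name is my own invention, and it assumes the mailx package is installed):

```
# /etc/cron.d/raid-status : mail the array status to root every Monday at 6am
0 6 * * 1 root cat /proc/mdstat | mail -s "weekly RAID status" root
```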
My next task is to get a good backup of my system but I think I’ll probably need to hire someone to help me with that. I have a backup set but I’m not sure I did it correctly and we all know how disappointing it is to try to do a restore only to find the backup wasn’t valid.
Do you have someone to suggest or should I put in a ticket with Sangoma and have them peek at it?
You would need a ‘bare metal’ recovery procedure. If you can put up with a couple of hours of downtime, I would suggest Clonezilla; if you are 24/7, then mondoarchive (as I do for hardware-based systems).
I’ll take a look at those options. Would you say it’s better to have a bare metal backup that way than just using the backup module and having a Distro disc handy? I guess a bare metal backup would allow me to restore to a VM and not need to wait for hardware. Thank you.
With mondo I stick a USB thumb drive in and have a cron job that runs mondoarchive every Sunday morning.
You are a boot choice away from a ‘nuke’ restore.
LOL… I feel like I need a backup before I make a backup. I see I’m supposed to install Mindi and Mondo and then make the backup with about 60 options. That’s pretty intimidating when there’s no backup in case I screw it up.
I’ll backup to a USB drive. That seems the easiest to rewrite and restore from.
Not really 60, just the size in MBs and its ‘location’ (/dev/sdc or whatever). Once it’s working you run it from a cron job; no TUI then.
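For reference, the non-interactive run ends up looking something like this (flags per the mondoarchive man page; it assumes the thumb drive shows up as /dev/sdc, so verify the device name before running):

```
# -O create a backup, -U write to a USB device, -d target device,
# -s media size, -N skip network-mounted filesystems
mondoarchive -OU -d /dev/sdc -s 16000m -N
```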