Need to increase available disk space to FreePBX

I’m having severe disk space issues on my FreePBX server. I have a server with two 2TB drives in RAID1, but for some reason, when FreePBX was installed, it gave the PBX a root partition of only 18GB.

============================

[root@PBX /]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md2         18G   16G  1.5G  92% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
/dev/md0        283M   27M  242M  10% /boot

[root@PBX /]# fdisk -l | grep Disk
Disk /dev/sdb: 20.7 GB, 20724056064 bytes
Disk identifier: 0xc7d75ca9
Disk /dev/sda: 1979.1 GB, 1979120091136 bytes
Disk identifier: 0x566ba97d
Disk /dev/md2: 19.2 GB, 19200475136 bytes
Disk identifier: 0x00000000
Disk /dev/md1: 1072 MB, 1072693248 bytes
Disk identifier: 0x00000000
Disk /dev/md0: 314 MB, 314507264 bytes
Disk identifier: 0x00000000

=============================

As you can see, there is 1.9 TB of space on /dev/sda
I am almost completely Linux-illiterate. How do I allocate some of that space to my PBX?
If I can’t resize /dev/md2, is there a way to use that large chunk of space to save all of my logs and call recordings?

Thanks
Mark F
CAKE Corp

Useful commands

cat /proc/partitions

cat /proc/mdstat

sfdisk -d /dev/sda
sfdisk -d /dev/sdb

man mkfs
man mdadm

Edit: on re-read, you only have a 20GB drive on /dev/sdb, so you will need to fix that first if you want the extra space to be RAIDed.


[root@PBX ~]# cat /proc/partitions
major minor  #blocks  name

   8     0 1932734464 sda
   8     1     307200 sda1
   8     2    1048576 sda2
   8     3 1931377664 sda3
   8    16   20238336 sdb
   8    17     307200 sdb1
   8    18    1048576 sdb2
   8    19   18881536 sdb3
   9     2   18750464 md2
   9     1    1047552 md1
   9     0     307136 md0
[root@PBX ~]# ca /proc/mdstat
-bash: ca: command not found
[root@PBX ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0] sdb1[1]
307136 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
1047552 blocks super 1.1 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
18750464 blocks super 1.1 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices:
[root@PBX ~]# sfdisk -d /dev/sda

# partition table of /dev/sda

unit: sectors

/dev/sda1 : start= 2048, size= 614400, Id=fd, bootable
/dev/sda2 : start= 616448, size= 2097152, Id=fd
/dev/sda3 : start= 2713600, size=3862755328, Id=fd
/dev/sda4 : start= 0, size= 0, Id= 0
[root@PBX ~]# sfdisk -d /dev/sdb

# partition table of /dev/sdb

unit: sectors

/dev/sdb1 : start= 2048, size= 614400, Id=fd, bootable
/dev/sdb2 : start= 616448, size= 2097152, Id=fd
/dev/sdb3 : start= 2713600, size= 37763072, Id=fd
/dev/sdb4 : start= 0, size= 0, Id= 0

As I noted in my edit, you don’t have two 2TB drives.

I see.
The machine itself is a Dell server which has (or is supposed to have) two 2TB drives in a hardware RAID 1 (mirrored). So as far as I know, the OS should only see one physical drive.
Should this matter, or would Linux show the two physical devices anyway?

According to your outputs, your server is set up using Linux software RAID. So most likely your “hardware” motherboard RAID 1 was not actually configured properly before FreePBX was installed.

It appears to be two JBOD disks, and CentOS detected them separately and created a software RAID. As to why it only chose to partition ~20GB for md2 (typically your / partition): in my experience the installer uses all available space after it creates your /boot (md0) and swap (md1) partitions, but a RAID 1 mirror can only be as large as its smallest member, so md2 was sized to fit the 20GB disk.

Growing your md2 partition isn’t all that simple. See here: https://raid.wiki.kernel.org/index.php/Growing
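
Roughly, what that guide boils down to for this layout is sketched below; treat it as a hedged outline, not a recipe, and note it only helps once both member partitions are actually big (which is the hard part here):

mdadm --grow /dev/md2 --size=max   # expand the mirror into the enlarged member partitions
resize2fs /dev/md2                 # then grow the filesystem (assuming ext3/ext4 on /)

Both member partitions (sda3 and sdb3) would need to be enlarged first, which on a 20GB sdb is impossible without replacing the disk.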

Truth is, if you had used the Dell hardware PERC to build your disks from the beginning, you would have made one RAID 1 array out of them, which would look like a single /dev/sda to Linux. You apparently didn’t do that and built who knows what, and there is no need for a software RAID on top. There are arguments either way, but it is a little late to worry about that now. I see you are a newbie, so first off I would suggest you “image” the system with mondoarchive or clonezilla
and practice restoring it to another piece of hardware/VM. Then just redo your Ctrl-R Dell boot PERC formatting thing, and restore and resize the image partitions to suit, without the software RAID thingy.

http://www.thegeekstuff.com/2008/07/step-by-step-guide-to-configure-hardware-raid-on-dell-servers-with-screenshots/

another way that takes some chutzpah is:-

Make sure you can boot from /dev/sdb; if you can’t, make sure grub is properly installed on /dev/sdb first (see the sketch below). If you can, then:-
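
(If grub turns out to be missing from /dev/sdb, a minimal sketch, assuming the grub legacy that ships with CentOS 6:)

grub-install --recheck /dev/sdb   # put a boot loader on the second disk's MBR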

I would detach the /dev/sda devices from the RAID. First off, “fail” them:-

mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md1 --fail /dev/sda2
mdadm /dev/md2 --fail /dev/sda3

a reboot here might be a good precaution.

then remove them

mdadm /dev/md0 --remove /dev/sda1
mdadm /dev/md1 --remove /dev/sda2
mdadm /dev/md2 --remove /dev/sda3

now you can repartition and reformat /dev/sda as a single 2TB drive, perhaps with fdisk (a sketch below).
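
A hedged sketch of that step (the partition name is illustrative; triple-check the device before writing anything):

fdisk /dev/sda          # delete sda1-sda3, create one primary partition spanning the disk, type 83
mkfs -t ext4 /dev/sda1  # then put a filesystem on the new partition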

To remount /var (the bit which gets big :slight_smile: )

mkdir /mnt/newvar
mount /dev/sda1 /mnt/newvar
rsync -av --progress /var/ /mnt/newvar

that will take a while. Then stop your running services:-

service mysql stop
service httpd stop
service asterisk stop

now run the rsync again, but be aware your PBX is no longer working from this point until the reboot:-

rsync -av --progress /var/ /mnt/newvar

again, it will be much quicker. Then you can

mv /var /varold
mkdir /var

and edit /etc/fstab to mount /dev/sda1 on /var, for example:
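
(a minimal sketch of the fstab line, assuming the new partition is /dev/sda1 and ext4:)

/dev/sda1   /var   ext4   defaults   0 2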

Last step: reboot.

If I got it right and you did too, then a reboot will hopefully fix you.

Thank you for the replies. There are physical limitations on my ability to fix this issue: it is in a satellite office halfway across the country and I only have SSH access, and as it’s a live system for our entire support department I can’t afford any downtime.

At this point, I’m most likely going to create a new server (and check my configuration more carefully now that I know what I’m looking for)

In the meantime, does anyone know if there’s a way for me to mount that unused disk space for use in some way (like as just empty, available storage space) and then point FreePBX to save all call recordings (and hopefully logs) there?

Unfortunately it is all used, though inefficiently. The partitions:-

/dev/sda3 : start= 2713600, size=3862755328, Id=fd

/dev/sdb3 : start= 2713600, size= 37763072, Id=fd

the software raid:-

md2 : active raid1 sda3[0] sdb3[1]
18750464 blocks super 1.1 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk

One way would be to fail and remove /dev/sda3 from /dev/md2, re-partition the drive with /dev/sda3 = 37763072 sectors of type fd and /dev/sda4 taking the rest of the space as type 83, add /dev/sda3 back into /dev/md2, format /dev/sda4 as ext4, and then do the rsync/fstab/reboot thingy and hope the RAID doesn’t fail. A sketch follows.
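
An untested sketch of that sequence, with the sizes taken from the sfdisk dump above (verify everything before running it):

mdadm /dev/md2 --fail /dev/sda3
mdadm /dev/md2 --remove /dev/sda3
fdisk /dev/sda            # recreate sda3 at 37763072 sectors (type fd) and sda4 in the rest (type 83)
mdadm /dev/md2 --add /dev/sda3
mkfs -t ext4 /dev/sda4    # let the mirror resync (watch /proc/mdstat) before trusting it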

If you get it wrong (or I typo’d), then you will have to jump in your car :wink:

good luck.

Hi Dicko, I hope you are still around.
I’m looking for some help, and it seems like you know a lot, so here I am bothering you with this.
BTW, not too literate in Linux here…

I have a FreePBX hosted on Vultr.
I can access the GUI and via SSH.
My hosting had 40 GB and at one point ran short. I added 20 GB via the Vultr GUI.
Vultr reports 60 GB. The FreePBX GUI reports 40.

I don’t know how to add the new extra space to the PBX.

Here is some data that might be useful…


Filesystem      Size  Used Avail Use% Mounted on
/dev/vda2        39G   29G  8.3G  78% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
/dev/vda1       283M   30M  238M  12% /boot



parted print:

Number Start End Size Type File system Flags
1 1049kB 316MB 315MB primary ext4 boot
2 316MB 42.1GB 41.8GB primary ext4
3 42.1GB 42.9GB 805MB primary linux-swap(v1)


I hope this helps

Best

Ceka

Basically it’s a “can’t do that safely on a mounted fs” situation.

An easy GUI way (but a lengthy one) is to add a custom OS in Vultr:

https://downloads.sourceforge.net/gparted/gparted-live-0.30.0-1-i686.iso

mount that custom ISO into your machine and “do that thing”; when done, detach the custom ISO.


∆Truth

Vultr is pretty great, as it lets you upload an ISO to boot from. Once booted into GParted, to resize your partition you will have to move your swap partition to the end in order to expand your primary; a rough CLI equivalent is sketched below, though the GUI is easier.
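
For reference, a rough CLI equivalent of what GParted does, run from the GParted Live shell with nothing mounted (assumes the vda layout shown above; the GUI is safer for a novice):

swapoff -a
parted /dev/vda rm 3                                  # drop the old swap partition
parted /dev/vda resizepart 2 59GB                     # grow root, leaving ~1GB at the end
parted /dev/vda mkpart primary linux-swap 59GB 100%   # recreate swap at the end
mkswap /dev/vda3                                      # fix its UUID in /etc/fstab if referenced
e2fsck -f /dev/vda2 && resize2fs /dev/vda2            # grow the ext4 filesystem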

Hi there, and thanks for your responses.
I managed to upload the ISO but was never able to boot from it. I don’t know why…

Is there any more generic Linux version, with a GUI and the gparted application, that you can recommend?

(Bear in mind I’m a novice)

Thanks

Which bit of

Basically it’s a “can’t do that safely on a mounted fs” situation.

An easy GUI way (but a lengthy one) is to add a custom OS in Vultr:

https://downloads.sourceforge.net/gparted/gparted-live-0.30.0-1-i686.iso

mount that custom ISO into your machine and “do that thing”; when done, detach the custom ISO.

didn’t work?

It should have an md5 of 85a10f7104c707b33a7b6add97d1bfd2 on your Vultr “iso” page. Just reboot your VM with that ISO attached, from the “custom iso” -> “Attach and reboot” page. When it boots, it’s time to stop being a noobie, i.e. RTFM for gparted, but it really is pretty obvious: it’s basically choosing the resize option on the “short” partition.
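
(To verify a local copy of the image before uploading, assuming the filename from the link above:)

md5sum gparted-live-0.30.0-1-i686.iso   # expect 85a10f7104c707b33a7b6add97d1bfd2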

GParted Live is about as generic as you can get, because that is ALL it has on it, but I believe a far less generic image that also has it is ArchLinux, in the Vultr library.

I have the same problem as carloskloos. I tried your suggestion, dicko: the resize in GParted was successful, but when I rebooted after detaching the custom ISO, SangomaVG-root is still only showing up as 36.8 GB in CentOS, not 55. I re-attached the GParted ISO, rebooted into GParted, and the volume is still reported as 55 GB there.

Any ideas?

For LVM see

Thanks Dicko.

I ran scenario 2, but I get a the below error when I run resize2fs /dev/SangomaVG/root

Bad magic number in super-block while trying to open /dev/Sangoma/root
Couldn’t find valid filesystem superblock

Be careful, make backups if needed, but one of

should get you there.

Turns out xfs_growfs was the command I needed instead of resize2fs. I’ll post a step-by-step tomorrow. Thanks dicko!

Here’s how I increased my drive size on Vultr. Note I’m running an LVM drive. Try these steps at your own risk; a consolidated command sketch follows the list.

  1. Increase the drive size on Vultr (best to create a snapshot first)
  2. SSH in to the PBX and run vgdisplay to get the volume group name (SangomaVG in my case)
  3. Run lvdisplay /dev/SangomaVG/root to get the logical volume’s size
  4. Run lvextend -L +[X]G /dev/SangomaVG/root (where X is the number of gigabytes you want to increase the volume by) or lvextend -L +[Y]M /dev/SangomaVG/root (where Y is the number of megabytes). I’ll leave it to you to calculate how much drive space you want to add.
  5. Run lvdisplay /dev/SangomaVG/root again to confirm the increase
  6. Run xfs_growfs /dev/SangomaVG/root to grow the XFS filesystem into the new space (this is why resize2fs failed earlier; it only handles ext filesystems)
  7. Run fdisk -l (or df -h) to confirm the changes
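
Condensed, the whole sequence looks like this; a sketch assuming the SangomaVG volume group, an XFS root, a 20 GB increase, and that the extra space has already been exposed to the physical volume (e.g. via the GParted resize above):

vgdisplay                               # find the volume group name
lvdisplay /dev/SangomaVG/root           # note the current LV size
lvextend -L +20G /dev/SangomaVG/root    # grow the logical volume by 20GB
xfs_growfs /dev/SangomaVG/root          # grow the XFS filesystem (resize2fs is for ext2/3/4 only)
df -h                                   # confirm the new size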