FreePBX 14 running out of disk space on Vultr instance

Hi,
My instance has been running low on disk space for a few weeks now, and having frightened myself googling how to increase it, I had been using FileZilla to go into /var/log/asterisk and delete the older logs. That usually got me down to just below 70%, and it might be a week before I got the warning emails at 75% again - then FileZilla again… However, today, Sunday, I took the bull by the horns and, using a variety of sources but mainly "How to extend a partition with unallocated space CentOS 7" on Webcore Cloud, I managed it without even shutting down the server - so I thought I would share this with you. Most of it is copied and pasted, but some of the partitions are slightly different (if you know what you are doing, unlike me, then you maybe don't need my full step by step by step). Here goes:

  1. Take an image or a backup - you have been warned - if this fails you can go back.
  2. Upgrade the Vultr instance to a bigger disk using their control panel.
  3. Extend the partition.


fdisk /dev/vda

Enter p to print your initial partition table.

Enter d (delete) followed by 2 to delete the existing partition definition (partition 1 is usually /boot and partition 2 is usually the root partition).

Enter n (new) followed by p (primary) followed by 2 to re-create partition number 2, then press Enter to accept the default start block and Enter again to accept the end block, which defaults to the end of the disk.

Enter t (type) then 2 then 8e to change the new partition type to "Linux LVM".

Enter p to print your new partition table and make sure the start block matches what was in the initial partition table printed above.

Enter w to write the partition table to disk. You will see an error about device or resource busy which you can ignore.
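
For reference, here is a minimal sketch of the whole interactive session (assuming the disk is /dev/vda and the root partition is number 2, as in the steps above; newer versions of fdisk may also ask whether to remove the existing LVM signature when re-creating the partition - answer No so the data stays intact):

fdisk /dev/vda
  p       # print the current table and note the start block of partition 2
  d       # delete...
  2       # ...partition 2 (the LVM root partition)
  n       # new partition...
  p       # ...primary...
  2       # ...number 2, then Enter twice to accept the default start and end blocks
  t       # change the type...
  2       # ...of partition 2...
  8e      # ...to 8e (Linux LVM)
  p       # print again and check the start block matches the original
  w       # write the table ("device or resource busy" can be ignored)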

Update kernel in-memory partition table

After changing your partition table, run the following command to update the kernel in-memory partition table:


partx -u /dev/vda
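
As an optional sanity check (not part of the original guide), you can confirm that the kernel now sees the larger partition before carrying on:

lsblk /dev/vda
# or
grep vda /proc/partitions

The SIZE shown for vda2 should now reflect (roughly) the full new disk size.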

Resize physical volume

Use this command to resize the PV so it recognizes the extra space:


pvresize /dev/vda2
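
Again as an optional check (not in the original guide), pvs and vgs should now show the added space as free extents in the volume group:

pvs /dev/vda2
vgs SangomaVG

The PFree / VFree columns should show roughly the amount of space you added.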

Resize LV and filesystem

Extend the logical volume into the new free space:

lvextend -l +100%FREE /dev/mapper/SangomaVG-root

Then use the xfs_growfs command to grow the filesystem to fill the logical volume:

xfs_growfs /dev/mapper/SangomaVG-root
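
To confirm everything worked (optional check), the logical volume and the filesystem should now both report the new size:

lvs SangomaVG
df -h /

df should show /dev/mapper/SangomaVG-root at the new size, with the used percentage dropping accordingly.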

That should be it - hope I helped somebody.


By default, FreePBX uses logrotate and keeps only one week of logs. Unless you’re running a call center, these shouldn’t be huge. If yours are, take a look at why – you may have a configuration or security issue.

Hi Stewart,
I have over 100 users and I don't think the logs were that old. I probably started with too small a disk, as it was 3 years ago and it was a bit of a bench-testing PABX on day 1. It just grew!

No, I have more than one system in a $5 Vultr instance with 100+ users and zero issues with disk space because of the logs.

Well, there are lots of logs generated by your system, some are very noisy :wink:

ls -lsrth /var/log/*/* | sort -h

would give you a quick peek at how big the logs are (biggest at the bottom, and all will be dated). If logrotate is not tuned for your system then you might well run out of space, but you can adjust the rotation in /etc/logrotate.d/* (the number of days/rotations/size etc.) to suit.

If you have very large files at the bottom of the list you might want to examine them for content (fail2ban watches the 'security' log from Asterisk, which WILL be big with 100 seats, but it's not really useful to keep a week's worth).
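
For anyone wanting to tune this, here is a rough sketch of what a tighter stanza could look like - the file name, log path and values are examples, not the stock FreePBX or fail2ban settings, so check your own files under /etc/logrotate.d/ first:

# example only: /etc/logrotate.d/fail2ban (name and paths may differ on your install)
/var/log/fail2ban.log {
    daily
    rotate 3          # keep only 3 rotations instead of a week or more
    compress
    missingok
    notifempty
    postrotate
        # tell fail2ban to reopen its log file after rotation
        /usr/bin/fail2ban-client flushlogs >/dev/null 2>&1 || true
    endscript
}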

Hi, thanks for your comments. I am not an expert on this, but it seems to be mainly fail2ban logs. I initially started with a 25GB instance, and /dev/mapper/SangomaVG-root was 22GB of that. As my system grew I upgraded to a 60GB instance (but did not expand /dev/mapper/SangomaVG-root), then an 80GB instance. I think the main problem was the email warning at 75%: although it never got above 80%, I was getting loads of emails (obviously). Now that I have taken the steps above, my /dev/mapper/SangomaVG-root is 77GB with 21% used, so I will monitor it to see how big it gets.

BTW logrotate is set at 30 days.

3 days in we are at 22%

15 days in we are at 25% without deleting any logs.

47 days in we are at 26% without deleting any logs.
BTW I do have some heavy users on the switch.
