Storage upgrade issues

Hey!

I upgraded the VPS that we are on to give us 100 GB of storage, vs. the 60 GB we were at previously. After upgrading the VPS, I booted an Ubuntu LiveCD, opened GParted, expanded the partition to fill the space, rebooted back into the FreePBX environment, and went from there.

Looking at System Admin, it is reporting that I am still at 60 GB total of storage, as does “df” (System Admin “Storage” screenshot attached):

Filesystem                 1K-blocks     Used Available Use% Mounted on
/dev/mapper/SangomaVG-root  56767404 41513932  15253472  74% /
devtmpfs                     3992336        0   3992336   0% /dev
tmpfs                        4004716        0   4004716   0% /dev/shm
tmpfs                        4004716     8848   3995868   1% /run
tmpfs                        4004716        0   4004716   0% /sys/fs/cgroup
/dev/vda1                    1983056   201304   1662968  11% /boot
tmpfs                         800944        0    800944   0% /run/user/995
tmpfs                         800944        0    800944   0% /run/user/0

I feel like I’m missing a step; I just don’t know what. If needed, I can go back and provide a screenshot of the GParted GUI to see if that helps at all.

https://linux.die.net/man/8/resize2fs

First make sure that you have a good backup, just in case.

That was close: resize2fs exited with an error. Did some digging and found that it would not have helped on my release (which is my fault; I should have provided more information). The good news is that I have backups of the server through Vultr (including the at-request snapshot).

Looks like I need to do the following:

https://wiki.freepbx.org/display/FPG/FreePBX+HA-Increasing+Volume+Size

Late night Shawn is a poorly operating Shawn. I’ll run this tonight after close of business and see how it goes. Thanks for pointing me in the right direction.

Keep us posted. Thanks

As requested, here’s what I did, step by step, in the hopes it helps someone.

Reading the “df” output (this is after I allocated the space, but let’s look past that):

Filesystem                 1K-blocks     Used Available Use% Mounted on
/dev/mapper/SangomaVG-root  89101228 32520396  56580832  37% /
devtmpfs                     3992336        0   3992336   0% /dev
tmpfs                        4004716        4   4004712   1% /dev/shm
tmpfs                        4004716    49808   3954908   2% /run
tmpfs                        4004716        0   4004716   0% /sys/fs/cgroup
/dev/vda1                    1983056   201304   1662968  11% /boot
tmpfs                         800944        0    800944   0% /run/user/0
tmpfs                         800944        0    800944   0% /run/user/995

“/dev/mapper/SangomaVG-root” was the mount in question. The logical volume had to be resized, and since the filesystem was XFS, resize2fs did not apply to this instance. I used some of the tasks from the previously mentioned link, primarily lvextend. In my case, I did the following:

lvextend -L85G /dev/mapper/SangomaVG-root

Obviously, the 85G is the size you would like the volume to be; adjust it to your specific scenario.
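Before running the grow step, it’s worth confirming which filesystem the volume actually uses, since that decides the tool: resize2fs only grows ext2/3/4, while XFS needs xfs_growfs. A quick sketch using GNU coreutils stat (df -T works too):

```shell
# Print the root filesystem's type so you know which grow tool applies.
# resize2fs -> ext2/3/4 only; XFS -> xfs_growfs.
FSTYPE=$(stat -f -c %T /)
echo "Root filesystem type: $FSTYPE"
```

On a stock FreePBX distro install this should report xfs, which is why resize2fs errored out earlier in the thread.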

At this point it is time to grow the filesystem to fill the resized volume:

xfs_growfs /

Now, at this point, xfs_growfs will expand the filesystem, and if you perform a df, you should see, barring any issues, that the volume has been resized.
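If you want to sanity-check the new size without eyeballing the df table, the total can be pulled out programmatically. A small sketch using POSIX df output (the -P flag keeps long device names like /dev/mapper/SangomaVG-root from wrapping onto a second line):

```shell
# Grab the root filesystem's total size (1K-blocks, column 2 of row 2)
# and report it in GiB to confirm the grow took effect.
TOTAL_KB=$(df -kP / | awk 'NR==2 {print $2}')
echo "Root filesystem total: $((TOTAL_KB / 1024 / 1024)) GiB"
```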

Keep in mind, prior to this, I extended the partition that the volumes are sitting on to fill the capacity of the drive. Go about that however you like. (If lvextend complains about insufficient free extents even after the partition has been grown, running pvresize on the physical volume lets LVM see the new space.)
