I made a stupid mistake and let our server's disk fill up from a packet capture. I got reports of big problems, especially when attempting to use the park module, and I immediately realized my mistake. I couldn't access the GUI, and since the logs were in the temp folder, I just rebooted the server and it seemed to come up OK. I was able to access the GUI fine this time, so I looked in System Admin → Storage. The boot drive has plenty of space, but the other drive is completely full. Things quickly deteriorated again and now I can't access the GUI anymore, but running ls -alh in the root folder doesn't give me any clue where the storage is going. For what it's worth, we have RAID on this system: two 100 GB drives. I'm attaching a screenshot I took from System Admin. Also, Asterisk is not working properly now.
"Standard procedure" is to log into the console and deliberately delete the offending file yourself. If it's not in /tmp, it's possible it ended up in lost+found, in which case all of the space is still allocated even though you can't do much of anything with the file.
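A rough sketch of that console check (the data drive's mount point varies by install, so treat the paths below as examples, not your exact layout):

```shell
# Largest entries first in /tmp -- the usual landing spot for captures.
ls -lhS /tmp | head

# lost+found lives at the top of each filesystem; files that fsck parked
# there still count against the disk. The mount point here is an example.
ls -lhS /home/lost+found 2>/dev/null | head

# Once identified, remove the capture explicitly (substitute the real name):
# rm /tmp/capture.pcap
```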
While rebooting "should" clear the /tmp directory, that isn't guaranteed. You really need to do it yourself.
Use the `df` command from the console to verify the actual status of the disks; the GUI's numbers are not real-time and can lag well behind reality.
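For example (exact columns vary slightly by distro):

```shell
# Human-readable usage per mounted filesystem. A partition sitting at
# 100% stands out immediately even while the GUI still shows stale data.
df -h
```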
Strange. I ended up figuring it out on my own. The culprit was a file named newfirmware in my tmp folder, about 64 GB in size. I'm pretty sure it wasn't there earlier when I was looking. Deleting it brought my disk usage down to 39%. It doesn't seem to be filling up again, but I find that file strange: it was enormous, named newfirmware, sitting in the tmp folder after multiple reboots, and I never saw it in any of my many ls -alh runs. Only when I ran du -s * | sort -n did I find that the tmp directory actually was full, contrary to everything else I had been running. Any ideas? Is it possibly an EPM thing? I'm only thinking that because of the name of the file.
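For anyone who hits this later, here's the sequence that finally exposed the space for me. The key point is that ls only reports the size of each directory *entry* (typically ~4 KB), not the total held by its contents, so a huge file one level down is invisible to ls in the parent:

```shell
# du walks the tree and sums actual usage, which ls -alh never does.
# Start at the root of whichever filesystem df shows at 100%.
cd /
du -s * 2>/dev/null | sort -n   # biggest consumers end up at the bottom

# Then descend into the largest directory and repeat until you
# find the actual file.
```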