MySQL process eating up my CPU - htop

Disable fop2

service fop2 stop
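If you want FOP2 to stay stopped across reboots, you can also disable the service. A rough sketch, assuming the stock FOP2 install dropped a SysV init script named fop2 on the distro (adjust if yours differs):

chkconfig fop2 off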

@dicko it’s UCP, but not the process itself; it’s someone actively looking at calls in UCP.

I’m fairly certain no one at that location uses UCP or even knows about it. They do use FOP2.

UCP and the UCP node server are disabled. I ran service fop2 stop and then started it again. I still see the mysql process popping up with high CPU %, but it doesn’t seem to stick at 130% forever like it used to.

When I disabled UCP and UCP node, did I need to stop a UCP process before disabling it?

Are you using Distro 7 (FreePBX 14) with Asterisk 13?

No, he’s on FreePBX 13.

We have had MySQL run horribly slowly before when a table got corrupted. We had to run the MySQL table checker (can’t think of the name right now) to fix it.

I’ve run these two mysql repairs so far:

mysqlcheck -u root -p asteriskcdrdb --auto-repair -c -o

mysqlcheck -u root -p asterisk --auto-repair -c -o

Not sure if there is another one I should run or not
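For reference, mysqlcheck can also sweep every database in one pass rather than naming them individually; a sketch, assuming the same root login applies to all of them:

mysqlcheck -u root -p --all-databases --auto-repair --check --optimize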

If you have any processes running that you don’t want, then bludgeon them away with (as root)

kill -9 (pid)

you can do that at any time.
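If you’d rather pull the pid from a shell than read it off htop, something like this should do it (assuming the daemon shows up as mysqld):

pgrep mysqld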

There is no indication here that any databases are corrupt; if they were, MySQL would be unlikely to complete its start process, which includes checking all databases for corruption.

Killing a mysql pid won’t corrupt the MySQL database in any way?

No. If the process has hung for that long, it is obviously not doing anything useful. If you’re nervous, just stop MySQL; Asterisk has no need for it, so your system will run as ever, probably more smoothly.

At any time you can use MySQL’s SHOW PROCESSLIST statement to see what is currently running; profiling MySQL might also help you here:

https://dev.mysql.com/doc/refman/5.5/en/show-profile.html

You will need to prepare your system first (profiling is off by default), then watch for all the WTFs.
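A minimal sketch of both, assuming root access to mysql: the first command runs from a shell, the rest inside a mysql session (profiling is per-session in MySQL 5.5 and must be switched on before the slow query runs):

mysql -u root -p -e 'SHOW FULL PROCESSLIST'

SET profiling = 1;
-- reproduce the slow FOP2/UCP activity, then:
SHOW PROFILES;
SHOW PROFILE FOR QUERY 1;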

It does seem to be related to FOP2 in some way. When I stop FOP2, I still see the mysql process, but it doesn’t eat up 130%. It might bounce to 98%, then drop back to 1%, then go to 0%, then to 70%. With FOP2 running it just stays really high, and there are often 3-4 of the same process running high.

Might be the FOP2 call history plugin looking into the system and grabbing call history. That would be similar to UCP looking at calls.

I believe I previously identified an unindexed column in asteriskcdrdb that caused a huge slowdown when looking at history (I can’t remember which one I added the idx to), but the profiling would identify the culprit, and adding an index to that column would hopefully be a resolution (see the sketch after this post).

mysql asteriskcdrdb -e 'DESCRIBE cdr'

(You probably don’t have a root password (go figure); if you do, add -pyourpassword)
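If profiling (or an EXPLAIN of the offending query) shows a full scan of the cdr table, adding an index to the filtered column should help. A sketch only, with idx_calldate and the calldate column as placeholder assumptions; substitute whatever column your profiling actually points at:

mysql asteriskcdrdb -e 'ALTER TABLE cdr ADD INDEX idx_calldate (calldate)'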
