Task processor queue reached 500

I'm running FreePBX 17 and have been running it for about 4 to 5 weeks now on a VM with 30 GB storage and 12 GB RAM. I have around 30 SIP phones and at most probably 4 to 5 concurrent calls.

Today I decided to run the Asterisk CLI over SSH and I noticed a warning that kept appearing every few minutes:

taskprocessor_push: The 'stasis/pool-control' task processor queue reached 500 scheduled tasks again

From some googling and reading, some posts point to AMI exhausting the queue by not closing connections. Is this the case? I have recently started using AMI via a PHP script to originate a call: an extension rings first and then dials a defined number.

I was under the impression that all AMI did was initiate the call and then close the connection. Is there something I can do to make this more efficient so it's not creating so much congestion?
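For reference, my understanding of what a one-shot originate session should do is: log in, send the Originate, log off, close the socket. Here is a minimal sketch of the wire-protocol text such a session sends (the credentials, extension, and number here are made up for illustration; my actual script is PHP, but the AMI protocol text is the same):

```python
def ami_action(action, **fields):
    """Format a single AMI action as CRLF-delimited key/value pairs,
    terminated by a blank line, per the AMI wire protocol."""
    lines = [f"Action: {action}"]
    lines += [f"{k}: {v}" for k, v in fields.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

# A well-behaved originate session: log in, originate, log off, close.
session = (
    ami_action("Login", Username="phpagent", Secret="secret")  # hypothetical credentials
    + ami_action(
        "Originate",
        Channel="PJSIP/301",      # ring this extension first (hypothetical)
        Context="from-internal",
        Exten="07700900123",      # then dial this number (hypothetical)
        Priority=1,
        Async="true",             # don't block the AMI session on call setup
    )
    + ami_action("Logoff")        # explicitly end the session before closing the socket
)
```

The key point I'm trying to confirm is whether ending with an explicit Logoff and closing the socket, rather than leaving the connection open, avoids queue buildup.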

I read something about disabling write for AMI but not sure at all what that means.

Any help and guidance will be much appreciated.

I'd like to keep the system running smoothly and reduce any unnecessary stress on resources.

Thank you

I ran this command:

asterisk -rx "core show taskprocessors"

And this was the result:

stasis/pool-control    77937811   0   2467   450   500

Processed: 77937811
In Queue: 0
Max Depth: 2467
Low Water: 450
High Water: 500
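The columns in that output map to the labels above. A quick sketch (assuming the whitespace-separated column order shown: name, processed, in queue, max depth, low water, high water) to parse one row into named fields:

```python
def parse_taskprocessor(line):
    """Split one 'core show taskprocessors' row into named stats.
    Assumes the column order: name, processed, in-queue, max depth,
    low water, high water."""
    name, *nums = line.split()
    processed, in_queue, max_depth, low, high = map(int, nums)
    return {"name": name, "processed": processed, "in_queue": in_queue,
            "max_depth": max_depth, "low_water": low, "high_water": high}

stats = parse_taskprocessor(
    "stasis/pool-control    77937811   0   2467   450   500")
# stats["max_depth"] is 2467: the deepest the queue has ever been,
# well above the 500 high-water threshold that triggers the warning.
```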

I then rebooted the server. When it came back up, with the system idle (office closed, no one using it) and my PHP script that calls AMI completely disabled, I ran this command:

asterisk -rx "core show taskprocessors" | grep -c '^stasis/p:'

and the result showed 79, then climbed to 80 a little while later.

Also monitored by running this:

asterisk -rx "core show taskprocessors"

And the total processed for stasis/pool-control keeps increasing, even though the system is pretty much idle with no one using it.

Can anyone please advise? Max Depth: 2467 seems like the biggest red flag, and I'm concerned it will just climb again.

Would really appreciate some help trying to solve this.

Thank you in advance 🙏

This is a symptom, not a cause. You would have needed to provide the complete “core show taskprocessors” output to show which taskprocessors were actually being heavily used. That taskprocessor is used for threadpool management, which is driven by tasks being pushed to other taskprocessors, so it is generally a symptom of a problem, not the cause.

It is unlikely it was due to AMI.

This is normal. Taskprocessors and stasis subscriptions can come and go depending on usage.

This is also normal, as the threadpool, which handles some of the subscriptions, gets used. An idle system can still produce stasis messages as things go on.

And are you using the Sangoma Desktop Client at all? There was a recent fix in sangomartapi for an issue in this area; previously, over time, it could cause this to occur.

No, I am not using the Sangoma Desktop Client at all. I literally have a basic setup with approximately 30 SIP Yealink phones and no voicemail.

I'm fairly sure I followed someone's comment here about removing sangomartapi by running:

fwconsole ma remove sangomartapi

But maybe it’s not properly removed?

I restarted the server, as it's the weekend and the system isn't in use today, and ran this command:

asterisk -rx "core show taskprocessors" | grep stasis/p:

And I basically have a stasis/p:endpoint:PJSIP/ taskprocessor for every extension, like this:

stasis/p:endpoint:PJSIP/301-0000001d                                            1          0          1        450        500

I then have these four:

stasis/p:endpoint:PJSIP/dpma_endpoint-00000044                                  1          0          1        450        500
stasis/p:endpoint:PJSIP/SIPTrunk-00000043                                       1          0          1        450        500
stasis/p:manager:core-000000dc                                                436          0         56        450        500
stasis/p:manager:core-000000dd                                                221          0          8        450        500 

Then for each extension I seem to have an MWI subscription, but I don't use voicemail. So is this using up the stasis pool?

stasis/p:mwi:all/401@default-000000b7                                           2          0          1        450        500

Monitoring it, using this command:

asterisk -rx "core show taskprocessors" | grep -c '^stasis/p:'

It only seems to show an increase. So it seems like the pool just gets clogged up. Isn't running that command supposed to show the processes in use? If that keeps climbing, won't the 500 limit quickly be reached, since the number doesn't seem to go down at all?

Is there anything I can do to optimize this? should I increase the 500 limit?

I have my PHP AMI script completely disabled and it's not making any difference, and I worry that come Monday, a normal working day, the pool will clog up again.

It shows the taskprocessors that have been created. As I stated before they can come and go depending on circumstances/usage/etc.

This is an incorrect statement. The “500” warning limit is queued tasks within a single taskprocessor, not the number of taskprocessors. You could have 2000 taskprocessors, or more, and it could be perfectly fine. I've seen perfectly functional systems with more.

Out of the bits you've provided, a manager subscription hit 56 in queue at one point, probably at startup. Otherwise, everything is idle.

No. You can't optimize it, and you'd end up having to modify source code to change the limits. Without the original “core show taskprocessors” output showing what was likely the REAL cause, there's nothing else that can be done.
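When the full output is captured, a quick way to surface the heaviest taskprocessors is to rank rows by the max-depth column rather than counting taskprocessors. A rough sketch of that triage (the sample rows here are invented for illustration, mirroring the format above):

```python
SAMPLE = """\
stasis/pool-control                 77937811  0  2467  450  500
stasis/p:manager:core-000000dc           436  0    56  450  500
stasis/p:endpoint:PJSIP/301-0000001d       1  0     1  450  500
"""

def top_by_max_depth(text, n=3):
    """Rank rows of 'core show taskprocessors' output by the
    max-depth column (third number) so the busiest queues float up."""
    rows = []
    for line in text.strip().splitlines():
        name, *nums = line.split()
        rows.append((name, int(nums[2])))  # nums[2] = max depth
    return sorted(rows, key=lambda r: r[1], reverse=True)[:n]

worst = top_by_max_depth(SAMPLE)
# The top entry is the queue that has been driven deepest, which is
# where to start looking for the real producer of traffic.
```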

I just analysed the daily full log files and searched for the stasis warning and observed the following pattern:

  • The warnings start appearing daily around 8:55 AM, before the office opens at 9 AM.
  • They continue throughout the working day, often just minutes apart.
  • The warnings stop appearing after office hours (around 5 PM).
  • On one occasion, the logs showed warnings continuing until 9 PM.

No warnings appear on weekends or during inactive periods.
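The pattern above came from bucketing the warning timestamps in the full log by hour of day. A sketch of that analysis (the sample log lines and timestamp layout are assumptions modelled on Asterisk's default full-log format):

```python
from collections import Counter

SAMPLE_LOG = """\
[2024-05-13 08:55:12] WARNING[1234] taskprocessor.c: The 'stasis/pool-control' task processor queue reached 500 scheduled tasks again.
[2024-05-13 10:02:41] WARNING[1234] taskprocessor.c: The 'stasis/pool-control' task processor queue reached 500 scheduled tasks again.
[2024-05-13 10:17:03] WARNING[1234] taskprocessor.c: The 'stasis/pool-control' task processor queue reached 500 scheduled tasks again.
[2024-05-13 16:58:20] WARNING[1234] taskprocessor.c: The 'stasis/pool-control' task processor queue reached 500 scheduled tasks again.
"""

def warnings_by_hour(log_text):
    """Count taskprocessor-queue warnings per hour of day, so any
    office-hours pattern becomes visible at a glance."""
    hours = Counter()
    for line in log_text.splitlines():
        if "task processor queue reached" in line:
            hour = line.split()[1].split(":")[0]  # "HH" from "HH:MM:SS]"
            hours[hour] += 1
    return dict(hours)

counts = warnings_by_hour(SAMPLE_LOG)
```

On the real log, a histogram like this makes it obvious whether the warnings track office hours.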

Combined, the normal office-day call activity, voicemail interactions, device registrations, and FreePBX modules seem to generate enough real-time events to overload the stasis message pool. Will this cause a performance issue on the system or affect end users?

The issue is clearly tied to active hours and real-world call flow, not rogue scripts or background jobs.

Should I leave it as it is, or is it advisable to increase the 500 threshold? I have 12 GB RAM on the VM with 2 vCores, so would it be advisable to increase this? Or can I somehow turn off unnecessary tasks that use up the pool threshold?

All I am using the system for is SIP Yealink phones dialling internally and externally. Only two dedicated extensions have voicemail-to-email, but they aren't extensions with a SIP phone; I am just using them as voice mailbox accounts. All other extensions have voicemail disabled.

Probably not? FreePBX by default doesn’t enable the functionality that reacts to taskprocessor overloads. Stuff still gets processed in PJSIP.

I would really really really need to see “core show taskprocessors” output when it happens.

Without modifying underlying source code and recompiling Asterisk, you can’t adjust it.

OK, next time I see the warning, which will probably be any time during the week, I will post the results of “core show taskprocessors”.

I came across stasis.conf, and there are lots of lines in there commented out to disable certain types of messages. So is the pool queue just messages? What would be the purpose of the messages? Are they for logging? Should they all be kept on?

Also, why do all my extensions show an MWI taskprocessor even though I have voicemail disabled on all except two? I don't really use MWI. Are these wasting the pool queue?

*** UPDATE ***

I asked AI and it suggested I check if these are enabled first:

fwconsole ma list | egrep 'ucp|restapps|sangomartapi|zulu'

then if so, disable:

fwconsole ma disable ucp restapps sangomartapi zulu

Then: remove mailbox= from non-voicemail extensions.
I'm not sure what it means by removing mailbox=.

And finally:

Trim hints / BLF from unused devices

I'm not sure what this means either.

Don’t. Touch. This.

If you don’t know what you’re doing, you will shoot yourself in the foot and stuff will stop working. The stasis message bus is the internal message bus in Asterisk which raises all kinds of messages. Call is placed? A message. Hung up? A message. Endpoint registers? A message. It is NOT logging. Logging is separate. Leave it alone.

No. Having taskprocessors isn’t inherently the problem. Messages on them matter, and it is unlikely that these are a problem.

It’s your system so you can try to use AI if you want, it’s your time. I won’t comment on it.

To put all of this into a non-technical perspective:

  • A river is flooding that we don't expect to flood
  • We haven't looked upstream to know why it's flooding
  • Would making the river wider do something? Maybe, or the source of the flood just gets worse
  • Can we change what puts water into the river? Sure, but there are tons of sources and we don't know which one is the source of the problem

Thank you.

Next time the message pops up I’ll share the full core show taskprocessors

It will probably be on Tuesday or Wednesday when I’m next on site.

Is it worth disabling modules, especially the commercial ones and even the free ones that I'm not really using? Aren't they just taking up potential space in stasis for no reason?

I don't know how they interface or what they use, so maybe, maybe not. They would primarily act as consumers of the messages and would need to actually be connected, most likely to the Asterisk Manager Interface, in order to matter; and if AMI was a problem, you would most likely have had messages about a manager taskprocessor.

You've got @jcolp responding to you on a Saturday, so why ask the village idiot (AI)…? 🙂

So over the weekend I disabled modules that I am not using (mainly commercial ones) and things like WebRTC and UCP, as I don't use them. I went through quite a few and disabled the ones I know I am not using.

I then rebooted the FreePBX server, and so far today (Monday, end of day) it seems to have generally been a good day. I ran “core show taskprocessors” and so far the max depth value is at 73, and in the full log I don't see any stasis warnings.

I am not sure if rebooting the server has helped, or if it is a coincidence, considering the server uptime before this was about 4 to 5 weeks (but then again, should servers really need rebooting? Especially since I've got 12 GB RAM on there and am not even using much of it).

I also disabled the built-in firewall as the system is behind a hardware firewall on a LAN. Not sure if that has made any difference.

I also had an iptables TEE rule which temporarily I removed but once again not sure if that would have been causing the stasis pool warnings.

I'll monitor it over the next few days. But for sure, last week, every working day, the stasis message was showing every few minutes.

Unknown. Could have been some of that, or just a coincidence, or something else changed.

I'll monitor it this week and see how it goes. It's a shame we couldn't get to the root cause. Just a quick question regarding the Max Depth value in “core show taskprocessors”: is this the maximum value the queue has ever reached since the server has been up? So the minute this warning is triggered, the Max Depth value would be 500 or above, and only rebooting clears it? Just checking that I understand.

If that's the case, then keeping an eye on that value to make sure it's not getting too high might be a good idea?

The message occurs when the queue reaches 500 or more.
As the queue is processed, the number can go back down below. Or if something is just flooding it with tasks, it could remain above.
Rebooting “clears” it in the sense that you’re starting fresh.
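That behaviour amounts to hysteresis between the high and low water marks: the warning fires when the queue reaches the high water (500), and once tripped it can fire "again" only after the queue has drained back below the low water (450) and refills. A toy simulation of that logic (thresholds taken from the output earlier in the thread; Asterisk's exact internal bookkeeping may differ in detail):

```python
LOW, HIGH = 450, 500  # low/high water marks from the taskprocessor output

def simulate(depths):
    """Walk a sequence of queue depths and record when the high-water
    warning would fire. Once tripped, it can only re-fire after the
    queue drains below the low-water mark (hysteresis)."""
    tripped = False
    events = []
    for depth in depths:
        if not tripped and depth >= HIGH:
            tripped = True
            events.append(("warn", depth))
        elif tripped and depth < LOW:
            tripped = False
            events.append(("clear", depth))
    return events

events = simulate([100, 480, 500, 520, 460, 449, 120, 505])
# warn at 500, clear at 449, warn again at 505
```

Note that in this model the queue sitting at 460 (between the marks) neither warns nor clears, which is why a flooded queue can stay noisy for hours.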

Yes.

This topic was automatically closed 31 days after the last reply. New replies are no longer allowed.