System Firewall is version 188.8.131.52
I recently (4 weeks ago) upgraded from FreePBX 13 to FreePBX 184.108.40.206.
Everything was running smoothly for the first three weeks (no issues).
About 3 weeks and 5 days after the upgrade, the firewall somehow got disabled during the night.
There was no issue re-enabling it. All settings were still there.
But the next night it was disabled again.
Appreciate any info
I think it is an issue with the firewall module. The same thing happened to me, and I had to downgrade to version 220.127.116.11 to get it to stop.
I have one install of FreePBX Firewall 18.104.22.168 where the firewall also keeps turning off for no apparent reason. I have now rolled back to FreePBX Firewall 22.214.171.124
Did you try updating to the latest edge version? I find CLI much easier by running
fwconsole ma upgrade firewall --edge
and along with the firewall, I would also upgrade certman using the same:
fwconsole ma upgrade certman --edge
This helps with the LE validation SSL issues as well.
Originally, we had the automatic System/Module/Security updates enabled in Module Admin.
That broke stuff.
Then we switched on only Automatic Security Updates.
That broke stuff.
We have manually applied Asterisk updates when there had been no reported issues.
That broke stuff.
Now we wait a week to 10 days and watch the forums for issues.
Then we use the following CLI script on our office Test PBX and run it for a few days prior to updating production systems.
yum -y update
fwconsole ma upgradeall
As these are production servers, I won’t run the edge track.
This Firewall bug didn’t show up in the forums, nor did it show up in our own testing. It reared its head when Let’s Encrypt ran on two production servers.
We have experienced many problems upgrading production servers over the last year and are very careful when rolling out upgrades.
This latest bug caught us by surprise. We have found that when Let’s Encrypt runs, it turns off Firewall 126.96.36.199.
Downgrading the firewall to 188.8.131.52 “seems” to have resolved the problem, but it is too early to tell.
I’ll update this post if the downgrade stops the firewall deactivation issue.
I’ve had the same issue for the last several days - check the channel status and I’m being hammered by hackers. The certificate update process is dying with an error, and it seems to be taking the whole firewall down with it.
I’ve temporarily solved the issue by editing the certificate update cron job (/var/spool/cron/asterisk). The default command is:
4 1 * * * /usr/sbin/fwconsole certificates --updateall -q 2>&1 >/dev/null
I just appended the following to the end of that line:
; /usr/bin/fwconsole firewall start
This produces a VERY brief firewall outage, so very few malicious connections hit during the gap.
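For reference, the resulting single cron entry in /var/spool/cron/asterisk would look something like this (paths as in the default install; verify against your own crontab):

```shell
# Renew certificates, then unconditionally (re)start the firewall so a
# crashed renewal can't leave the firewall down until morning.
4 1 * * * /usr/sbin/fwconsole certificates --updateall -q 2>&1 >/dev/null ; /usr/bin/fwconsole firewall start
```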
You need to see what the underlying problem is. From a shell,
su asterisk -c '/usr/sbin/fwconsole certificates --updateall'
should return within a minute or less. Running the same ‘fwconsole’ command as the user ‘asterisk’, leaving out the -q (quiet) flag and the redirection of stderr and stdout into the bit bucket, should expose any problems.
Open a ticket even if resolved. It’s too important an issue to hope the thread is noticed.
Make sure both the firewall and certman modules are current.
I don’t really understand the approach FreePBX uses to open up for Let’s Encrypt. It’s a full firewall restart, which means all rules are wiped, rebuilt with the LE pinhole, then wiped again and rebuilt without the pinhole. If something goes wrong mid-restart, the system is wide open.
The restart is nonsense. I’m sure they wanted to use existing sysadmin logic to bridge the permissions gap, but a simple add/delete of a single rule is all that is needed. If something goes wrong the most that should remain open is http access to the LE folders.
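The add/delete alternative could look roughly like this (a sketch only, assuming iptables and that LE validation arrives over HTTP on port 80; the chain and rule placement are illustrative, not FreePBX’s actual code path):

```shell
# Open a pinhole for the Let's Encrypt HTTP-01 challenge (TCP/80)
iptables -I INPUT 1 -p tcp --dport 80 -j ACCEPT

# ... run the certificate renewal here ...

# Delete the pinhole again. If this step fails, the worst case is that
# HTTP stays open, not that the entire firewall is left down.
iptables -D INPUT -p tcp --dport 80 -j ACCEPT
```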
There’s already a ticket, and they have put it on the low priority “we’ll talk about it on Monday” pile.
I did that; I didn’t include the info because it’s only helpful to the maintainers. But basically it’s “REMOTE_ADDR didn’t parse” when updating the certs.
Also, I hope to heck everyone is using the VOIPBL blacklist in conjunction with the adaptive firewall. It cuts down malicious connections by more than 99% for me.
Even after restarting the firewall I’m getting dozens of attempts in just a minute, until I reload the blacklist which drops them to 0.
Seeing several reports of this issue now. I’ve closed a few duplicates and will track with this one:
I would caution that VOIPBL is mostly a ‘script-kiddies’ list; the more sophisticated operations ‘duck and dive’ to avoid being blacklisted by never coming from the same place twice (well, not very often).
If you don’t listen on UDP/5060, your attempts will similarly approach zero; using TCP transport, even lower. And if you just use TLS on the standard 5061 port, you are far better protected than even whitelisting only known hosts, with all the headaches that causes.
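In chan_pjsip terms, that mostly comes down to defining only a TLS transport and no UDP one. A minimal pjsip.conf sketch (section name and key/cert paths are illustrative; point them at your own certificates):

```ini
[transport-tls]
type=transport
protocol=tls
bind=0.0.0.0:5061
cert_file=/etc/asterisk/keys/default.crt
priv_key_file=/etc/asterisk/keys/default.key
method=tlsv1_2
```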
Shodan and easy whole-IP-range port scanning pretty much negates “security through obscurity” for VOIP protocols. TCP/TLS is absolutely a great step, but changing to 5061 is only really effective against the script kiddies, too.
I’m just running a personal PBX, so the stakes are low. I also keep a window open showing active connections full-time, so every time I get a scan that’s not in the blacklist, I add it. While I run very tight adaptive firewall/intrusion detection settings, without VOIPBL I literally get constant hits. With it, I get a couple hits a week. Personally, I consider it to be essential.
I wonder if they ever actually tested a cert renewal or only tested new certs from the GUI…
It looks like the “restartFirewall” firewall class function being called is intended to be used from the GUI. They need to break it out into separate disable, stop, and start calls - essentially duplicating what fwconsole does.
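From the CLI, those operations already exist as separate fwconsole calls, something like the following (subcommand names from memory; confirm with fwconsole firewall --help on your install):

```shell
fwconsole firewall stop      # stop the firewall service
fwconsole firewall start     # start it again (as used in the cron fix above)
fwconsole firewall disable   # disable it entirely
```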
They should also look at moving the “$api->LE_Rules_Status("enabled")” call a few lines down, AFTER they schedule the “at” restart job. As it is now, they attempt to enable the LE rules, the process bombs out, but the “at” job to disable the LE rule hasn’t been scheduled yet, so the system is left wide open.
Luckily, it should be a pretty simple fix: move one line in Certman.class.php, and replace another line with three in FirewallAPI.class.php.
Anything port-scanning your machine should cause a ban within a very few attempts from the same IP; unless requested, such traffic would never be legit.
My point is that if you just don’t listen on UDP/5060, you won’t see anything you have so far needed to blacklist.
I guess we’re going to have to admit to a core difference of opinion on that point then. While I agree it will seriously cut down on indiscriminate sweeps (most of the aforementioned script-kiddies), it provides almost no protection from any serious attacker.
how many of the attacks you have seen were sent to your domain name versus your ip address and what certificates are you using?
I’ve always had the opinion that obfuscation does have value as long as there is no false sense of security…
The overall rule set and other measures need to be secure with or without it, but cutting the log volume by a few orders of magnitude make finding and dealing with real attempts much easier.
I also feel it probably encourages some potentially serious attackers to move on to the next IP, whereas if they got the first glimmer of a response from the standard port they might come back and try harder.
There you go: if you just don’t accept connections to your IP, then you are 99.999% there. The other 0.001%, I can pretty well guarantee, are NOT on any blacklists.