New FreePBX 17 bash installer - limiting permissions

Thank you for the new installer. It is a great leap forward beyond the ISO days and lends itself to much better fine-tuning, community contribution, security hardening, ansible-ization, etc.

Regarding security, and specifically some of the permissions granting, this jumps out as perhaps a bit too much:

# Adding asterisk to the sudoers list
echo "%asterisk ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers

That line basically turns the asterisk user into root. Before filing a bug and a patch for removal, is there any compelling reason to keep it?
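For comparison, if some elevated access really were unavoidable, a drop-in under /etc/sudoers.d scoped to a short allow-list would be far less dangerous. This is only a sketch; the commands listed are hypothetical examples, not what FreePBX actually requires:

```shell
# Hypothetical sketch: scope sudo to a short allow-list instead of ALL.
# The commands below are illustrative assumptions, not FreePBX's real needs.
cat > /etc/sudoers.d/asterisk <<'EOF'
Cmnd_Alias ASTERISK_CMDS = /usr/sbin/fwconsole restart, /usr/bin/systemctl reload apache2
%asterisk ALL=(root) NOPASSWD: ASTERISK_CMDS
EOF
chmod 440 /etc/sudoers.d/asterisk
# Always syntax-check a sudoers fragment before trusting it
visudo -cf /etc/sudoers.d/asterisk
```

Even a wildcard-free allow-list like this needs review, but it is a very different risk profile from `NOPASSWD: ALL`.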

Here’s another one that could use a look:

chown -R asterisk:asterisk /etc/ssl
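Rather than handing the asterisk user recursive ownership of everything under /etc/ssl, a narrower sketch (the paths and cert names here are assumptions for illustration) would grant read access only to the material Asterisk actually needs:

```shell
# Hypothetical sketch: give Asterisk read access to just its own certs,
# leaving ownership of /etc/ssl untouched. Paths and names are assumptions.
install -d -o root -g asterisk -m 750 /etc/asterisk/keys
install -o root -g asterisk -m 640 /etc/ssl/private/pbx.example.com.key /etc/asterisk/keys/
install -o root -g asterisk -m 644 /etc/ssl/certs/pbx.example.com.pem /etc/asterisk/keys/
```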

For the love of Eight Pound, Six Ounce, Newborn Infant Jesus… why would you put asterisk in the sudoers file?

Ah yes, the classic “let’s give our web server the keys to the kingdom” configuration. To those writing this stuff, here’s why this is such a stellar idea:

  • Security? What’s That? Why bother with pesky things like passwords when you can let your web server, the one directly exposed to the wild internet, run literally anything it wants as root? It’s like leaving your front door wide open and hoping those friendly internet bots don’t decide to redecorate.

  • Targeted Attacks? Pfft. Sure, the asterisk user was intended for telephony stuff, but who says a hacker can’t use it to download a crypto-miner instead? Or maybe they’d prefer a fun rootkit to play with for long-term entertainment?

  • Accident Waiting to Happen: One wrong command from the UI, like the Asterisk CLI module, or one misconfigured script, and boom! You’ve accidentally reformatted your hard drive. Don’t worry, backups are for the weak anyway, right?

  • Job Security (For Hackers): With this setup, you’re practically guaranteeing that some script kiddie will waltz in and leave a little “surprise” for you to discover. They’ll be so grateful for the easy access, they might even send you a thank-you note!

In case you haven’t detected the sarcasm, here’s the serious bit…

This line in the sudoers file gives the asterisk user and your Apache webserver process unrestricted, passwordless root access to your entire system. It’s the cybersecurity equivalent of driving a tank full of gasoline through a fireworks factory.
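If you want to check whether an installer has left this kind of entry behind, a rough audit helper (a sketch, assuming the standard sudoers locations such as /etc/sudoers and /etc/sudoers.d) is easy to write:

```shell
# Rough audit helper: print any passwordless full-root sudoers entries
# found in the given files or directories.
check_sudoers() {
    grep -hR 'NOPASSWD:[[:space:]]*ALL' "$@" 2>/dev/null
}

# Demo against a scratch file rather than the live /etc/sudoers
tmp=$(mktemp)
echo '%asterisk ALL=(ALL:ALL) NOPASSWD: ALL' > "$tmp"
check_sudoers "$tmp" && echo "DANGEROUS ENTRY FOUND"
rm -f "$tmp"
```

On a real system you would run it as `check_sudoers /etc/sudoers /etc/sudoers.d` and expect no output.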

Bug Filed: [bug][SECURITY]: sng_freepbx_debian_install · Issue #23 · FreePBX/issue-tracker · GitHub

True, but you must admit that if you are lazy, it makes things much easier (provided nobody notices) :wink:


Looks like @kgupta pushed an updated script without the sudoers entry

Just submitted PR for an ansible-ized version of the installer to hopefully help catch these things before they happen in the future.


So first they publish an install script giving the Apache user full root rights, and now they leak the private key for their packages. [bug]: "Public" key on · Issue #25 · FreePBX/issue-tracker · GitHub

Ughhhh what security practices are being followed anymore?


Hi @penguinpbx

Thanks for the feedback!
This is why we open source things, so others can review and post suggestions.
Great meeting you at Astricon! I am a big fan of Ansible and as you can see
from the installer script, one can easily re-write it into ansible.
That would be a great community feature :slight_smile:

As for the security issue, I completely agree and, as I indicated, the team made the change.
Please keep the comments coming.

In the spirit of open source and transparency we ask for constructive comments.
I see that some cannot wait to take a cheap shot at all the work we have done to get us to Debian.
Even though some of that ported code is from the old days.

Lets all be constructive and respectful please.
Comment on code and not the people.



It is weird to take a jab while you are giving commentary on cheap shots. I am confused about which of the two items here was old code. Was it the apt key, or the just-written and released installer script?

Just want to clear up, in the spirit of transparency, which of these security issues were accidentally (obviously nobody would port over a security issue on purpose) ported from the old days. Fun fact: GitHub allows you to import Git repositories while maintaining history, so you can know where an issue originated. Probably a good idea to do that so we can see the origin of this pesky insecure code that is magically showing up, and fix it. That way, if something has been happening for years, we know what old systems were at risk.


May I submit that I think some of you guys are being, well, jerks about this?

This is beta code, is it not? It is pretty common in software development to develop beta code in an environment where all security is turned off, essentially running as root. That way your developers are actually focused on fixing REAL code issues, not chasing down a rabbit hole that’s caused by someone forgetting to chmod a+rw a scratchpad directory or some such.

“Get the code working first, then secure it” is common operating procedure among devs. It’s better to have a full-time security officer, whose JOB is breaking into stuff, audit the code after the fact with proper tools. Yes, yes, I know we would all be in Eden if all programmers who wrote code were security experts and didn’t use APIs without boundary checking on user interface input and suchlike, but we aren’t.

I’d also point out that having the FreePBX configuration interface accessible to everyone on your internal LAN is flipping insecure and stupid anyway. It should be locked down to only the IP addresses used by your IT group, at the very least. Even Cisco got nailed with their older UCM/CallManager interface, which still has unfixed vulnerabilities in it.
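For what it’s worth, on a Debian-based box that lockdown can be sketched with an Apache 2.4 `Require ip` rule; the path, config filename, and subnet below are assumptions for illustration, not FreePBX defaults:

```shell
# Hypothetical sketch: restrict the FreePBX admin UI to a management
# subnet using Apache 2.4 syntax. Path, filename, and subnet are assumptions.
cat > /etc/apache2/conf-available/freepbx-acl.conf <<'EOF'
<Directory "/var/www/html/admin">
    Require ip 192.0.2.0/24
</Directory>
EOF
a2enconf freepbx-acl && systemctl reload apache2
```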

Let’s be honest, treating cybersecurity like that annoying task you keep putting off is a recipe for disaster. We’ve all seen the news about breaches and hacks that cost companies millions, ruin reputations, and leave customers out in the cold. The old way of slapping on security measures after our software is already built is like trying to stop a speeding train with a band-aid.

Why is a security-first approach so superior? Think of it like this:

  • Baking It In, Not Bolting It On: Imagine building a house without a solid foundation. You wouldn’t just add some extra support beams later and hope for the best, would you? Security is the same way. Integrating security into every stage of design, coding, and testing makes your software fundamentally resistant to attacks, not just patched over weak spots.
  • Prevention is Cheaper Than a Cure: A security breach is a nightmare. You’ve got downtime, lost data, remediation costs, and a whole lot of angry people to deal with. Investing in security upfront saves you a world of pain (and money) in the long run. It’s like spending a bit more on a quality lock for your front door instead of dealing with a burglary later.
  • Move Fast, Stay Secure: We developers love getting things done quickly. Security-first doesn’t mean slowing down to a crawl. By automating security testing and making it part of our standard workflow, we can ship code faster while knowing it’s protected.
  • Peace of Mind for Everyone: Customers don’t want to worry if their data is safe. Businesses don’t want to be the next headline for a breach. A security-first approach builds trust from the inside out, giving everyone from devs to company execs a good night’s sleep.

Let’s be real, building secure software isn’t always the easiest path, but it’s the only responsible one in a world where cyber-threats are growing every day. It’s time to make security a core part of our DNA as developers, and it starts with ditching that outdated “security as an afterthought” mindset.


You missed reading my post :slight_smile:

I also pointed out that there’s value in having someone who specializes in security looking at this instead of depending on your random developer doing it right.

I do get where you are coming from; what you are missing, though, is that ALL disciplines would be FAR BETTER if the people involved in them knew more about how their little piece fits into the big picture. But this requires people to pull their heads out from time to time and actually spend time learning about things outside of their core competency. And unfortunately, companies don’t want to pay to send people to class on company time when they could be working, and most employees are too shortsighted to be willing to put time into learning anything on their “own time” after work. That’s why we have things like the Champlain Towers collapse, where the people pouring the concrete for the pool deck during construction didn’t slope it for rainwater drainage, since all they knew was pouring concrete for pool decks, and apparently didn’t know that water doesn’t flow uphill.

I’ve worked in IT at companies where programmers writing NETWORK APPLICATIONS didn’t know what the difference between a TCP and a UDP packet was - and very few of them had any inclination to learn. It isn’t just security that is ignored by the guy buried in his programming hole writing a function - it’s every last thing else.

You and I are outliers - willing to spend our precious “off work time” actually learning about things we work with. But the majority are not like this - their “off work time” is generally spent watching Lost reruns…

I don’t have an answer other than to say “You’re right” and then go back to work, and if I can get 1 or 2 security discussions lasting 10-20 minutes out of my IT team a week, I count myself lucky…

Treating security as an afterthought is an invitation to disaster. Have we learned nothing in the past 15 years?

That is a simplification of a complex issue that is so bad it becomes meaningless. It might make a nice bumper sticker, though.

Security has always been a balancing act that starts by understanding the basic risk/reward scenarios used by the criminal mind. Being “secure”, by definition, is being at a place where the amount of effort needed to gun for you is so high that it is not worth gunning for you. This level is different for different installations, and pretending it’s all the same is ridiculous.

A criminal seeks the largest payoff with the smallest effort. A cracker starts by running a bunch of canned scripts against as much of the Internet as they can access. Since this is all automated it’s minimal effort. Since the canned scripts are written by someone else and just picked up for free by the cracker, once more, minimal effort.

So suppose you don’t have anything worth stealing - for example, a FreePBX system that runs 6 extensions in your house. All you have worthy of theft is 1 CPU unit that could be used as part of a DDoS attack against someone else, or used for phishing against you. And the only protection against phishing is user education - teaching people not to be stupid in the digital age (No Grandma, you won’t get $5,000 if you send the nice man $50). The tightest and best security can be defeated by the stupidity of the person using it.

The cracker isn’t going to spend 100 hours breaking into your system to obtain that 1 CPU unit, because there’s always going to be someone else on the Internet with a system that requires far less effort to break into. In fact, the cracker is very likely going to turn up so much low-hanging fruit with his canned script that he didn’t write, fruit that can simply be broken into automatically, that if your low-value FreePBX system is secured against all known canned scripts, you are, by definition, “secure”. You will never be cracked, since the effort isn’t worth gunning for you. So you can be perfectly “secure” even when you treat security as an afterthought.
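Securing a small system against those canned scripts is itself cheap. For instance, enabling fail2ban’s stock asterisk filter raises the bar considerably; this is a sketch, and the log path, ports, and values below are assumptions that vary by install:

```shell
# Hypothetical sketch: ban hosts that repeatedly fail SIP registration,
# using fail2ban's bundled asterisk filter. Values are assumptions.
cat > /etc/fail2ban/jail.d/asterisk.local <<'EOF'
[asterisk]
enabled  = true
port     = 5060,5061
logpath  = /var/log/asterisk/full
maxretry = 5
bantime  = 3600
EOF
systemctl restart fail2ban
```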

However, if you are, for example, a bank, government, or other large org that has significant amounts of “stuff” worth stealing, that cracker may not only spend 100 hours focused on your systems once he finds them (which is trivially easy to do in most cases) trying to break into them - he is likely going to use “off book” methods, custom-written attacks (assuming he knows his cracker trade and isn’t just a wannabe), to attempt to break into them.

So a single individual only needs to keep current on the latest security patches, or completely isolate his system from the Internet, or otherwise take minimal security precautions, while a bank needs to do that plus a whole lot of other stuff. All the single individual needs to do is be “more secure” than someone else. But the bank needs to be truly “secure”, which is likely technically impossible, because the payoff of breaking into a bank is so incredibly high that people will spend an almost unlimited amount of time and effort trying. And the fact is, we still read about bank robberies happening even today, and infosec cracks of banks too. A bank most definitely cannot treat security as an afterthought.

People react to security threats as they become known. For example, at one time we had these things called indoor malls that were popular. These fell out of favor and got replaced by stripmalls. However, stores in stripmalls are insecure when it comes to smash-and-grab thefts. A thief steals a car, drives it into the glass front of the store at night, runs in, steals TV sets or whatever, and runs (or drives) off with an accomplice. So stripmall stores started reacting by putting up concrete barriers (why do you think Target has all those red concrete balls out front?).

Stores realized that increasing security makes the store harder and more unfriendly to access, so competition forces them to strike a balance between security and ease of use.

Software involves the exact same balancing act between security and ease of use. But that balance is different for different installations. A bank’s FreePBX system has to be harder to access online than your home FreePBX system - so it’s wrong to make a blanket statement like you did and assume that it’s right for a FreePBX system in your home to be as difficult to use as a FreePBX system in a bank.

So, let’s just brainstorm here: If I were to, uh, hypothetically engage in some questionable activities, say, targeting a bank or some other colossal entity, I’d obviously opt for the subtle approach – like, rent a botnet, because, you know, who’s even keeping an eye on thousands of small web servers and a handful of extension phone systems? Totally under the radar. And hey, why not sprinkle in a bit of phishing for some quick cash? Oh, wait, these PBXs can make legit calls? Huh, interesting. I wonder how many “legitimate” calls I can loop through to my pals in the Caribbean before anyone bats an eyelash. I’m sure they’ll appreciate me and offer a kickback from the fees they rake in. It’s just so comforting to know that folks believe security measures are exclusively for those “important” servers.


What a lively thread, thanks y’all! :cowboy_hat_face:

Good news is that we can probably let the thread RIP :headstone: as @jfinstrom filed a formal bug that @kgupta @ncorbic and team fixed in a very open manner. :pray: This is a process that the project has lacked for several months and surely many can find at least some satisfaction with this recent improvement.

But before the thread goes under, it seems appropriate to break out a couple of good parts into new, separate threads:


Thanks @penguinpbx for the report.