QA, let's talk about it yet again

How this[1, 2, 3, 4] happens is pretty easy to guess: the FreePBX QA process is only run in limited scenarios; in this case, a scenario with Sangoma DPMA endpoints already deployed. Obviously no fresh install was attempted, otherwise the configuration issue would have been quickly discovered. I'd guess someone ran apt update && apt upgrade on an existing install (complete with DPMA endpoints, of course) and called it good.

As often as QA has been complained about on this forum, no one has ever given a decent answer as to how QA is actually performed. Everything is just hand-waving: “yes, there is QA.” The edge repo had a policy at one time; now, who knows. Something might stay in edge for weeks, and some things just skip right to prod. Does edge come after QA or during the QA process? Is there any kind of checklist before advancing modules to prod? What about software released through apt? How long does that go through QA before it is released?

 


[1] FreePBX 17.0.19.28 breaks when upgrading Asterisk from 22.4.1 to 22.5.0-1.sng12 - #2 by jcolp

[2] ISO FreePBX 17 SNGDEB-PBX17-amd64-12-10-0-2504-1.iso and SNGDEB-PBX17-amd64-12-8-0-2503-1.iso borked install -> endless loop

[3] Failed to load module res_pjsip_endpoint_identifier_dpma.so

[4] Freepbx install failing - #23 by memxit


Would be good to know the process for peace of mind.

Hi All,

As the owner of FreePBX QA at Sangoma, I’d like to take a moment to explain our QA process.

Firstly, I acknowledge that we missed testing the new DPMA binaries on a fresh version 17 installation. Our initial focus was on validating upgrades on existing systems, where we confirmed that DPMA functionality continued to work as expected after the update.

To address this gap, we've updated our test plan in TestRail (a tool we use to document our FreePBX test cases) to include additional test cases that specifically validate all new binaries (whether DPMA or Asterisk) on fresh installations, during Asterisk version switches, and on upgrades of existing systems.

Regarding EDGE vs STABLE:

As outlined in our wiki, EDGE modules are generated immediately after a developer pushes a fix that has only undergone unit testing by the developer.

QA then picks up the EDGE module, reviews the list of fixes included, and performs basic sanity checks to ensure the fixes work as intended and do not break existing module functionality.

Once validation is complete, the module is promoted to STABLE, making it available for production environments.

Following promotion, we also conduct post-validation to ensure modules upgrade properly from existing STABLE systems—through both the GUI and CLI.

The time frame for moving a module from EDGE to STABLE typically spans around two weeks. This may vary depending on QA team priorities, the number of fixes, and overall workload.

Additionally, based on the complexity of a fix or its potential impact, developers may involve QA even before pushing a module to EDGE. This early collaboration helps prevent regressions and ensures stability in EDGE environments.

Please feel free to share any suggestions or ideas you may have for further improving our QA process. We’re always open to feedback.

Best regards,
Kapil


Please outline this unit testing, since the design of FreePBX still doesn't lend itself to proper unit testing. Nothing ships with built-in unit tests, and there's no official PHPUnit integration or CI. The BMO, the database layer, and even the massive $freepbx class are globally scoped, which makes proper unit testing harder. There's too much reliance on dynamic magic methods, which makes it harder still. Not to mention all the core modules (core, IVR, ringgroups, etc.) still rely on legacy (pre-PHP 5.x) code, which does not lend itself to useful or easy unit testing.

So yeah, please describe how this unit testing is actually done since FreePBX isn’t designed for it.
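To make the coupling complaint concrete, here is a minimal sketch of why globally scoped services resist unit testing and how constructor injection avoids it. All class names here are hypothetical, invented for this illustration; they are not real FreePBX classes.

```php
<?php
// Hard to test: the method reaches into a global service locator,
// so a unit test must bootstrap the whole framework first.
class RingGroupLookupGlobal
{
    public function destination(int $group): string
    {
        global $freepbx; // hidden dependency on global state
        return $freepbx->Ringgroups->getDestination($group);
    }
}

// Easier to test: the dependency is declared and can be stubbed.
interface RingGroupRepository
{
    public function getDestination(int $group): string;
}

class RingGroupLookup
{
    public function __construct(private RingGroupRepository $repo)
    {
    }

    public function destination(int $group): string
    {
        return $this->repo->getDestination($group);
    }
}

// A few lines of stub replace the database in a unit test.
$stub = new class implements RingGroupRepository {
    public function getDestination(int $group): string
    {
        return "ext-local,{$group},1";
    }
};

$lookup = new RingGroupLookup($stub);
assert($lookup->destination(600) === 'ext-local,600,1');
echo $lookup->destination(600), PHP_EOL; // prints ext-local,600,1
```

The injected version can be exercised in isolation in microseconds; the global version cannot be tested without standing up the framework and a database.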

Can you also provide the reasoning behind yourself and other Sangoma developers driving the general public to the EDGE repos as the fix for a problem, given these processes that supposedly still have to validate what is in EDGE? It seems irresponsible to just tell people “the fix is in edge, use that” if there is an entire two-week process to go through.

A lot of this is hard to believe considering that in 2025 alone I've seen numerous instances where a PR was submitted and accepted, and a new release was in stable within 24 hours of the PR being submitted. In other words, I've seen quite a few instances where this entire QA and EDGE timeline wasn't used and the “stable” release happened within days of the changes being submitted.

Finally, why don't the Sangoma developers follow the exact same development guidelines as the rest of us? I've yet to see a single Sangoma developer actually submit a PR; they just make direct commits, and there's not a single public review of the changes being made.

It just seems that there's no real evidence that the process outline you gave is actually being followed properly, and even when it is, it isn't consistent.

Hi @BlazeStudios

Please find my inline answers -

Unit testing in this context refers to the developer manually testing the fix—essentially the developer verifying their own changes. It does not refer to automated unit tests run through a testing framework.

I’ve seen numerous instances where a PR was submitted then accepted and a new release was in stable within 24 hours of the PR

This happens when a fix is important enough to be pushed into the STABLE release as soon as possible. Of course, every fix pushed to STABLE will still go through the full QA cycle.

I’ve yet seen a single Sangoma developer actual submit a PR they just do direct submissions and there’s not a single public review of the changes being made.

For all the open source modules, PRs are raised on GitHub.

Also, historically, EDGE is not intended for use in production environments. It's meant for those who want to try out fixes as soon as they become available.

Please let me know if you have any further queries.

Best Regards,
Kapil

Please understand that Unit Testing has a meaning. Unit Testing and manual testing are two completely different things, so claiming that Unit Testing is being done implies that there is automation and that testing is done in a very specific manner. The reality is that zero Unit Testing is being done; it's all manual. Those are very important distinctions.


Agreed; that's why I clarified in my next reply that it's purely the developer's own testing, to avoid any confusion.

Best Regards
Kapil

Whatever you are doing, regardless of the number of paragraphs written about it, isn’t adequate. Significant issues are being missed that are causing phone system outages for businesses.

You need to decide if you want to be a consumer plaything or a business tool. If the latter, QA must be better, and the “you have to prove it’s a bug and give us unsupervised access to your system” approach needs to get modified.

Part of my drive for creating this topic was that a client for whom I recently set up FreePBX 17 saw me struggling through several bugs and witnessed some himself, then asked about my confidence in the product and whether there might be any alternatives. First time I’ve been asked that in years of installing FreePBX. You can only say “upstream” so many times; as installer you own it. I would really like things to “just work” as they once did, and I believe that was true because testing and QA were of better quality in the past.


I first encountered the term Unit Testing over 50 years ago, and automation was never mentioned as essential then, although repeatability was important, and there would have been a script (which is actually often the best (most complete) documentation of what a program is supposed to do).

Also, I think this thread is more about QC than QA.

I am working through unit testing on FreePBX… When I get a moment I will see if I can document it. A lot of the current stuff (not all) I am doing in FreePBX is unit tested. Note that I am also using proper code patterns in my new stuff and decoupling as much as possible to help make testing easier. In any case, if someone has basic knowledge of FreePBX and can master Google, they can unit test the newer versions of FreePBX. Of course this takes time and development cycles, and assignment of that time is up to management and those in charge of the purse. Note this doesn't really cover the front end; that is a whole different mess with browser automation.

What has been repeatedly asked for are test plans. If you are manually testing, you typically have a test plan that says something like:

Module depends on step A, B, C before starting.  (nuke data, set up other item)
Following steps between each test A, B, C...
1. Do FOO, expect BAR
2. Do BAZ expect QUX
!!Module affects other module!!
Other module 2: do FOO, expect BAR
......

AKA instructions for human-based implementation testing. These can then be converted to some kind of front-end testing framework, maybe even vibe coded with AI.
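As a sketch of that conversion, a manual plan like the one above can be expressed as data plus a tiny runner. Here doFoo and doBaz are hypothetical stand-ins for real module actions; an actual harness would drive the GUI or an API instead of local functions.

```php
<?php
declare(strict_types=1);

// Hypothetical stand-ins for real module actions.
function doFoo(): string { return 'BAR'; }
function doBaz(): string { return 'QUX'; }

// The manual test plan, expressed as data.
$plan = [
    ['name' => 'Do FOO, expect BAR', 'run' => 'doFoo', 'expect' => 'BAR'],
    ['name' => 'Do BAZ, expect QUX', 'run' => 'doBaz', 'expect' => 'QUX'],
];

$failures = 0;
foreach ($plan as $step) {
    $got = ($step['run'])();            // run the step
    $ok  = ($got === $step['expect']);  // compare against the expectation
    printf("%-22s %s\n", $step['name'], $ok ? 'PASS' : "FAIL (got {$got})");
    if (!$ok) {
        $failures++;
    }
}
// In a real harness, $failures would drive the process exit code.
assert($failures === 0);
```

The point is that once the plan is data, adding a step is one array entry, and the same plan can later be replayed by a browser-automation driver.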

One other thing I recommend to reduce QA/coding errors is static analysis. I use Psalm (vimeo/psalm on GitHub), a PHP static analysis tool for finding errors and security vulnerabilities in PHP applications. It is initially very painful, but after you clear out the first batch of issues, future development is pretty smooth. This also requires using type declarations, which are supported in recent PHP versions.
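As a small illustration of the payoff (Extension and findExtension are invented for this sketch, not real FreePBX code): Psalm's nullability checks flag a call on a possibly-null return at analysis time, exactly the kind of bug that otherwise surfaces as a runtime fatal.

```php
<?php
declare(strict_types=1);

// Hypothetical classes for illustration only.
final class Extension
{
    public function __construct(private string $name) {}
    public function getName(): string { return $this->name; }
}

/** @param array<int, Extension> $extensions */
function findExtension(array $extensions, int $id): ?Extension
{
    return $extensions[$id] ?? null; // may legitimately return null
}

$exts = [100 => new Extension('Front Desk')];

// Psalm reports PossiblyNullReference on the naive call:
//   $name = findExtension($exts, 200)->getName();

// The null-safe version passes analysis and behaves sanely at runtime.
$name = findExtension($exts, 200)?->getName() ?? 'unknown';
assert($name === 'unknown');
echo $name, PHP_EOL; // prints unknown
```

Without the `?Extension` return type and the docblock, the analyzer has nothing to check, which is why adopting types is the prerequisite.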

This all sounds good, but is practically impossible to do comprehensively with the code base in its current state. Of course one could come up with unit tests for a few methods here and there, but there are absolutely critical lines of code that haven’t been touched in almost 20 years.

The root of the problem is – as you know – that Sangoma is not going to commit to the year or two it would take to modernize the code because there’s no short-term financial reward in it for them.

I can’t even imagine what Psalm or PHPStan would make of this thing lol


I would be happy to slowly refactor, decouple things, and add tests. What I won't do is waste time and effort. My recent PRs have been ignored or summarily rejected, usually citing things probably generated by AI… “how can we reject this without it looking malicious”. Basically, I won't put in effort where it is unwanted.

I would also like to note that any of the FreePBX developers can contact me (email/DM) with questions on what, where, how, or why, and I will be happy to answer, having almost 20 years in the codebase. When I left, I promised the lead at the time (Matt Frederickson) that I would be happy to help as a resource.


My concern here is that Kapil is telling us the developer who wrote the fix verifies it, and that's that. That's where it goes wrong: the dev who writes the fix and verifies his/her own fix should then have it verified as correct by another, senior dev, who then pushes it to edge/stable/wherever.

What's the point of a bad dev writing a bad fix and signing off on his/her own bad fix?

It's kind of like council workers doing a temporary pothole fix knowing it will break up and have to be done again and again and again, justifying their own continued employment.


That's a bad analogy, as it reflects a popular misconception. It's actually done that way for a couple of reasons: firstly, a complete remake of the surface is expensive and involves a long closure of the road; secondly, a lot of potholes develop in winter, but permanent repair can only be done well in warm, dry weather. As such, it is deliberate policy to make temporary repairs until the road gets to about 15 to 20 years since its last makeover (for a busy road). Even then, I think that is just for the surface, and a deep reconstruction is done even less frequently.

In software development terms, it is closer to an analogy for “technical debt”. It's different, though, in that in software the debt accumulates through design changes, which are things you do not know about in advance (except that there will be some) and which often sit just outside the planning horizon, whereas road repair is a repeat of a well-known process that is planned for, although politics and budget constraints may mean it doesn't get done as soon as originally intended.


Step 1. Reproduce, hopefully the person creating the ticket gave enough information
Step 2. Attempt Fix
Step 3. Try to reproduce
Wash, rinse, repeat

Getting into the weeds about how to QA code, whether QA is the correct term, etc. is a distraction. I don’t know how to design, build, or make sure an airplane is correctly assembled before I get in it. The (very reasonable) assumption is that this all gets properly done. Boeing doesn’t ask “How would you have built the wing?” or “Did you do your own testing before you got on?” or say “Well maybe you should volunteer your time to help us build airplanes.”

Well that’s covered by the UPPERCASE section at the end of the GPL. Such are the joys of open source software. But at least we can see what’s going on and tell the devs when they’re doing a half-assed job.
