Problems applying patches

I recently ran into a problem applying patches on one of my systems. When I select the modules I want to update, I get this message:

Downloading sysadmin Error(s) downloading sysadmin:
Unable to connect to servers from URLs provided: http://mirror1.freepbx.org/modules/packages/sysadmin/5.3/sysadmin-2.11.0.41.tgz?installid=b146ee7967fc95c2f6e269d2ff5cf280&brandid=freepbxdistro&astver=11.7.0&phpver=5.3.3&distro=freepbxdistro&distrover=4.211.64-10&fpbxver=2.11.0&ucount=55,http://mirror2.freepbx.org/modules/packages/sysadmin/5.3/sysadmin-2.11.0.41.tgz?installid=b146ee7967fc95c2f6e269d2ff5cf280&brandid=freepbxdistro&astver=11.7.0&phpver=5.3.3&distro=freepbxdistro&distrover=4.211.64-10&fpbxver=2.11.0&ucount=55
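(A quick way to see exactly where that download dies, i.e. DNS lookup vs. TCP connect vs. an HTTP error, is to try the fetch by hand from the shell. This is just plain curl against the mirror hosts, nothing FreePBX-specific:)

curl -v -o /dev/null http://mirror1.freepbx.org/    # -v prints the resolve, connect, and HTTP steps so you can see which one fails
curl -v -o /dev/null http://mirror2.freepbx.org/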

I talked with FreePBX support because I also can't get a license code for this machine and it tells me to call. While working with them we determined that I could not ping either of the mirrors, but I could ping other sites. So I opened a problem ticket with our VPS provider. They had me do some further diagnostics (which I should have thought of anyway):

[rreuscher@atlanta1 ~]$ sudo mtr -rwc1 mirror1.freePBX.org
HOST: atlanta1 Loss% Snt Last Avg Best Wrst StDev

  1. router2-atl.linode.com 0.0% 1 0.6 0.6 0.6 0.6 0.0
  2. 64.22.106.13 0.0% 1 0.6 0.6 0.6 0.6 0.0
  3. 209.51.130.213 0.0% 1 0.5 0.5 0.5 0.5 0.0
  4. 10gigabitethernet1-3.core1.atl1.he.net 0.0% 1 9.6 9.6 9.6 9.6 0.0
  5. 10ge10-4.core1.chi1.he.net 0.0% 1 33.8 33.8 33.8 33.8 0.0
  6. us-signal-company-llc.10gigabitethernet1-2.core1.chi1.he.net 0.0% 1 26.8 26.8 26.8 26.8 0.0
  7. host-70-34-130-109.host.ussignalcom.net 0.0% 1 32.8 32.8 32.8 32.8 0.0
  8. te3-0-1.chi.ussignalcom.net 0.0% 1 34.0 34.0 34.0 34.0 0.0
  9. te1-0-0.pe01.ind.ussignalcom.net 0.0% 1 32.6 32.6 32.6 32.6 0.0
  10. te0-0-0-0.agg01.ind.ussignalcom.net 0.0% 1 32.6 32.6 32.6 32.6 0.0
  11. host-184-175-155-194.host.ussignalcom.net 0.0% 1 32.3 32.3 32.3 32.3 0.0
  12. ??? 100.0 1 0.0 0.0 0.0 0.0 0.0
  13. net-74-207-211-74.arpa.fidelityaccess.net 0.0% 1 30.1 30.1 30.1 30.1 0.0
  14. ??? 100.0 1 0.0 0.0 0.0 0.0 0.0

[rreuscher@atlanta1 ~]$ sudo mtr -rwc1 mirror2.freePBX.org
HOST: atlanta1 Loss% Snt Last Avg Best Wrst StDev

  1. router2-atl.linode.com 0.0% 1 6.2 6.2 6.2 6.2 0.0
  2. 64.22.106.13 0.0% 1 127.2 127.2 127.2 127.2 0.0
  3. xe-8-2-3.edge4.Atlanta2.Level3.net 0.0% 1 5.2 5.2 5.2 5.2 0.0
  4. vlan52.ebr2.Atlanta2.Level3.net 0.0% 1 25.5 25.5 25.5 25.5 0.0
  5. ae-3-3.ebr2.Chicago1.Level3.net 0.0% 1 21.0 21.0 21.0 21.0 0.0
  6. ae-5-5.ebr2.Chicago2.Level3.net 0.0% 1 19.7 19.7 19.7 19.7 0.0
  7. ae-201-3601.edge2.Chicago2.Level3.net 0.0% 1 19.6 19.6 19.6 19.6 0.0
  8. WISCONSIN-C.edge2.Chicago2.Level3.net 0.0% 1 22.6 22.6 22.6 22.6 0.0
  9. static.66.185.29.241.cyberlynk.net 0.0% 1 22.8 22.8 22.8 22.8 0.0
  10. ??? 100.0 1 0.0 0.0 0.0 0.0 0.0

This clearly shows that we are leaving the provider and getting lost somewhere else along the route.
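Since the actual failure is an HTTP download, a probe over TCP port 80 tells you more than ICMP here (the end hosts may simply drop pings). Assuming a traceroute build with TCP SYN support and that nc is installed, something like:

sudo traceroute -T -p 80 mirror1.freepbx.org   # TCP SYN trace to the HTTP port; needs root
nc -zv -w 5 mirror1.freepbx.org 80             # quick check: does a TCP connect to port 80 succeed at all?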

Here's what went on when I first encountered the problem. I got a notice that one of the modules has a vulnerability, so I was going to apply updates to my systems. (All of this is new; we just put the new external VPS FreePBX servers in place in late November, and I hadn't applied maintenance yet, as we have been working on other issues since then and have developed a maintenance schedule for applying updates.)

In researching the vulnerability and all the updates, I noticed that the "firmware" was at 4.211.64-7 and the current release is 4.211.64-10, so I decided to get it all done at once. I have three systems: a production system, a hot standby system, and a test system. I downloaded and applied the firmware updates to the test system, then updated any remaining modules that showed updates available. The test system worked fine. The test system is not in the VPS environment (at least not one we pay for); it runs as a VM on my work Mac, internal to the company.

I went to do the same thing on the backup system and applied the firmware updates successfully, but now when I attempt to update any modules that have updates (and there are a few, especially since it's been more than a week since I've had the problem) I get this error and can't get any updates. It tells me that there are some, and I've been getting my daily emails saying there are some, and the list has grown slightly since the first time I experienced the problem. So somewhere it's talking to something to get that information.
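I believe 2.11 still has the module_admin CLI through amportal; running the check and upgrade from the shell sometimes gives a more useful error than the GUI does (commands from memory, so check the usage output if the action names differ):

amportal a ma listonline    # fetches the online module list, same check the GUI does
amportal a ma upgradeall    # attempts the downloads; errors print to the console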

The test system can apply updates; I just did it today. But not the backup system. I'm loath to try the production system, because I don't want to get it into the same state. (They are running in different data centers with the hosting company, but neither can successfully ping either mirror.)

I'm at a loss as to what could be causing this issue. The hosting company says we're getting out, so it's not their problem; Schmooze says we don't see you, so it's not our problem. Any suggestions on where to look and what to look for to determine the problem?

OK, a couple of things. The mirrors don't respond to pings, so that is normal.

Also your path traces show you are hitting the firewall. Everything behind the firewall is hidden, as it should be.

BTW, this is not rambling; I own the company that hosts mirror1.

Do you have a firewall that is blocking the download?

OK, I went ahead and tried to update the module with the vulnerability on our production system, and that did work. I was able to download the update and apply it.

The only visibility I have is into the software firewall on the VPS. Both systems have exactly the same entries in iptables, so I don't think that is it.
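To rule it out completely, the rule sets can be dumped and diffed rather than compared by eye, and the per-rule packet counters show whether anything is actually matching while a download is retried (plain iptables, nothing FreePBX-specific):

iptables-save > /tmp/rules-$(hostname).txt   # run on each box, then diff the two files
iptables -L -n -v                            # per-rule counters; retry the download and see if any DROP rule increments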

At least I have the vulnerability fixed on both systems now; I just need to figure out why I can't update my backup system.

The backup system was cloned from the production system, the license info was deleted per your doc instructions, and some trunking definitions were changed to allow both systems to run active all the time, so they weren't both trying to use the same connection with our SIP provider. I have in the past put lots of maintenance on the production system prior to its implementation, but since implementation I have not, nor have I put any maintenance on the backup system until just recently. The test system was built from scratch using your distro ISO, since it's not running at one of our VPS sites. Perhaps something else needed to be changed after the clone; everything other than maintenance has seemed to be working fine.
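One thing that might be worth comparing between the clone and production, in case something network-related carried over, is basic routing and name resolution; these are generic checks, not FreePBX-specific:

ip route show                                          # default route and any stray static routes left from the clone
cat /etc/resolv.conf /etc/hosts                        # cloned DNS settings or a stale hosts entry
getent hosts mirror1.freepbx.org mirror2.freepbx.org   # what each box actually resolves the mirrors to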

Any ideas on what I can look at to track down the problem? I'd like to be able to update my systems again, and I don't want to apply the firmware updates to my production system if they are going to cause the same problem, i.e. put me in an unmaintainable state.

Who is your VPS provider?

Linode.

I'm not sure that's relevant. Both FreePBX systems are running on different Linodes in different data centers. I have in the past been able to successfully install firmware updates and module updates on our production system. That was before it went into production and we only had the one; I was keeping it up to date daily as I was getting ready for the cutover. Since it went into production, I've cloned it to the backup system and have not applied any maintenance, to keep them in sync.

The plan is to have a maintenance schedule where updates go first to the test system (running internally on my Mac via Parallels), just to check basic functionality. (I once applied a fix that broke the GUI, which I was able to repair with some help, but I don't want to do that again on either the production or backup system, so everything goes to test first to catch any glaring issue.) Then updates go to the backup system for functional testing for a day or two, since it has access to our SIP provider and everyone has an extension on that system, and then to production off hours.

I've done step one (test) and step two (backup), but now I can't apply module maintenance on the backup system. I have been able to apply module maintenance on the production system (I went ahead and applied just the fix for the vulnerability there and it worked, so I know that system can still reach the mirrors to download maintenance).

Something must have happened to the backup system during the firmware update which is preventing it from getting to the mirrors.

That's what I'm trying to figure out: what got hosed.
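A packet capture on the backup while retrying a download should at least show whether the SYNs even leave the box and whether anything comes back; something like the following, with the interface name assumed (adjust to whatever the Linode uses):

sudo tcpdump -n -i eth0 'host mirror1.freepbx.org and tcp port 80'
# no outgoing SYNs = the box itself is blocking; SYNs with no reply = the traffic is dying somewhere upstream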