Quick one before I find a fix tomorrow or go into more detail here: I can’t update modules after migrating. DNS is OK, there are no obvious issues, and yum update works. It was a v16 to v16 migration to a new IP and new host, and I’ve checked all the obvious things. It isn’t the cloud VM host or router; I’ve proven I can reach the mirrors from the same network by spinning up a new instance, and I can ping the mirrors too. Cheers, K.
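For anyone following along, the checks described above (DNS resolves vs. the mirror actually answering HTTP) can be separated with a small sketch like this. This is purely illustrative: the `check_mirror` helper and the two-stage test are my own assumptions, not anything from the FreePBX update code, and you'd pass in the real mirror hostname yourself:

```python
import socket
import urllib.request

def check_mirror(host, port=80, timeout=5):
    """Return (dns_ok, http_ok) for a repo mirror host.

    Hypothetical diagnostic helper: first confirms the hostname
    resolves, then tries a HEAD request to see whether the web
    server actually answers. DNS working does not imply the
    mirror's HTTP endpoint is reachable.
    """
    try:
        # Stage 1: DNS resolution only.
        addrs = socket.getaddrinfo(host, port)
        dns_ok = len(addrs) > 0
    except socket.gaierror:
        return (False, False)
    try:
        # Stage 2: can we complete an HTTP request to the host?
        req = urllib.request.Request(f"http://{host}:{port}/", method="HEAD")
        urllib.request.urlopen(req, timeout=timeout)
        http_ok = True
    except Exception:
        http_ok = False
    return (dns_ok, http_ok)
```

The point of splitting the stages is that ping and DNS can both succeed while the repo's HTTP endpoint is still blocked or filtered, which is exactly the symptom here.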
Thanks James. Still not working, and I’m sure something is broken. I had no Zend resets left on this instance, but that was replaced with “2 Hardware Resets Remaining” following the upgrade from v15 to v16. The migration to a new server then took this down to “1 Hardware Reset Remaining”. The Sangoma Portal contradicts this, though: “Hardware lock cannot be reset, maximum attempts reached”. Could this be related to the issue?
This is happening on multiple new v16 deployments now, all using “old” unused activation IDs with valid SysAdmin Pro licenses and expired Endpoint Manager licenses. None of them can reach the mirrors to update. What am I missing here? Does a bad or expired deployment ID, if there is such a thing, somehow block an attempt to update? The PBXs activate fine. I don’t usually re-use old deployment IDs, but these are test PBXs and I need SysAdmin Pro, which hasn’t expired. This might be irrelevant, but updating modules is always the first thing I do on a new deployment after the firewall wizard, so logically I’m leaning towards whatever I’ve done differently here.
I can confirm it is not a network issue. I deactivated and reactivated with a new deployment ID and could reach the mirrors immediately. What is it checking for, and why is it blocking a bunch of our old unused deployment IDs from performing updates?
Thinking out loud: are these old unused deployment IDs expecting a different version to a fresh install of v16? They would have been on version 14 when last used. It’s a bit of a ballache installing version 14 just to find out, so if anyone knows, please say. I will if I have to, but with no Zend resets left, the more I try, the more of these IDs get wasted. I’ve re-used old deployment IDs loads of times in the past and many are still active with no issues, so it’s got to be something along these lines.