Shared voicemail across primary and warm spare PBXs using an Amazon S3 bucket

I’ve got Yealink phones that register to a primary PBX and a warm spare. If a voicemail lands on the warm spare, the Yealinks show the new-voicemail alert and increment the message count, but when you press the message button to check it, the voicemail can’t be retrieved because the check-voicemail call goes to the primary.

To solve this, I figured I would mount an S3 bucket at /var/spool/asterisk/voicemail/default using s3fs. I tried this, but FreePBX wouldn’t recognize the mount; all voicemail showed as inactive in Voicemail Admin. So, as a workaround, I have an S3 bucket mounted on a different folder via s3fs on both the primary and warm spare servers, and a cron job on each PBX runs Unison every minute to sync that mount with /var/spool/asterisk/voicemail/default. It’s a little ugly, and since it only syncs every 1-2 minutes there is a small window where the two PBXs can get out of sync and throw errors when the cron job runs Unison.
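For anyone curious, the moving parts look roughly like this (bucket name, mount point, and cron cadence are placeholders, not my exact config):

```bash
# Mount the shared S3 bucket somewhere *other* than the live voicemail dir
# (placeholder bucket/mount names; credentials assumed to come from
# ~/.passwd-s3fs or an instance role):
s3fs my-vm-bucket /mnt/vm-s3 -o allow_other

# /etc/crontab entry on both PBXs: run Unison every minute to sync the
# S3 mount with the live voicemail tree:
* * * * * root unison /var/spool/asterisk/voicemail/default /mnt/vm-s3 -batch -silent
```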

Ideally, I could mount an S3 bucket directly on the voicemail folder and do away with Unison and the cron job. Putting this out there in the event anyone else has solved this, or in case the folks at Sangoma have any input.

I’d be happy to share my solution as well, but since it’s a bit brittle for the aforementioned reasons, I’m not sure it’s anyone else’s cup of tea.

I would be remounting the whole of /var/spool/asterisk/voicemail to catch the symlinks

Yeah, no, you don’t want to mount S3 directly to spool directories…s3fs is very slow. However, if your PBX instances are on AWS you CAN use AWS EFS and mount THAT volume for VM sync between systems. That is an actual NFS-compatible file system; S3 is not. The caveat is that EFS is not accessible outside the AWS VPC for obvious security reasons, so if you are self-hosting or hosting the PBX elsewhere, EFS isn’t an option.
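For anyone following along, getting at EFS from inside the VPC is just a standard NFSv4 mount; the filesystem DNS name and mount point below are placeholders:

```bash
# Plain NFSv4.1 mount of an EFS filesystem (placeholder fs ID/region):
sudo mkdir -p /mnt/efs-vm
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123abcd.efs.us-east-1.amazonaws.com:/ /mnt/efs-vm

# Or, if the amazon-efs-utils package is installed:
sudo mount -t efs fs-0123abcd:/ /mnt/efs-vm
```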

Your best bet in that case is still to use Unison to sync a separate mount point, because you will quickly find that s3fs will bottleneck Asterisk right into the ground if you try to read and write files to S3 in real time like you want to. This is especially true of write operations: S3 was intended for archival and CDN-style distribution, so performance leans toward the read end of things at the expense of writes, which have to propagate, thereby limiting quick read-back and validation of newly written data.

The nice thing about Unison is that it doesn’t mind being run in a constant loop and can be quite quick with the right options set. We use it in our AWS FreePBX HA Cluster solution to sync up via EFS, because while even EFS can bottleneck on super large deployments, it’s a LOT faster than S3. We tried S3 to allow for cross-Region HA, but it was a disaster in real-world testing with all the bottlenecks…so our HA solution is multi-AZ but single-Region using EFS instead.

You can make Unison sync much faster with the -fastcheck option. Since voicemail files don’t constantly change in tiny, near-indistinguishable ways like a text file might, this reduces the deep change-detection work (full content fingerprints and the like) that Unison would otherwise do.
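Note that -fastcheck is a preference that takes a value (true/false/default), so an invocation would look something like this (hostname and paths are illustrative):

```bash
# Force fast checks: compare size/mtime instead of computing full content
# fingerprints for files that appear unchanged.
unison /var/spool/asterisk/voicemail/default \
    ssh://warmspare//var/spool/asterisk/voicemail/default \
    -fastcheck true -batch -silent
```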

Thanks @dicko, I’ll do that. The warm spare syncs every 24 hours, so I figured that would be good enough to take care of the device folder, but I don’t really know what the device folder does. It seems to contain a subset of the folders that are in the default folder.

Thanks @TheWebMachine. I ended up ditching S3 and using Unison alone to sync /var/spool/asterisk/voicemail/default with the warm spare over SSH, plus a cron job on the primary that runs a script every minute which invokes Unison if it isn’t already running. Working like a charm so far.
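In case it helps someone, the guard script is along these lines (hostname, lock file, and paths are illustrative; flock is one way to get the “only if not already running” behavior):

```bash
#!/bin/bash
# Illustrative sketch of the per-minute sync job (cron runs this script).
# flock -n exits immediately if a previous Unison run still holds the lock,
# so overlapping runs can't fight over the voicemail tree.
exec /usr/bin/flock -n /var/lock/vm-sync.lock \
    unison /var/spool/asterisk/voicemail/default \
           ssh://warmspare//var/spool/asterisk/voicemail/default \
           -batch -silent -fastcheck true
```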

They are symlinks, and symlinks can’t be on two separate filesystems. They are there for ‘user and extension mode’, though I believe some other add-ons might rely on them underneath. (They take up no extra space, so that’s one less thing to worry about, as long as the filesystem supports symlinks.)
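If you want to see exactly which entries are symlinks before deciding what to remount, a quick check:

```bash
# List symlinks in the voicemail tree and where they point:
find /var/spool/asterisk/voicemail -maxdepth 2 -type l -ls
```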


If you mount the whole of /var/spool/asterisk (well, even if you don’t), you should look into setting the following in /etc/asterisk/asterisk.conf:

[options]
cache_record_files = yes
record_cache_dir = whatevermakesenselocally

It will keep everything hunky-dory on ‘slow’ file systems (even when your mount disappears for a while 🙂)


As a “legacy” option - introduced in v1.4, I do believe - intended for the computing days of yesteryear when fast storage and buses were super expensive (and still pretty slow by today’s standards), I’d be hesitant to use it nowadays. You’re just asking Asterisk to do more work (double, maybe? since it then has to move files from the cache to the s3fs mountpoint eventually) beyond actually handling your calls…potentially using macros and libraries that may or may not have been updated over the years as storage media and networks got faster and cheaper and this feature started to fall by the wayside.

Using the default local storage location as your “cache” paired with a utility actually designed to do the syncing just makes more sense from a performance management perspective.

Try it; it will save your ass one day. It is simply designed to cache data should the remote file system go stale or disappear. (Network failures are, unfortunately, not yet a ‘legacy’ option.)

