Automation with FreePBX/Softphone

Does Sangoma Connect integrate with Microsoft Power Automate, Zapier, or Make.com?

I want to use my PBXact system and SIPStation, but I want a softphone API that can connect to Microsoft Power Automate so I can make automatic outbound calls and handle inbound calls with AI voice recognition. Twilio has this service, but I want to stay with SIPStation.
My business needs to use AI with the phones now. For FreePBX to stay relevant far into the future, an API that connects with these automation platforms is needed.
I found some of what I am asking for in Sangoma CX 7.4, but there is very little information available. I guess I have to set up a sales call, but if there is no video of the actual features, then I doubt it works well, as it would be something worth showing off.

Please lead me in the direction of using my Sangoma hardware to make this happen.

Without a specific description of the use cases and the functionality you are looking for, I can only guess here, but this definitely sounds like something that wouldn’t be a softphone feature but rather something that would need to be implemented on the Asterisk or possibly FreePBX side.

That would mean either custom contributions by you (or somebody who needs similar functionality) to the Asterisk or FreePBX projects, or a feature request that will be taken into consideration by the development team.

Bringing automation to Asterisk needs to be possible; has anyone done this? synthflow.ai is the automated system I would like to use or replicate with my locally hosted PBXact system. I would like to use Jan.ai to interact with the person. synthflow.ai works with Twilio and GPT-4. It is easy to set up, but I would like to know if anyone uses this with their FreePBX or even Asterisk-based system. Maybe the call can be forwarded to this synthflow.ai system if not answered in 5 rings; that is what I am going to do now.

There are various APIs. AMI is probably the most relevant. What is less likely to exist is the integration with Microsoft’s proprietary protocols, as open source developers tend not to frequent the Microsoft world and Microsoft generally doesn’t volunteer back ends.

If Power Automate supports Microsoft’s legacy API, TAPI, there have been, and probably still are, TAPI adaptors for AMI. You may need to understand some of what goes on under FreePBX’s hood, as AMI works in Asterisk terms.
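For a flavour of what AMI looks like on the wire, here is a minimal sketch in Python that logs in and fires an Originate over the raw TCP protocol; the host, credentials, channel and destination are placeholders you would swap for values from your own manager.conf and dialplan:

    import socket

    # Placeholder AMI credentials -- define a matching manager user in manager.conf
    HOST, PORT = "127.0.0.1", 5038
    USERNAME, SECRET = "automation", "changeme"

    def ami_action(sock, **headers):
        """Send one AMI action: key/value headers terminated by a blank line."""
        msg = "".join(f"{k}: {v}\r\n" for k, v in headers.items()) + "\r\n"
        sock.sendall(msg.encode())

    with socket.create_connection((HOST, PORT)) as s:
        ami_action(s, Action="Login", Username=USERNAME, Secret=SECRET)
        # Originate a call: ring extension 100 and send it to a dialplan destination
        ami_action(
            s,
            Action="Originate",
            Channel="PJSIP/100",
            Context="from-internal",   # FreePBX's internal context
            Exten="*43",               # placeholder destination (echo test feature code on a default FreePBX)
            Priority="1",
            CallerID="Automation <100>",
            Async="true",
        )
        ami_action(s, Action="Logoff")
        print(s.recv(4096).decode(errors="replace"))

A real integration would read and check each Response/Event line rather than firing actions blind, or lean on one of the existing AMI client libraries, but the protocol itself really is that plain.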

Tell me more about this AMI integration API. Do you use it? Where can I go for more information? Any other suggestions out there on how to bring automation to FreePBX?

https://docs.asterisk.org/Configuration/Interfaces/Asterisk-Manager-Interface-AMI/

and following pages.

http://asteriskdocs.org/en/3rd_Edition/asterisk-book-html-chunk/asterisk-AMI.html

for older, but still generally valid information.

Remember not to confuse the questions “can it be done?” and “can I do it?”. The answer to the first is almost always yes. The answer to the second is more complicated. Can these integrations be done? Yes. Are they currently baked into the Sangoma open source PBX offerings? No. Can you pay someone or write them? That is between you, your skill set and your accountant.

Anything with an official or unofficial API can generally be tied in via countless methods. Basically, if Linux can do it, Asterisk can do it by way of Linux.

The first link is dead; the second link has a lot of code, but no examples of direct implementation. Maybe no one on this forum has done this phone automation before.

I am not hypothetically asking if software can be written to do this. I want to know if anyone on this forum has done phone automation with their Asterisk system, and would like an overview of what it can do and resources to follow so I can implement it on my own. Perhaps this FreePBX forum is only for simple phone systems and will never discuss integration with synthflow.ai, a system similar to synthflow, or forwarding calls to synthflow from Asterisk. I intend to just forward after-hours calls or voicemail calls to synthflow to handle, and keep working on that integration.

It comes across as you wanting people to do all the work for you, including the research. People above literally gave links. It is open source; people do all sorts of integrations, or pay to have them done. Start writing and testing with the info provided so far, and when you have done something and it doesn’t work, come get feedback and help. This is a very knowledgeable and helpful community, but the responses you get are directly linked to the quality of the question. This isn’t a Google proxy, and “let me Google that for you” links are frowned upon, so people with the knowledge will generally avoid or ignore no/low-effort posts.

“Has anyone done X…” Probably, but what does that have to do with the price of Bitcoin on a Saturday?

“I tried to do X and it did Y; what may be wrong?” That gets answers.

The formula is try and help others help you.


I would characterize his response as exactly correct.


I’m responsible for much of the code used by all of the people in this community, both paid and unpaid. My open source work speaks for itself. I don’t actually take on paid work personally; I only do work for my employer. Anything else I do is on the condition that it goes back to the community or open source. Feel free to carry on, though. Not that any of my other advice was taken, but I’ll offer a touch more: be less adversarial.


The first link is live, although it is to a title page, and you need to look at the following pages for the details.

If I had specific examples, you would still need to understand the basics in order to adapt them to your use case.

I reviewed the AMI documentation; there is a lot to take in. I would like to work on one small project at a time utilizing it, but as a script kiddie, I can only get so far. On the official Asterisk YouTube channel, they demonstrated integration with a transcription service (Call recording & transcription - Threads); I am not sure if a locally hosted version of this exists. At AstriCon they talk about and pseudo-demonstrate test ideas. I think most FreePBX users like locally hosted hardware. I have 2 PBXact UC 40 units, and I want to keep the money in the community and not put it into cloud companies. Maybe Sangoma or other outside companies/groups can run GoFundMe development projects for locally hosted AI integration with FreePBX. If anyone knows of another resource that shows an implementation like this, please post.

I don’t know if this is exactly what you want, but I made some custom scripts in Python that work using the Microsoft Speech Studio API, serving as a TTS engine, almost like an “AI”.

I use it in two scenarios:

  • An automatic schedule confirmation (FreePBX takes the appointments by calling the API used in the customer’s system, stores them in its database, and calls them; when they answer, it plays audio automatically generated by Azure and offers an IVR to confirm or cancel the appointment)

  • An IVR that asks for the customer’s name if the number is not saved in the FOP2 contacts, records the audio, sends it to Azure STT, takes the name, and saves the customer’s contact in the FOP2 contact list, in addition to setting the CallerID name
    The next time the customer calls, the IVR speaks their name when answered:
    “Hi (NAME), welcome to Enterprise Corporation Inc”

This “automation” was done largely using ODBC functions, Python scripts, and custom Asterisk dialplans; I can share it if you are interested.
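To make that concrete, here is a minimal sketch of the TTS half using the Azure Speech SDK to render a prompt into an 8 kHz mono WAV that Asterisk can play back; the key, region, voice and output path are placeholders, and the real scripts described above may differ:

    import azure.cognitiveservices.speech as speechsdk  # pip install azure-cognitiveservices-speech

    # Placeholder credentials -- substitute your own Azure Speech resource key and region
    speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="eastus")
    speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"
    # Asterisk's native wav format is 8 kHz, 16-bit, mono PCM
    speech_config.set_speech_synthesis_output_format(
        speechsdk.SpeechSynthesisOutputFormat.Riff8Khz16BitMonoPcm
    )

    # Write the prompt somewhere the dialplan can Playback() it (Playback takes the name without extension)
    audio_config = speechsdk.audio.AudioOutputConfig(
        filename="/var/lib/asterisk/sounds/custom/confirm-appointment.wav"
    )
    synthesizer = speechsdk.SpeechSynthesizer(
        speech_config=speech_config, audio_config=audio_config
    )
    result = synthesizer.speak_text_async(
        "Hi John, this is a reminder of your appointment tomorrow at ten. "
        "Press 1 to confirm or 2 to reschedule."
    ).get()
    print(result.reason)

The dialplan can then Playback(custom/confirm-appointment) and collect a digit with Read() or an IVR, which is roughly the confirm/cancel flow described above.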

I think many people would find it useful to see how you got that set up. To reduce any delay in speech recognition, have you found a way to build an all-local setup without Azure or the cloud? For example, a locally hosted LLM (on a fast computer) with its own API key that could be used, like Jan.ai or Chat with RTX, or a way that you can ask a verbal question and then the local LLM responds via speech-to-text/text-to-speech.
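For illustration, the kind of local call I am imagining is something like this sketch, assuming the model is served through an OpenAI-compatible endpoint (Jan.ai and similar local servers typically expose one); the URL, port and model name are placeholders:

    import requests

    # Hypothetical local endpoint -- adjust URL, port and model name for your own server
    LOCAL_LLM_URL = "http://127.0.0.1:1337/v1/chat/completions"

    def ask_local_llm(transcribed_question: str) -> str:
        """Send the caller's transcribed question to the local model and return its reply."""
        resp = requests.post(
            LOCAL_LLM_URL,
            json={
                "model": "local-model",   # placeholder model identifier
                "messages": [
                    {"role": "system", "content": "You are a phone assistant. Keep answers short."},
                    {"role": "user", "content": transcribed_question},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    # The reply would then be fed to a local TTS step and played back to the caller
    print(ask_local_llm("What are your business hours?"))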

PB&J

Anyone here implement Vosk ASR (VOSK Offline Speech Recognition API)?
I have a PBXact system, so I cannot do real-time voice recognition on that hardware; I am not sure if the voice stream can be sent to a locally hosted computer running this Vosk ASR, or even use OpenAI’s Whisper.

I can run Whisper on a local machine, but that just reads a WAV or audio file. How do I send the Asterisk call to Whisper?
Has anyone here done that before?
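What I can do so far is batch transcription of a finished recording, roughly this sketch with the openai-whisper package (the recording path is a placeholder for a file written by MixMonitor() or similar):

    import whisper  # pip install -U openai-whisper (also needs ffmpeg on the box)

    # Placeholder path -- MixMonitor()/Monitor() in the dialplan can write recordings here
    RECORDING = "/var/spool/asterisk/monitor/after-hours-call.wav"

    # Load a local model once; tiny/base/small/medium/large trade speed for accuracy
    model = whisper.load_model("base")

    # Batch transcription of the finished recording -- this runs after the call, not in real time
    result = model.transcribe(RECORDING)
    print(result["text"])

So the missing piece for me is getting the live call audio (or at least the finished recording, automatically) from Asterisk to that script.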

It is trivial to use either locally on hardware (or in a Docker container). I use one instance in the cloud for many FreePBXes, as it likes lots of memory but uses very little bandwidth or CPU cycles. To do that, you need to contrive to compile

res_speech_vosk.so

against the Asterisk you are running; you will need the headers and compile flags that were used. You should ask the vendor for the flags, but it might be simpler to compile your own Asterisk.