After upgrading to the new FreePBX 13 I’m getting this error:
Reload failed because retrieve_conf encountered an
exit: 255
Unable to continue. SQLSTATE[42S22]: Column not found: 1054 Unknown column ‘auth’ in ‘where clause’ in /var/www/html/admin/modules/userman/functions.inc/auth/Auth.php on line 314
Cyrillic letters would need the database to be in UTF-8 which is only available for new installs of FreePBX 13.
see:
That said I was able to convert my database to UTF-8 by doing this:
but this is unsupported and might not work as well for you as it did for me…
If you want to try this make a backup first…
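For the curious, the conversion basically boils down to changing the database default character set and converting each table. This is only a rough sketch (not necessarily the exact script I used), assuming the stock asterisk database and MySQL root access:

```
# Take a dump first so you can roll back
mysqldump -u root -p asterisk > asterisk-pre-utf8.sql

mysql -u root -p asterisk <<'SQL'
-- change the default used for new tables
ALTER DATABASE asterisk CHARACTER SET utf8 COLLATE utf8_general_ci;
-- convert the existing tables (repeat for everything SHOW TABLES lists)
ALTER TABLE users   CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
ALTER TABLE devices CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
SQL
```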
If you had somehow managed to type in Cyrillic letters before the upgrade (if your database was in ISO-8859-1 like mine, I doubt you did, since that encoding doesn’t support Cyrillic), they will have been corrupted and you will need to retype those entries…
I tested my database using Cyrillic, French and Romanian letters and everything seemed A-OK… I cannot say I tested all of the screens with all of those alphabets, but I made sure my post-conversion database supported characters which were already covered by ISO-8859-1 (like French) as well as characters that need either their own character set or UTF-8 (like Cyrillic and Romanian).
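If you want to double-check the conversion at the database level rather than through the GUI, something like this should do it (again assuming MySQL and the stock asterisk database; any table the second query lists is still not UTF-8):

```
mysql -u root -p asterisk -e "SHOW VARIABLES LIKE 'character_set_database';"
mysql -u root -p asterisk -e "SHOW TABLE STATUS WHERE Collation NOT LIKE 'utf8%';"
```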
The problem with the apostrophe I mentioned, and other similar issues, were later fixed by module updates.
I had a (different) problem which later required me to restore my database but I restored from a backup of that converted database and everything is still working perfectly.
I just tried again after reading your message to type in a few Cyrillic letters and it definitely seems to work perfectly…
Actually I did nothing special; Cyrillic characters worked perfectly in FreePBX 12.
Thank you Marbled, I understand your suggestions but I can’t connect to MySQL remotely using Navicat to run your scripts (I do have SSH access though). Could you be more specific about establishing remote access to the server, or about executing the MySQL commands via SSH?
Yes, I’m using the FreePBX Distro.
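You don’t need remote MySQL access or Navicat at all: SSH into the box and run the mysql client locally. Roughly like this (pbx.example.com and convert-to-utf8.sql are placeholders; if the MySQL root login isn’t open, the credentials FreePBX uses are typically in /etc/freepbx.conf as AMPDBUSER / AMPDBPASS):

```
# Log into the PBX over SSH from your workstation
ssh root@pbx.example.com

# On the PBX, open the MySQL client against the FreePBX database
# (on many Distro installs the MySQL root user has no password)
mysql asterisk

# Or run a whole saved script non-interactively
mysql asterisk < convert-to-utf8.sql
```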
I reran the suggested scripts via SSH, rebooted the server and it worked for me: Cyrillic is OK in the GUI.
But the CDR module still shows unreadable characters in both old and new records.
It looks like there’s a problem with displaying UTF-8 data from those tables…
I am not sure why I didn’t spot it before…
The data is properly encoded but it tries to display it as ISO-8859-1 as far as I can tell…
Andrew, even if Asterisk itself doesn’t quite comprehend this, couldn’t you force this to be shown as UTF-8? The data seems to be there and properly encoded, but apparently it’s processed as ISO-8859-1…
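For anyone who wants to see what is actually stored, the raw bytes are easy to pull straight out of the CDR database (this assumes the stock asteriskcdrdb database and the default cdr schema):

```
# How the CDR table is declared (character set / collation)
mysql -u root -p asteriskcdrdb -e "SHOW CREATE TABLE cdr\G"

# Raw bytes of the caller ID field for the most recent calls
mysql -u root -p asteriskcdrdb \
  -e "SELECT calldate, clid, HEX(clid) FROM cdr ORDER BY calldate DESC LIMIT 5;"
```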
OK, what seems to happen is that they are encoded twice or something similar…
I tried the following character, “ș”, which is used in Romanian and needs either UTF-8 or ISO-8859-16… (I could have tried Cyrillic but there were still some test entries left with that Romanian character and it was easier to start from there…)
In UTF-8 this character is encoded 0xC8 0x99 and this is what I see in the other tables.
In the CDR one though I see 0xC3 0x88 0xE2 0x84 0xA2…
What it seems to do is think that 0xC8 0x99 are two different characters and that they are in ISO-8859-1…
0xC8 = È
0x99 = ??? That’s not a printable ISO-8859-1 character; it’s actually Windows-1252, a non-standard superset of ISO-8859-1 invented by Microsoft, where it represents the character ™.
That È is later converted from ISO-8859-1 to UTF-8 so it gives 0xC3 0x88.
That ™ is later converted from ISO-8859-1 to UTF-8 so it gives 0xE2 0x84 0xA2.
So that explains that weird 0xC3 0x88 0xE2 0x84 0xA2 sequence instead of 0xC8 0x99… The string is actually encoded twice…
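The whole round trip is easy to reproduce from a shell with iconv, which here simply re-interprets the UTF-8 bytes of “ș” as Windows-1252 and re-encodes the result as UTF-8, exactly the corruption described above:

```
# "ș" in UTF-8 is 0xC8 0x99
printf 'ș' | xxd -p                                    # c899

# Misread those two bytes as Windows-1252 and re-encode as UTF-8
printf 'ș' | iconv -f WINDOWS-1252 -t UTF-8 | xxd -p   # c388e284a2 ("È™")

# Reversing the mistake recovers the original bytes
printf '\xc3\x88\xe2\x84\xa2' | iconv -f UTF-8 -t WINDOWS-1252 | xxd -p   # c899
```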
As to where this is done, I can’t say…
A workaround I could see is transliterating the Cyrillic entries (converting them from Cyrillic to Latin characters) before they hit the CDR database, but I do not know if that is doable, and there is actually more than one way to transliterate (it is most likely different from one Cyrillic-using language to another, and there is more than one language that uses Cyrillic).
Andrew, anything that comes to mind as to the reason of this double encoding?
For CDR entries that is all Asterisk; FreePBX does nothing. If Asterisk is inputting bad data then it is as I previously stated: Asterisk does not support UTF-8.
That commit doesn’t mean UTF-8 works. It just means that if the table is set to UTF-8, Asterisk won’t crash.
Unless you can prove otherwise, I still think Asterisk has issues with UTF-8. If it doesn’t, then great, but remember that FreePBX is not a middleman for CDRs: Asterisk writes directly to the database, not through FreePBX.
I was told by an Asterisk dev that it’s not supported by them, but that certain combinations of databases under certain conditions could support UTF-8; they have made no effort to make this work, though (I asked around on their IRC channel earlier today but went to sleep afterwards).
I wonder if keeping the database in ISO-8859-1/Latin-1 while somehow treating what is inside it as something else (which older FreePBX versions somehow did) might be what made this work in the past…
It looks like having the database in UTF-8 could be the cause of that re-encoding, and that Asterisk dev was pretty sure they weren’t doing any character manipulation of any kind…
I can see possible solutions to this problem, but I believe they all require some development work of some kind…