tdf#164006: Only use original word's positions, ignore extra encoded length
The encoding of the string passed to the Hunspell/hyphen service depends on the encoding of the dictionary itself. When the usual UTF-8 encoding is used, the resulting octet string may be longer than the original UTF-16 code unit count, and in that case the buffer receiving the hyphen positions will be correspondingly longer. But on return, the buffer only contains data at positions corresponding to characters, not code units (it is unclear whether we even need to pass a buffer that large). So, just as the following loop only iterates up to the length of nWord, the hyphen count calculation must use its length too, not the length of encWord.

I suspect that using UTF-16 code units as hyphen positions is wrong; it will break on SMP characters, which are encoded as surrogate pairs. The proper fix would be to iterate code points, but I have no data to test with, so this is left as TODO/LATER.

Change-Id: Ieed5e696e03cb22e3b48fabc14537372bbe74363
Reviewed-on: https://gerrit.libreoffice.org/c/core/+/177077
Reviewed-by: Mike Kaganski <mike.kaganski@collabora.com>
Tested-by: Jenkins
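For illustration, a minimal standalone C++ sketch (not LibreOffice code; the sample word and variable names are invented) of why the UTF-8 octet count can exceed the UTF-16 code unit count, leaving trailing slots of a positions buffer sized from encWord unused:

    #include <iostream>
    #include <string>

    int main()
    {
        // Hypothetical sample word: "Mädchen" is 7 UTF-16 code units
        // but 8 UTF-8 octets ('ä' needs two bytes in UTF-8).
        std::u16string word = u"M\u00E4dchen";       // what nWord holds (UTF-16)
        std::string encWord8 = "M\xC3\xA4" "dchen";  // what a UTF-8 encWord would hold

        std::cout << "UTF-16 code units: " << word.size() << '\n';     // 7
        std::cout << "UTF-8 octets:      " << encWord8.size() << '\n'; // 8
        // A positions buffer sized from the octet string has a trailing slot
        // the hyphenator never fills; counting flags up to that size reads
        // data that was never written.
        return 0;
    }

Any C++11 compiler prints 7 versus 8 here, which is the mismatch the commit message describes.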
This commit is contained in:
parent 2909196239
commit 9c14ec81b6

1 changed file with 2 additions and 1 deletion
@@ -785,7 +785,8 @@ Reference< XPossibleHyphens > SAL_CALL Hyphenator::createPossibleHyphens( const
         sal_Int32 nHyphCount = 0;
 
-        for ( sal_Int32 i = 0; i < encWord.getLength(); i++)
+        // FIXME: shouldn't we iterate code points instead?
+        for (sal_Int32 i = 0; i < nWord.getLength(); i++)
         {
             if (hyphens[i]&1)
                 nHyphCount++;
         }
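The FIXME in the new code points at the deferred work. A minimal sketch in plain standard C++ (not the hyphenator code, and only an assumption about what the eventual fix would do) of how counting code points rather than UTF-16 code units treats an SMP character stored as a surrogate pair:

    #include <cstddef>
    #include <iostream>
    #include <string>

    int main()
    {
        // Hypothetical sample: U+1D49C (an SMP character) occupies two
        // UTF-16 code units (a surrogate pair) but is a single code point.
        std::u16string word = u"a\U0001D49Cb";

        std::size_t codePoints = 0;
        for (std::size_t i = 0; i < word.size(); ++i)
        {
            char16_t c = word[i];
            // A high surrogate followed by a low surrogate is one code point.
            if (c >= 0xD800 && c <= 0xDBFF && i + 1 < word.size()
                && word[i + 1] >= 0xDC00 && word[i + 1] <= 0xDFFF)
                ++i;
            ++codePoints;
        }

        std::cout << "UTF-16 code units: " << word.size() << '\n'; // 4
        std::cout << "Code points:       " << codePoints << '\n';  // 3
        return 0;
    }

Iterating per code unit would treat the two halves of the surrogate pair as separate hyphen positions, which is the breakage the commit message anticipates for SMP text.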