Tiles no longer hog the queue ahead of all else.
We now give priority to callback events, so clients
learn the document state sooner.
Since tiles take a long time to render, non-tile
messages get an equal share of time (capped at 100ms).
Finally, Impress preview tiles are given
the lowest priority and rendered only when
the queue is drained.
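A minimal sketch of the resulting de-queueing policy, assuming a
plain std::deque of protocol messages; the predicates and the way
the 100ms budget is tracked are illustrative, not the actual
implementation:

    #include <chrono>
    #include <deque>
    #include <string>

    using Item = std::string;

    static bool isCallback(const Item& m) { return m.rfind("callback", 0) == 0; }
    static bool isTile(const Item& m)     { return m.rfind("tile", 0) == 0; }
    static bool isPreviewTile(const Item& m) // hypothetical marker for Impress previews
    { return isTile(m) && m.find(" id=") != std::string::npos; }

    static Item takeAt(std::deque<Item>& q, std::deque<Item>::iterator it)
    {
        Item item = std::move(*it);
        q.erase(it);
        return item;
    }

    // Precondition: q is not empty.
    Item getNext(std::deque<Item>& q, std::chrono::milliseconds nonTileBudget)
    {
        // 1. Callback events jump the queue so clients learn the
        //    document state sooner.
        for (auto it = q.begin(); it != q.end(); ++it)
            if (isCallback(*it))
                return takeAt(q, it);

        // 2. While the non-tile time budget (capped at 100ms) lasts,
        //    prefer non-tile messages over tiles.
        if (nonTileBudget > std::chrono::milliseconds::zero())
            for (auto it = q.begin(); it != q.end(); ++it)
                if (!isTile(*it))
                    return takeAt(q, it);

        // 3. Regular tiles next; Impress preview tiles are served only
        //    when nothing else is left in the queue.
        for (auto it = q.begin(); it != q.end(); ++it)
            if (!isPreviewTile(*it))
                return takeAt(q, it);

        return takeAt(q, q.begin());
    }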
Change-Id: I922c1e11200e5675f50d86b83baee1588cbbf66f
Reviewed-on: https://gerrit.libreoffice.org/31394
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
When the pipe to wsd is closed we
assume wsd has died and we terminate too.
WSD can fork us anew, if it's still alive.
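A minimal sketch of the check, assuming the child reads its end of
the pipe (the descriptor name and exit path are assumptions):

    #include <unistd.h>
    #include <cstdlib>

    void checkWsdPipe(int wsdPipeFd)
    {
        char buffer[4096];
        const ssize_t n = read(wsdPipeFd, buffer, sizeof(buffer));
        if (n <= 0)
        {
            // 0: wsd closed its end (we assume it died); < 0: read error.
            // Terminate; wsd can fork a new child if it is still alive.
            std::_Exit(EXIT_FAILURE);
        }
        // ... otherwise handle the n bytes read ...
    }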
Change-Id: I669ed717db973b50498a6bc08e1fca59c6563443
Reviewed-on: https://gerrit.libreoffice.org/31337
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
Setting 'n = -1;' helps detect where the failure happened
when receiveFrame throws. At the bottom of the function we log
partially processed data by checking n (among other values).
This reverts commit 752372a2b0.
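The restored pattern looks roughly like this (variable names are
illustrative and the final log call is paraphrased):

    #include <Poco/Net/WebSocket.h>
    #include <iostream>

    void handleFrame(Poco::Net::WebSocket& ws)
    {
        char buffer[1024];
        int flags = 0;
        int n = -1; // sentinel: stays -1 if receiveFrame throws
        try
        {
            n = ws.receiveFrame(buffer, sizeof(buffer), flags);
            // ... process the n bytes received ...
        }
        catch (const std::exception&)
        {
            // fall through so the diagnostics below still run
        }

        // n == -1 here means receiveFrame itself failed, as opposed to
        // a frame that was received but only partially processed.
        std::cerr << "frame handled: n=" << n << " flags=" << flags << '\n';
    }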
Change-Id: I3294329c3d95b38d72c3fc824ab2eb7f2339adee
Reviewed-on: https://gerrit.libreoffice.org/31339
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
We need a fast and good (high avalanche properties)
hash function for the png caching to avoid collisions
(even in the very limited samples we have, since tiles
are likely to have patterns, such as all 0's and all 1's,
etc.).
Bob Jenkins's public-domain SpookyV2 is used here.
It has great avalanche properties and is fast, at
~3 bytes/cycle for large messages.
Only trailing whitespace was removed from the original
sources, and tabs were converted to 4 spaces.
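For reference, deriving a tile cache key from the rendered pixels
would look something like the sketch below (the call site and seed
are assumptions; Hash64 is SpookyV2's public API):

    #include "SpookyV2.h"
    #include <cstdint>
    #include <vector>

    // Returns a 64-bit key for the png cache. SpookyHash's avalanche
    // behavior makes near-identical tiles (e.g. all-white vs. one
    // changed pixel) map to unrelated keys.
    uint64_t hashTilePixels(const std::vector<unsigned char>& pixels)
    {
        return SpookyHash::Hash64(pixels.data(), pixels.size(), /*seed*/ 0);
    }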
Change-Id: Ife57237321625c836d85c894d939fd04a8f577bb
Reviewed-on: https://gerrit.libreoffice.org/31292
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
When receiving large messages the dynamic socket buffer
is resized to accommodate the incoming message.
Previously this buffer was never reduced in size,
which effectively leaked memory; that is now avoided.
When the buffer grows beyond quadruple the default
size, it is shrunk back to the default. This gives
a decent balance between memory waste and unnecessary
resizing up and down after each large message received.
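A minimal sketch of the shrink heuristic, assuming the buffer is a
std::vector; the default-size constant is named for illustration:

    #include <vector>

    constexpr std::size_t DefaultBufferSize = 4 * 1024; // illustrative value

    void maybeShrink(std::vector<char>& buffer)
    {
        // After a large message has been handled, give the memory back
        // if the buffer grew beyond quadruple the default; otherwise
        // keep it to avoid resizing up and down on every large message.
        if (buffer.capacity() > 4 * DefaultBufferSize)
            std::vector<char>(DefaultBufferSize).swap(buffer);
    }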
Change-Id: Ic3996c80e96592af6f12c4abd1dd737bdc07b814
Reviewed-on: https://gerrit.libreoffice.org/31287
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
Messages larger than a certain size are preceded
by a 'nextmessage' message that holds the size
of the subsequent message.
This is a workaround for a limitation of the Poco
WebSocket API where, if the buffer size is
smaller than the received frame, the socket
ends up in a bad state and must be closed.
Unfortunately the new API that avoids this
workaround is not yet released by Poco.
Here we minimize the need for 'nextmessage'
to truly large messages. The limit is now
raised from just over 1KB to over 63KB.
We may raise this limit further, but that will
cost each socket that much dedicated buffer size.
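On the sending side the workaround amounts to something like the
sketch below (the constant, helper name and exact 'nextmessage'
wording are illustrative; sendFrame is Poco's real API):

    #include <Poco/Net/WebSocket.h>
    #include <string>

    constexpr std::size_t LargeMessageLimit = 63 * 1024; // illustrative threshold

    void sendTextMessage(Poco::Net::WebSocket& ws, const std::string& message)
    {
        if (message.size() > LargeMessageLimit)
        {
            // Tell the peer how big the next frame is so it can grow its
            // receive buffer before Poco's receiveFrame is called.
            const std::string preamble =
                "nextmessage: size=" + std::to_string(message.size());
            ws.sendFrame(preamble.data(), static_cast<int>(preamble.size()));
        }
        ws.sendFrame(message.data(), static_cast<int>(message.size()));
    }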
Change-Id: I01e4c68cdbe67e413c04a9725152224a87ab8267
Reviewed-on: https://gerrit.libreoffice.org/31286
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>