When two sockets send data to each other in blocking
mode, each can block until the other end-point's
buffers have enough free space to receive the data
being sent. Since LOOLWebSocket guards both send and
receive with the same lock, this prevents the reader
thread from draining the buffer while we are trying
to send. But our peer is in the same dilemma, so
neither side makes progress: a deadlock.

Sockets are full-duplex and can handle two-way
communication concurrently, and Poco does not appear
to share state between the send and receive paths,
so locking them separately seems safe.
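A minimal sketch of the locking split, assuming a thin wrapper
over Poco::Net::WebSocket (the class and member names below are
illustrative, not the actual LOOLWebSocket code):

    #include <mutex>
    #include <Poco/Net/WebSocket.h>

    class DuplexLockedWebSocket
    {
        Poco::Net::WebSocket _ws;
        std::mutex _readMutex;   // taken by the reader thread only
        std::mutex _writeMutex;  // taken by senders only

    public:
        explicit DuplexLockedWebSocket(const Poco::Net::WebSocket& ws)
            : _ws(ws)
        {
        }

        int sendFrame(const void* buffer, int length,
                      int flags = Poco::Net::WebSocket::FRAME_TEXT)
        {
            // A sender blocked on the peer's full buffer no longer
            // holds the lock the reader thread needs to drain ours.
            std::lock_guard<std::mutex> lock(_writeMutex);
            return _ws.sendFrame(buffer, length, flags);
        }

        int receiveFrame(void* buffer, int length, int& flags)
        {
            std::lock_guard<std::mutex> lock(_readMutex);
            return _ws.receiveFrame(buffer, length, flags);
        }
    };

With separate mutexes a blocked sendFrame() can no longer starve
receiveFrame(), which is what breaks the cycle described above.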
Change-Id: I1fd68cd4fb3b4250b93c8f94cd42e49ee78f6650
Reviewed-on: https://gerrit.libreoffice.org/34194
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>
Which is dangerous in general, but here the values are from
non-overlapping ranges. Make it a bit more explicit that this is not an
accident.
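One way to make such an assumption explicit, shown here only as a
hypothetical sketch (the enums and constants below are not the ones
touched by this change):

    // Purely illustrative values from two non-overlapping ranges.
    enum class LowRange  : int { A = 0,   B = 1   }; // 0..99
    enum class HighRange : int { X = 100, Y = 101 }; // 100 and up

    // State the non-overlap assumption in code instead of leaving
    // it implicit.
    static_assert(static_cast<int>(LowRange::B) <
                      static_cast<int>(HighRange::X),
                  "LowRange and HighRange values must not overlap");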
Change-Id: I56897473a755e28cd9e7f74ceacecbab2db5e829
The initial approach was to set it when compiling with
--enable-debug, or otherwise when the unit tests are
running. But since we never run the unit tests without
--enable-debug anyway, the #else code block serves no
purpose.
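A hedged sketch of the pattern being removed (the macro, flag and
helper names here are illustrative, not the real ones):

    #include <cstdio>

    static bool isRunningUnitTests() { return false; } // illustrative stub

    static bool testHooksEnabled()
    {
    #if ENABLE_DEBUG
        return true;
    #else
        // Dead in practice: unit tests only run from --enable-debug
        // builds, so this branch is never exercised and the #else
        // block can simply be dropped.
        return isRunningUnitTests();
    #endif
    }

    int main()
    {
        std::printf("test hooks: %s\n", testHooksEnabled() ? "on" : "off");
    }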
Change-Id: I9d7c66a18be160a47afd2bf336d89d50d4f2463e
Messages larger than a certain size are preceded by a
'nextmessage' preamble that holds the size of the
subsequent message.

This is a workaround for a limitation of the Poco
WebSocket API: if the receive buffer is smaller than
the incoming frame, the socket ends up in a bad state
and must be closed. Unfortunately, the new Poco API
that avoids the need for this workaround has not been
released yet.

Here we restrict the 'nextmessage' preamble to truly
large messages: the threshold is raised from just over
1KB to over 63KB. We may raise it further, but every
increase costs each socket that much dedicated buffer
space.
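A minimal sketch of the sending side under this scheme, assuming a
free helper around Poco::Net::WebSocket (the function name, the
threshold constant, and the exact preamble format are illustrative):

    #include <string>
    #include <Poco/Net/WebSocket.h>

    // Illustrative threshold; the real value tracks the socket's
    // dedicated read buffer size.
    constexpr std::size_t PreambleThreshold = 63 * 1024;

    // Send 'message', announcing its size first if it would overflow
    // the peer's receive buffer, so the peer can enlarge that buffer
    // before Poco reads the frame.
    void sendWithPreamble(Poco::Net::WebSocket& ws, const std::string& message)
    {
        if (message.size() > PreambleThreshold)
        {
            const std::string preamble =
                "nextmessage: size=" + std::to_string(message.size());
            ws.sendFrame(preamble.data(),
                         static_cast<int>(preamble.size()),
                         Poco::Net::WebSocket::FRAME_TEXT);
        }

        ws.sendFrame(message.data(), static_cast<int>(message.size()),
                     Poco::Net::WebSocket::FRAME_TEXT);
    }

The peer is then expected to parse the preamble and grow its receive
buffer to at least the announced size before reading the next frame.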
Change-Id: I01e4c68cdbe67e413c04a9725152224a87ab8267
Reviewed-on: https://gerrit.libreoffice.org/31286
Reviewed-by: Ashod Nakashian <ashnakash@gmail.com>
Tested-by: Ashod Nakashian <ashnakash@gmail.com>