Still need to figure out how to ask the user where to save the
document and under what name when closing it.
Or actually, we should ask right away: as iOS apps are supposed to be
crash-proof, there shouldn't be any need for a separate "save" or
"close" operation by the user, right?
Change-Id: I6d6b9933f5e21f7793837c7ed65049b82853a183
It turns out that the view of the DocumentViewController object is
removed from the view hierarchy when the camera is displayed, and
re-added after you choose to use the taken photo. Thus the
viewWillAppear: method is called again at that stage. The Document
object is still quite intact, though. We should not call the Document
object's openWithCompletionHandler: method again, as that would cause
horrible brokenness.
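A minimal sketch of the kind of guard this implies, assuming the view
controller keeps the Document in a document property and tracks state
in a hypothetical documentOpen flag (the names are illustrative only):

    // Only open the document the first time the view appears, not when
    // it re-appears after the camera view has been dismissed.
    - (void)viewWillAppear:(BOOL)animated
    {
        [super viewWillAppear:animated];

        if (self.documentOpen)
            return; // Coming back from the camera; the document is still open.

        [self.document openWithCompletionHandler:^(BOOL success) {
            if (success)
                self.documentOpen = YES;
            // Otherwise report the failure to the user somehow.
        }];
    }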
Change-Id: Ib79bd8f292b01a19866278c4d95a2e816dcd9235
Even if the LO core code, as called by the Online code, has already
saved the document back to the file from which it was loaded, it is
essential to call saveToURL:forSaveOperation:completionHandler: in
order for a file provider extension like NextCloud to notice that. The
contentsForType:error: method can just return an NSFileWrapper for the
same, already saved, file, though.
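In a UIDocument subclass that could look roughly like this (a sketch,
assuming the core has already written the document to self.fileURL;
the document variable and the surrounding code are hypothetical):

    // contentsForType:error: just wraps the file the core has already
    // written at the document's fileURL.
    - (id)contentsForType:(NSString *)typeName error:(NSError **)errorPtr
    {
        return [[NSFileWrapper alloc] initWithURL:self.fileURL
                                          options:0
                                            error:errorPtr];
    }

    // After the core has saved, run the file through the UIDocument
    // machinery so that the file provider extension notices the change.
    [document saveToURL:document.fileURL
       forSaveOperation:UIDocumentSaveForOverwriting
      completionHandler:^(BOOL success) {
          // Nothing more to do in this sketch.
      }];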
Change-Id: Ic063c8603ca38930083866d973e500336cad517e
To get that with CoreGraphics on iOS we also need to use
kCGImageByteOrder32Little in the CGBitmapContextCreate() call,
otherwise the bytes will be in ARGB order in memory.
Also, yes, we do need to flip the coordinate system upside down so
that the origin is at the top left corner.
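For example (a sketch; pixelBuffer, tileWidth, and tileHeight are
placeholders, and the colour space handling is simplified):

    // Create a BGRA bitmap context over the pixel buffer. Without the
    // byte order flag the pixels end up as ARGB in memory.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context =
        CGBitmapContextCreate(pixelBuffer,
                              tileWidth, tileHeight,
                              8,             // bits per component
                              tileWidth * 4, // bytes per row
                              colorSpace,
                              kCGImageAlphaPremultipliedFirst |
                                  kCGImageByteOrder32Little);
    CGColorSpaceRelease(colorSpace);

    // Flip the coordinate system so that the origin is at the top left
    // corner instead of CoreGraphics' default bottom left.
    CGContextTranslateCTM(context, 0, tileHeight);
    CGContextScaleCTM(context, 1, -1);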
On iOS it shouldn't actually be a pointer to a pixel char buffer, but a
graphics context reference. (This is how it has been since the
experimental TiledLibreOffice app, maybe five years ago? Sadly it
wasn't documented in the LibreOfficeKit include file. But it is how
LibreOfficeLight used the API, too.)
In TiledLibreOffice we rendered tiles directly into the CALayer of the
view. In this Online-based app we of course do render tiles into pixel
char buffers, just like in real Online, but we need to create bitmap
graphics contexts for them and pass those to paintTile().
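Continuing the sketch above, the idea is roughly the following,
expressed against the raw LibreOfficeKit C interface (the actual call
site in the Online code is of course different, and pDocument and the
tile position and size variables are placeholders):

    // The pixel char buffer the tile ends up in, as in real Online.
    unsigned char *pixelBuffer = malloc(tileWidth * tileHeight * 4);

    // ... create 'context' over pixelBuffer as in the previous sketch ...

    // On iOS the pBuffer parameter of paintTile() carries the
    // CGContextRef, not the raw pixel buffer pointer.
    pDocument->pClass->paintTile(pDocument,
                                 (unsigned char *)context,
                                 tileWidth, tileHeight,
                                 tilePosX, tilePosY,
                                 tileWidthTwips, tileHeightTwips);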
Now I get white tiles, not totally zero-filled ones. But still no
document contents rendered.
I don't yet want to change the pBuffer parameter to actually be a
buffer pointer on iOS, too, like on other platforms. Also, changing it
would mean the LibreOfficeLight app would need changing, too, and I
don't feel like doing that. But ideally, sure, that should be done.
Re-think the plumbing between the different parts of the C++ Online
code. Try to make it work more like real Online at all but the lowest
socket level. Except that we don't have multiple processes, but
threads inside the same process. And instead of using actual system
sockets for WebSocket traffic between the threads, we use our own
FakeSocket things, with no WebSocket framing of messages.
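To make "no WebSocket framing" concrete: where real Online would wrap
each message in a WebSocket frame before writing it to a socket, here
the plain message bytes are handed over as-is between the threads. A
rough sketch, where the FakeSocket function names and signatures are
assumptions shaped after the POSIX calls, not taken from the actual
code:

    // Assumed, illustrative prototypes only:
    ssize_t fakeSocketWrite(int fd, const void *buf, size_t nbytes);
    ssize_t fakeSocketRead(int fd, void *buf, size_t nbytes);

    // One thread writes the plain message bytes: no handshake, no frame
    // header, no masking.
    const char *message = "ping"; // any protocol message, as plain text
    fakeSocketWrite(clientFd, message, strlen(message));

    // The thread at the other end of the fake socket pair reads the
    // same bytes back as one complete message.
    char buffer[1000];
    ssize_t n = fakeSocketRead(serverFd, buffer, sizeof(buffer));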
Also reduce the amount of #ifdef MOBILEAPP a bit by compiling in the
UnitFoo things. Hardcode that so that no unit testing is ever
attempted, though. We don't try to dlopen any library.
Corresponding changes in the app Objective-C code. Plus fixes and
functionality improvements.
Now it gets so far that the JavaScript code thinks it has the document
tiles presented, and doesn't crash. But it hangs occasionally. And all
tiles show up blank.
Anyway, progress.
Change-Id: I769497c9a46ddb74984bc7af36d132b7b43895d4
The app is unimaginatively called "Mobile" for now.
Runs but crashes pretty quickly after the document has been loaded by
the LO core. Will need some heavy changes to get a ClientSession object
created in there, too, to handle the (emulated) WebSocket messages
from the JavaScript. It would then handle some of these messages
itself, and forward others to the ChildSession, which in this case is
in the same process. Now the messages from the JavaScript go to a
ChildSession, which is wrong. As the assertion says, "Tile traffic
should go through the DocumentBroker-LoKit WS"