# Import Filters for LibreOffice Writer

The writerfilter module contains the import filters for Writer, using its UNO API: the import filters for DOCX and RTF.
## Module contents

* `documentation`: RNG schema for the OOXML tokenizer, etc.
* `inc`: module-global headers (can be included by any files under `source`)
* `qa`: `cppunit` tests
* `source`: the filters themselves
* `util`: UNO passive registration config
## Source contents

* `dmapper`: the domain mapper, hiding UNO from the tokenizers; used by both the DOCX and the RTF import
  * In `dbgutil` builds, the incoming traffic of `dmapper` can be dumped into an XML file under `/tmp`; start soffice with the `SW_DEBUG_WRITERFILTER=1` environment variable if you want that (see the example after this list).
* `filter`: the UNO filter service implementations, invoked by UNO; they call the dmapper plus one of the tokenizers
* `ooxml`: the DOCX tokenizer
* `rtftok`: the RTF tokenizer
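
For example, to capture the domain mapper traffic while importing a document, a minimal sketch (assuming a `dbgutil` build on Linux, `soffice` on your `PATH`, and a hypothetical input file `sample.docx`):

```sh
# Set the variable for this invocation only; during the import the
# dmapper traffic is dumped as an XML file under /tmp.
SW_DEBUG_WRITERFILTER=1 soffice sample.docx
```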