lzma_filters_free() sets the options to NULL and the ids to
LZMA_VLI_UNKNOWN, so the caller doesn't need to do it;
the filter arrays will always be left in a safe state.
Also use memcpy() instead of a loop to copy a filter chain
when it is known to be safe to copy LZMA_FILTERS_MAX + 1 elements
(even if the elements past the terminator might be uninitialized).
This time it can happen when lzma_stream_encoder_mt() is used
to reinitialize an existing multi-threaded Stream encoder
and one of the 1-4 tiny allocations in lzma_filters_copy() fails.
It's very similar to the previous bug
10430fbf38, happening with
an array of lzma_filter structures whose old options are freed
but the replacement never arrives due to a memory allocation
failure in lzma_filters_copy().
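A minimal sketch of the safe-state guarantee described above, assuming
a simple LZMA2-only chain (the function and variable names are
illustrative, not from the codebase):

    #include <assert.h>
    #include <lzma.h>

    /* After lzma_filters_free(), every entry is reset to
     * .id = LZMA_VLI_UNKNOWN and .options = NULL, so the caller can
     * reuse or free the array again without clearing it manually. */
    static void
    copy_and_free_example(void)
    {
        lzma_options_lzma opt;
        if (lzma_lzma_preset(&opt, LZMA_PRESET_DEFAULT))
            return;

        const lzma_filter src[LZMA_FILTERS_MAX + 1] = {
            { .id = LZMA_FILTER_LZMA2, .options = &opt },
            { .id = LZMA_VLI_UNKNOWN, .options = NULL },
        };

        lzma_filter copy[LZMA_FILTERS_MAX + 1];
        if (lzma_filters_copy(src, copy, NULL) != LZMA_OK)
            return;

        lzma_filters_free(copy, NULL);
        assert(copy[0].id == LZMA_VLI_UNKNOWN && copy[0].options == NULL);
    }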
The documentation mentions that lzma_block_encoder() supports
LZMA_SYNC_FLUSH but it was never added to supported_actions[]
in the internal structure. Because of this, LZMA_SYNC_FLUSH could
not be used with the Block encoder unless it was the next coder
after something like stream_encoder() or stream_encoder_mt().
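A hedged sketch of what should now work, assuming `block' has already
been set up for lzma_block_encoder() (version, check, filter chain);
error handling is kept minimal:

    #include <lzma.h>

    /* Request a sync flush directly from a standalone Block encoder.
     * Before the fix this action was rejected because it was missing
     * from supported_actions[]. */
    static lzma_ret
    block_sync_flush(lzma_stream *strm, lzma_block *block)
    {
        lzma_ret ret = lzma_block_encoder(strm, block);
        if (ret != LZMA_OK)
            return ret;

        /* ... feed input with lzma_code(strm, LZMA_RUN) as usual ... */

        return lzma_code(strm, LZMA_SYNC_FLUSH);
    }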
The bug was in the single-threaded .xz Stream encoder
in the code that is used for both re-initialization and for
lzma_filters_update(). To trigger it, an application had
to either re-initialize an existing encoder instance with
lzma_stream_encoder() or use lzma_filters_update(), and
then one of the 1-4 tiny allocations in lzma_filters_copy()
(called from stream_encoder_update()) had to fail. An error
was correctly reported but the encoder state was corrupted.
This is related to the recent fix in
f8ee61e74e, which is good but
wasn't enough to fix the main problem in stream_encoder.c.
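A hedged sketch of the two call paths mentioned above; `filters' stands
for any properly terminated filter array and the function names are
illustrative:

    #include <lzma.h>

    /* Path 1: re-initialize an existing encoder instance. */
    static lzma_ret
    reuse_encoder(lzma_stream *strm, const lzma_filter *filters)
    {
        return lzma_stream_encoder(strm, filters, LZMA_CHECK_CRC64);
    }

    /* Path 2: change the filter chain of a live encoder, normally
     * after a flush so the new chain starts at a clean point. Both
     * paths go through stream_encoder_update(), where the tiny
     * allocations in lzma_filters_copy() may fail. */
    static lzma_ret
    change_filters(lzma_stream *strm, const lzma_filter *filters)
    {
        return lzma_filters_update(strm, filters);
    }

With the fix, such a failure is still reported as an error but the
encoder is left in a consistent state.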
The encoder doesn't support dictionary sizes larger than 1536 MiB.
This is validated, for example, when calculating the memory usage
via lzma_raw_encoder_memusage(). It is also enforced by the LZ
part of the encoder initialization. However, the LZMA encoder with
LZMA_MODE_NORMAL did an unsafe calculation with dict_size before
such validation, which resulted in an infinite loop if dict_size
was 2 GiB (2 << 30 bytes) or greater.
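A hedged sketch of how an application can hit that validation before
encoder initialization, assuming a raw LZMA2 chain (illustrative names):

    #include <lzma.h>
    #include <stdint.h>

    /* lzma_raw_encoder_memusage() returns UINT64_MAX for unsupported
     * options, which per the text above includes a dict_size over the
     * 1536 MiB limit. */
    static lzma_ret
    init_raw_lzma2(lzma_stream *strm, lzma_options_lzma *opt)
    {
        const lzma_filter filters[] = {
            { .id = LZMA_FILTER_LZMA2, .options = opt },
            { .id = LZMA_VLI_UNKNOWN, .options = NULL },
        };

        if (lzma_raw_encoder_memusage(filters) == UINT64_MAX)
            return LZMA_OPTIONS_ERROR;

        return lzma_raw_encoder(strm, filters);
    }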
This reverts commit 177bdc922c
and also makes the equivalent change to arm64.c.
Now that the ARM64 filter will use lzma_options_bcj, this change
is not needed anymore.
It also works on E2K as that architecture supports these intrinsics.
On x86-64 runtime detection is used so the code keeps working on
older processors too. A CLMUL-only build can be done by using
-msse4.1 -mpclmul in CFLAGS and this will reduce the library
size since the generic implementation and its 8 KiB lookup table
will be omitted.
On 32-bit x86 this isn't used by default for now because the
separate assembly file crc64_x86.S is used there by default.
If --disable-assembler is used then this new CLMUL code is used
the same way as on 64-bit x86. However, a CLMUL-only build
(-msse4.1 -mpclmul) won't omit the 8 KiB lookup table on
32-bit x86 due to a currently-missing check for disabled
assembler usage.
The configure.ac check should be such that the code won't be
built if something in the toolchain doesn't support it, but the
--disable-clmul-crc option can be used to unconditionally
disable this feature.
CLMUL speeds up decompression of files that have compressed very
well (assuming CRC64 is used as the check type). It is known that
the CLMUL code is significantly slower than the generic code for
tiny inputs (especially 1-8 bytes but up to 16 bytes). If that
is a real-world problem then there is already a commented-out
variant that uses the generic version for small inputs.
Thanks to Ilya Kurdyukov for the original patch which was
derived from a white paper from Intel [1] (published in 2009)
and public domain code from [2] (released in 2016).
[1] https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
[2] https://github.com/rawrunprotected/crc
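A small usage sketch: the runtime dispatch is invisible to callers of
the public CRC API, so existing code benefits automatically (the helper
name is made up):

    #include <lzma.h>
    #include <stddef.h>
    #include <stdint.h>

    /* lzma_crc64() picks the CLMUL or the generic implementation
     * internally; the interface and the results are identical. Pass 0
     * as the crc argument for the first buffer and feed the previous
     * return value back in to continue over more data. */
    static uint64_t
    crc64_of_buffer(const uint8_t *buf, size_t size, uint64_t crc)
    {
        return lzma_crc64(buf, size, crc);
    }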
This uses it for CRC table initializations when using --disable-small.
It avoids mythread_once() overhead. It also means that
a --disable-small --disable-threads build is then thread-safe
if this attribute is supported.
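The attribute isn't named in the text above; assuming it refers to
GCC/Clang's constructor attribute, a simplified sketch of the pattern
(a plain single-level CRC-64/XZ table, not liblzma's real table layout)
could look like this:

    #include <stdint.h>

    static uint64_t crc64_table[256];

    /* Runs once at load time, before main(), so no mythread_once()
     * call is needed on the hot path. */
    __attribute__((__constructor__))
    static void
    crc64_init_table(void)
    {
        for (uint32_t i = 0; i < 256; ++i) {
            uint64_t r = i;
            for (int j = 0; j < 8; ++j)
                r = (r >> 1)
                    ^ ((r & 1) ? UINT64_C(0xC96C5795D7870F42) : 0);

            crc64_table[i] = r;
        }
    }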
__SSE2__ is the correct macro for SSE2 support with GCC, Clang,
and ICC. __SSE2_MATH__ means that floating point math is done with
SSE2 instead of 387. Often the latter macro is defined when the first
one is, but it was still a bug.
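A minimal illustration of the difference (MY_HAVE_SSE2 is a made-up
macro, not something from the codebase):

    /* __SSE2__ answers the relevant question: may SSE2 instructions
     * and intrinsics be used at all? __SSE2_MATH__ only tells how
     * floating point math is done, which is a different question. */
    #if defined(__SSE2__)
    #   include <emmintrin.h>
    #   define MY_HAVE_SSE2 1
    #else
    #   define MY_HAVE_SSE2 0
    #endif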
In practice this means making the scripts work when the input
files have an unsupported check type, which isn't a problem
unless support for some check types has been disabled at
build time.
Modern 32-bit ARM processors in big endian mode still use little
endian for instruction encoding, so the filters work on such
executables too. It's likely less confusing for users this way.
The --arm64 option hasn't been implemented yet (there is
--experimental-arm64 but it's different). The --arm64 option
is added now anyway because this is the likely result and the
strings need to be ready for translators.
Thanks to Jia Tan.
If configured with --disable-lzip-decoder then --long-help will
still list `lzip' in --format, but I left it like that since,
due to translations, it would be messy to have two help strings.
Features are disabled only in special situations, so wrong help
text in such a situation shouldn't matter much.
Thanks to Michał Górny for the original patch.
Support for format version 0 was removed from lzip 1.18 for some
reason. .lz format version 0 files are rare (and old) but some
source packages were released in this format, and some people might
have personal files in this format too. It's very little extra code
to support it alongside format version 1, so this commit adds
support for both.
The Sync Flush marker extension to the original .lz format
version 1 isn't supported. It would require changes to the
LZMA decoder itself. Such files are very rare anyway.
See the API doc for lzma_lzip_decoder() for more details about
the .lz format support.
Thanks to Michał Górny for the original patch.
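A minimal usage sketch (not from the patch), assuming no memory limit
is wanted and concatenated .lz streams should be handled:

    #include <lzma.h>
    #include <stdint.h>

    /* After this call, lzma_code() is used exactly as with the .xz
     * decoders. Both .lz format versions 0 and 1 are accepted. */
    static lzma_ret
    init_lzip_decoder(lzma_stream *strm)
    {
        return lzma_lzip_decoder(strm, UINT64_MAX, LZMA_CONCATENATED);
    }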
"xz -v < regular_file > out.xz" doesn't display the percentage
and estimated remaining time because it doesn't even try to
check the input file size when input is read from stdin.
This could be improved but for now there's just a comment
to remind about it.
It worked for one input file since the counters are zero when
xz starts but they weren't reset when starting a new file in
passthru mode. For example, if files A, B, and C are one byte each,
then "xz -dcvf A B C" would show file sizes as 1, 2, and 3 bytes
instead of 1, 1, and 1 byte.
This affects lzma_memusage() and lzma_memlimit_set() when used
with the threaded decompressor. Now all allocations are reported
by lzma_memusage() (so it's not misleading) and lzma_memlimit_set()
cannot lower the limit below that value.
The alternative would have been to allow lowering the limit if
doing so is possible by freeing the cached memory but since
the primary use case of lzma_memlimit_set() is to increase
memlimit after LZMA_MEMLIMIT_ERROR this simple approach
was selected.
The cached memory was always included when enforcing
the memory usage limit while decoding.
Thanks to Jia Tan.
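A hedged sketch of the intended pattern with the threaded decoder
(the helper name is illustrative):

    #include <lzma.h>
    #include <stdint.h>

    /* On LZMA_MEMLIMIT_ERROR, raise the limit and continue decoding.
     * lzma_memusage() now reports all allocations including the cache,
     * and lzma_memlimit_set() won't go below that value. */
    static lzma_ret
    raise_memlimit(lzma_stream *strm, uint64_t new_limit)
    {
        const uint64_t in_use = lzma_memusage(strm);
        if (new_limit < in_use)
            new_limit = in_use;

        return lzma_memlimit_set(strm, new_limit);
    }

After the limit has been raised, the same lzma_code() loop can simply
be continued.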
Don't call InitOnceComplete() if initialization was already done.
So far mythread_once() has been needed only when building
with --enable-small. windows/build.bash does this together
with --disable-threads so the Vista-specific mythread_once()
is never needed by those builds. The VS project files and the
CMake build don't support HAVE_SMALL builds at all.
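A hedged sketch of the corrected Vista-threads pattern (the names here
are illustrative, not copied from mythread.h):

    #define _WIN32_WINNT 0x0600  /* InitOnce* needs Vista or later */
    #include <windows.h>
    #include <stdlib.h>

    static INIT_ONCE my_once = INIT_ONCE_STATIC_INIT;

    /* Run init() exactly once. InitOnceComplete() is called only by
     * the thread that actually performed the initialization, that is,
     * only when fPending came back TRUE. */
    static void
    run_once(void (*init)(void))
    {
        BOOL pending;
        if (!InitOnceBeginInitialize(&my_once, 0, &pending, NULL))
            abort();

        if (pending) {
            init();
            if (!InitOnceComplete(&my_once, 0, NULL))
                abort();
        }
    }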
Example:
$ xz -dc --single-stream good-0-empty.xz
xz: good-0-empty.xz: Internal error (bug)
The code that tries to catch some input file issues early
didn't anticipate LZMA_STREAM_END, which is possible in that
code only when --single-stream is used.
Now files with an unsupported check type will make xz display
a warning, set the exit status to 2 (unless --no-warn is used),
and then decompress the file normally. This is how it was
supposed to work since the beginning but this was broken by
the commit 231c3c7098, that is,
a little before 5.0.0 was released. The buggy behavior displayed
a message, set the exit status to 1 (error), and xz didn't attempt
to decompress the file.
This doesn't matter today except for special builds that disable
CRC64 or SHA-256 at build time (but such builds should be used
in special situations only). The bug matters if a new check type
is added in the future and an old xz version is used to decompress
such a file; however, it's likely that such files would use a new
filter too and an old xz wouldn't be able to decompress the file
anyway.
The first hunk in the commit is the actual fix. The second hunk
is a cleanup since LZMA_TELL_ANY_CHECK isn't used in xz.
There is a test file for unsupported check type but it wasn't
used by test_files.sh, perhaps due to different behavior between
xz and the simpler xzdec.
Treating it as a warning (message + exit status 2) matches gzip,
and it seems more logical since at that point the output file has
already been successfully closed. When it's a warning, it is
possible to suppress it with --no-warn.
On OpenBSD the number of cores online is often less
than what HW_NCPU would return because OpenBSD disables
simultaneous multi-threading (SMT) by default.
Thanks to Christian Weisgerber.
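The text above doesn't name the replacement, so the following is only
a sketch of querying the online CPU count, with HW_NCPUONLINE as this
sketch's own assumption:

    #include <sys/types.h>
    #include <sys/sysctl.h>

    /* Prefer the online CPU count so that SMT siblings disabled by
     * OpenBSD aren't counted. HW_NCPUONLINE is an assumption here,
     * not taken from the text above. */
    static int
    online_cpu_count(void)
    {
        int n = 1;
        size_t size = sizeof(n);
    #ifdef HW_NCPUONLINE
        int mib[2] = { CTL_HW, HW_NCPUONLINE };
    #else
        int mib[2] = { CTL_HW, HW_NCPU };
    #endif
        if (sysctl(mib, 2, &n, &size, NULL, 0) == -1 || n < 1)
            n = 1;

        return n;
    }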
When encoders were disabled and threading was enabled, outqueue.c
and outqueue.h were not compiled. The multi-threaded decoder requires
these files, so compilation failed.