When the uncompressed size is known to be exact, exactly comp_size
bytes of input must have been consumed once the stream has been
decompressed.
This is a minor improvement to error detection.
The caller must still not specify an uncompressed size bigger
than the actual uncompressed size.
As a downside, this now needs the exact compressed size.
Right now this is just a planned extra-compact format for use
in the EROFS file system in Linux. At this point it's possible
that the format will either change or be abandoned and removed
completely.
The special thing about the encoder is that it uses the
output-size-limited encoding added in the previous commit.
EROFS uses fixed-sized blocks (e.g. 4 KiB) to hold compressed
data so the compressors must be able to create valid streams
that fill the given block size.
With this it is possible to encode LZMA1 data without EOPM so that
the encoder will encode as much input as it can without exceeding
the specified output size limit. The resulting LZMA1 stream will
be a normal LZMA1 stream without EOPM. The actual uncompressed size
will be available to the caller via the uncomp_size pointer.
One missing thing is that the LZMA layer doesn't inform the LZ layer
when the encoding is finished, and thus the LZ layer may read more input
even though it won't be used. However, this doesn't matter if encoding is
done with a single call (which is the planned use case for now).
For proper multi-call encoding this should be improved.
This commit only adds the functionality for internal use.
Nothing uses it yet.
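As a purely hypothetical sketch of the intended calling pattern
(encode_limited() below is an invented name; the real functionality
is internal and its interface may differ):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical interface, for illustration only; the real
 * functionality is internal and its interface may differ. */
extern size_t encode_limited(const uint8_t *in, size_t in_size,
        uint8_t *out, size_t out_size, uint64_t *uncomp_size);

/* Fill one fixed-size block (here 4096 bytes) with as much
 * compressed data as fits and return how many input bytes
 * were actually encoded into it. */
static uint64_t
fill_block(const uint8_t *in, size_t in_size, uint8_t *block_4k)
{
    uint64_t uncomp_size;
    size_t used = encode_limited(in, in_size, block_4k, 4096,
            &uncomp_size);

    /* block_4k[0 .. used - 1] is now a normal LZMA1 stream without
     * EOPM that decompresses to the first uncomp_size bytes of in. */
    (void)used;
    return uncomp_size;
}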
Before this commit all output queue buffers were allocated as
a single big allocation. Now each buffer is allocated separately
when needed. Used buffers are cached to avoid reallocation
overhead but the cache will keep only one buffer size at a time.
This should work well for decompression, where most of the time
the buffer sizes will be the same, but with some less common files
the buffer sizes may vary.
While this should work fine, it's still a bit preliminary
and may even get reverted if it turns out to be useless for
decompression.
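A minimal sketch of the caching idea (the names and details below
are illustrative, not the actual output queue code):

#include <stdint.h>
#include <stdlib.h>

/* One-slot cache: remember at most one free buffer and reuse it
 * only when the next request is for the same size. */
struct bufcache {
    uint8_t *buf;   /* cached free buffer or NULL */
    size_t size;    /* size of the cached buffer */
};

static uint8_t *
bufcache_get(struct bufcache *c, size_t size)
{
    if (c->buf != NULL && c->size == size) {
        uint8_t *b = c->buf;
        c->buf = NULL;
        return b;
    }

    return malloc(size);
}

static void
bufcache_put(struct bufcache *c, uint8_t *buf, size_t size)
{
    free(c->buf);   /* only the most recently used size is kept */
    c->buf = buf;
    c->size = size;
}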
When Intel CET is enabled, we need to include <cet.h> in assembly code
to mark Intel CET support and add _CET_ENDBR to indirect jump targets.
Tested on Intel Tiger Lake under CET-enabled Linux.
The comment didn't match the value of RC_SYMBOLS_MAX, and the value
itself was slightly larger than actually needed. The only harm
from this was that memory usage was a few bytes larger.
Using the aligned methods requires more care to ensure that
the address really is aligned, so it's nicer if the aligned
methods are prefixed. The next commit will remove the unaligned_
prefix from the unaligned methods which in liblzma are used in
more places than the aligned ones.
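For illustration, the difference between the two kinds of methods
looks roughly like this (a simplified sketch; the function names
below are made up and the real helpers also cover endianness
variants):

#include <stdint.h>
#include <string.h>

/* Safe for any address: memcpy() lets the compiler emit whatever
 * the target needs for a possibly misaligned 32-bit load. */
static inline uint32_t
read32ne_unaligned(const uint8_t *buf)
{
    uint32_t v;
    memcpy(&v, buf, sizeof(v));
    return v;
}

/* Valid only when buf really is 4-byte aligned, hence the value
 * of having a clear prefix on the aligned variants. */
static inline uint32_t
read32ne_aligned(const uint8_t *buf)
{
    return *(const uint32_t *)buf;
}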
LZMA_TIMED_OUT is used *internally* as a value of the lzma_ret
enumeration. Previously it was #defined to 32 and cast to lzma_ret.
That way it wasn't visible in the public API, but this was hackish.
Now the public API has eight LZMA_RET_INTERNALx members and
LZMA_TIMED_OUT is #defined to LZMA_RET_INTERNAL1. This way
the code is cleaner overall although the public API has a few
extra mysterious enum members.
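Sketched out, the arrangement is along these lines (the numeric
values are illustrative):

/* In the public API: reserved members whose meaning isn't part
 * of the stable interface. */
typedef enum {
    /* ... the normal members such as LZMA_OK and LZMA_STREAM_END,
     * then (values shown are illustrative): */
    LZMA_RET_INTERNAL1 = 101,
    LZMA_RET_INTERNAL2 = 102,
    /* ... up to ... */
    LZMA_RET_INTERNAL8 = 108
} lzma_ret;

/* In liblzma's internal headers: */
#define LZMA_TIMED_OUT LZMA_RET_INTERNAL1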
I should have always known this but I didn't. Here is an example
as a reminder to myself:
int mycopy(void *dest, void *src, size_t n)
{
    memcpy(dest, src, n);
    return dest == NULL;
}
In the example, a compiler may assume that dest != NULL because
passing NULL to memcpy() would be undefined behavior. Testing
with GCC 8.2.1, mycopy(NULL, NULL, 0) returns 1 with -O0 and -O1.
With -O2 the return value is 0 because the compiler infers that
dest cannot be NULL because it was already used with memcpy()
and thus the test for NULL gets optimized out.
In liblzma, if a null pointer was passed to memcpy(), there were
no checks for NULL *after* the memcpy() call, so I cautiously
suspect that it shouldn't have caused bad behavior in practice,
but it's hard to be sure, and the problematic cases had to be
fixed anyway.
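A minimal sketch of the kind of guard that avoids the undefined
call (illustrative; not the exact changes made in liblzma):

#include <stddef.h>
#include <string.h>

static void
copy_if_any(void *dest, const void *src, size_t size)
{
    /* memcpy(NULL, NULL, 0) is undefined behavior, so make the
     * call only when there is actually something to copy. */
    if (size > 0)
        memcpy(dest, src, size);
}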
Thanks to Jeffrey Walton.
FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is #defined when liblzma
is being built for fuzz testing.
Most fuzzed inputs would normally get rejected because of incorrect
CRC32 and the actual header decoding code wouldn't get fuzzed.
Disabling CRC32 checks avoids this problem. The fuzzer program
must still use the LZMA_IGNORE_CHECK flag to disable verification of
integrity checks of uncompressed data.
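For example, a fuzz target could initialize its decoder roughly
like this (a sketch, not the actual fuzzer in the tree; the
memlimit value is an arbitrary example):

#include <stdint.h>
#include <lzma.h>

/* Decode fuzzed input with verification of the integrity check
 * of the uncompressed data disabled; the CRC32 checks on headers
 * are skipped separately at build time when
 * FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION is #defined. */
static lzma_ret
fuzz_decoder_init(lzma_stream *strm)
{
    return lzma_stream_decoder(strm,
            UINT64_C(100) << 20 /* 100 MiB memlimit */,
            LZMA_IGNORE_CHECK);
}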
A limit of 0 got treated specially in a buggy way, and as a result
the function did nothing. The API doc said that 0 was supposed
to return LZMA_PROG_ERROR, but it didn't.
Now 0 is treated as if 1 had been specified. This is done because
0 is already used to indicate an error from lzma_memlimit_get()
and lzma_memusage().
In addition, lzma_memlimit_set() no longer checks that the new
limit is at least LZMA_MEMUSAGE_BASE. It's counter-productive
for the Index decoder and was actually needed only by the
auto decoder. The auto decoder has now been modified to check for
LZMA_MEMUSAGE_BASE.
It returned LZMA_PROG_ERROR, which was done to avoid zero as
the limit (because it's a special value elsewhere), but using
LZMA_PROG_ERROR is simply inconvenient and can cause bugs.
The fix/workaround is to treat 0 as if it were 1 byte. It's
effectively the same thing. The only weird consequence is
that then lzma_memlimit_get() will return 1 even when 0 was
specified as the limit.
This fixes a very rare corner case in xz --list where a specific
memory usage limit and a multi-stream file could print the
error message "Internal error (bug)" instead of saying that
the memory usage limit is too low.
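In short, the observable behavior after this change is (sketch):

#include <stdint.h>
#include <lzma.h>

/* strm must already have a coder that supports memory limits,
 * e.g. one initialized with lzma_stream_decoder(). */
static uint64_t
memlimit_zero_demo(lzma_stream *strm)
{
    /* A limit of 0 is treated as if 1 byte had been specified... */
    lzma_memlimit_set(strm, 0);

    /* ...so the reported limit is 1, not 0. */
    return lzma_memlimit_get(strm);
}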
Only one definition was visible in a translation unit.
It avoided a few casts and temp variables, but it seems that
this hack doesn't work with link-time optimization in compilers,
as it's not C99/C11 compliant.
Fixes:
http://www.mail-archive.com/xz-devel@tukaani.org/msg00279.html
This is the sane thing to do. The conflict with OpenSSL
on some OSes, and especially the fact that the OS-provided versions
can be significantly slower, make it clear that it was
a mistake to have the external SHA-256 support enabled by
default.
Those who want it can now pass --enable-external-sha256 to
configure. INSTALL was updated with notes about OSes where
this can be a bad idea.
The SHA-256 detection code in configure.ac had some bugs that
could lead to a build failure in some situations. These were
fixed, although it doesn't matter that much now that the
external SHA-256 is disabled by default.
MINIX >= 3.2.0 uses NetBSD's libc and thus has SHA256_Init
in libc instead of libutil. Support for the libutil version
was removed.
When optimizing, GCC can reorder code so that an uninitialized
value gets used in a comparison, which makes Valgrind unhappy.
It doesn't happen when compiled with -O0, which I tend to use
when running Valgrind.
Thanks to Rich Prohaska. I remember this being mentioned long
ago by someone else but nothing was done back then.
People shouldn't rely on the presets when decoding raw streams,
but xz uses the presets as the starting point for raw decoder
options anyway.
lzma_encoder_presets.c was renamed to lzma_presets.c to
make it clear it's not used solely by the encoder code.
lzma_index_dup() calls index_dup_stream() which, in case of
an error, calls index_stream_end() to free memory allocated
by index_stream_init(). However, it illogically didn't
actually free the memory. To make it logical, the tree
handling code was modified a bit in addition to changing
index_stream_end().
Thanks to Evan Nemerson for the bug report.
I know that soname != app version, but I skip AGE=1
in -version-info to make the soname match the liblzma
version anyway. It doesn't hurt anything as long as
it doesn't conflict with library versioning rules.
This way an invalid filter chain is detected at the Stream
encoder initialization instead of delaying it to the first
call to lzma_code() which triggers the initialization of
the actual filter encoder(s).
POSIX supports $< only in inference rules (suffix rules).
Using it elsewhere is a GNU make extension and doesn't
work e.g. with OpenBSD make.
Thanks to Christian Weisgerber for the patch.
Note that this slightly changes how lzma_block_header_decode()
has been documented. Earlier it said that the .version is set
to the lowest required value, but now it says that the .version
field is kept unchanged if possible. In practice this doesn't
affect any old code, because before this commit the only
possible .version was 0.
The Maj macro is used where multiple things are added
together, so making Maj a sum of two expressions allows
some extra freedom for the compiler to schedule the
instructions.
I learned this trick from
<http://www.hackersdelight.org/corres.txt>.
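Concretely, the change amounts to this (shown side by side with
distinct names; the real macro is simply Maj):

/* Textbook majority function: */
#define Maj_xor(x, y, z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))

/* Equivalent form: (x & y) and (z & (x ^ y)) never have set bits
 * in common, so the XOR can be replaced with +, which lets the
 * compiler reassociate this addition with the neighboring ones. */
#define Maj_sum(x, y, z) (((x) & (y)) + ((z) & ((x) ^ (y))))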
This looks weird because the rotations become sequential,
but it helps quite a bit on both 32-bit and 64-bit x86:
- It requires fewer instructions on two-operand
instruction sets like x86.
- It requires one register less, which matters especially
on 32-bit x86.
I hope this doesn't hurt other archs.
I didn't invent this idea myself, but I don't remember where
I saw it first.
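As an example, one of the big sigma functions can be rewritten so
that each rotation feeds the next; the two forms below are equal
bit for bit (rotr_32() is a plain 32-bit rotate-right helper,
defined here for completeness):

#include <stdint.h>

static inline uint32_t
rotr_32(uint32_t x, unsigned n)
{
    return (x >> n) | (x << (32 - n));
}

/* Three independent rotations; needs an extra register to hold
 * intermediate results. */
#define S0_parallel(x) \
    (rotr_32((x), 2) ^ rotr_32((x), 13) ^ rotr_32((x), 22))

/* Sequential rotations; bit-for-bit the same value, because
 * rotr(rotr(rotr(x, 9) ^ x, 11) ^ x, 2)
 *     = rotr(x, 22) ^ rotr(x, 13) ^ rotr(x, 2). */
#define S0_sequential(x) \
    rotr_32(rotr_32(rotr_32((x), 9) ^ (x), 11) ^ (x), 2)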
This way a branch isn't needed for each operation
to choose between blk0 and blk2, and still the code
doesn't grow as much as it would with full unrolling.
This commit just adds the function. Its uses will be in
separate commits.
This hasn't been tested much yet and it's perhaps a bit early
to commit it but if there are bugs they should get found quite
quickly.
Thanks to Jun I Jin from Intel for help and for pointing out
that string comparison needs to be optimized in liblzma.
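In LZ match finding, the string comparison in question is finding
how many leading bytes of two buffers match. A byte-at-a-time
sketch of such a helper (illustrative only; an optimized version
compares word-sized chunks at once):

#include <stddef.h>
#include <stdint.h>

/* Return how many leading bytes of buf1 and buf2 are equal,
 * checking at most limit bytes. */
static size_t
common_prefix_len(const uint8_t *buf1, const uint8_t *buf2, size_t limit)
{
    size_t len = 0;
    while (len < limit && buf1[len] == buf2[len])
        ++len;

    return len;
}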
This avoids a memzero() call for newly allocated memory,
which can be expensive when encoding small streams with
an over-sized dictionary.
To avoid using lzma_alloc_zero() for memory that doesn't
need to be zeroed, lzma_mf.son is now allocated separately,
which requires handling it separately in normalize() too.
Thanks to Vincenzo Innocente for reporting the problem.
Now --block-list=SIZES works in the threaded mode too,
although the performance is still bad due to the use of
LZMA_FULL_FLUSH instead of the new LZMA_FULL_BARRIER.
Now liblzma only uses "mythread" functions and types
which are defined in mythread.h matching the desired
threading method.
Before Windows Vista, there is no direct equivalent to
pthread condition variables. Since this package doesn't
use pthread_cond_broadcast(), pre-Vista threading can
still be kept quite simple. The pre-Vista code doesn't
use anything that wasn't already available in Windows 95,
so the binaries should run even on Windows 95 if someone
happens to care.
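One simple way to get signal-only (no broadcast) condition-variable
behavior out of pre-Vista primitives is a semaphore plus a waiter
count, roughly like this (a sketch of the general technique, not
the exact mythread.h code):

#include <limits.h>
#include <windows.h>

typedef struct {
    HANDLE sem;            /* released once per delivered signal */
    volatile LONG waiters; /* number of threads currently waiting */
} cond_sketch;

static int
cond_sketch_init(cond_sketch *cond)
{
    cond->waiters = 0;
    cond->sem = CreateSemaphore(NULL, 0, LONG_MAX, NULL);
    return cond->sem == NULL ? -1 : 0;
}

/* The caller holds cs and re-checks its predicate in a loop,
 * exactly as with pthread_cond_wait(). */
static void
cond_sketch_wait(cond_sketch *cond, CRITICAL_SECTION *cs)
{
    InterlockedIncrement(&cond->waiters);
    LeaveCriticalSection(cs);
    WaitForSingleObject(cond->sem, INFINITE);
    EnterCriticalSection(cs);
}

static void
cond_sketch_signal(cond_sketch *cond)
{
    /* Wake one waiter if there is one; otherwise do nothing. */
    if (InterlockedDecrement(&cond->waiters) >= 0)
        ReleaseSemaphore(cond->sem, 1, NULL);
    else
        InterlockedIncrement(&cond->waiters);
}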
Previously it was done in configure, but doing that goes
against the Autoconf manual. Autoconf requires that it is
possible to override e.g. prefix after running configure
and that doesn't work correctly if liblzma.pc is created
by configure.
A potential downside of this change is that now e.g.
libdir in liblzma.pc is a standalone string instead of
being defined via ${prefix}, so if one overrides prefix
when running pkg-config the libdir won't get the new value.
I don't know if this matters in practice.
Thanks to Vincent Torri.
To avoid false positives when detecting .lzma files,
rare values in dictionary size and uncompressed size fields
were rejected. They will still be rejected if .lzma files
are decoded with lzma_auto_decoder(), but when using
lzma_alone_decoder() directly, such files will now be accepted.
Hopefully this is an OK compromise.
This doesn't affect xz because xz still has its own file
format detection code. This does affect lzmadec though.
So after this commit lzmadec will accept files that xz or
xz-emulating-lzma doesn't.
NOTE: lzma_alone_decoder() still won't decode all .lzma files
because liblzma's LZMA decoder doesn't support lc + lp > 4.
Reported here:
http://sourceforge.net/projects/lzmautils/forums/forum/708858/topic/7068827
This race condition could cause a deadlock if lzma_end() was
called before finishing the encoding. This can happen with
xz with debugging enabled (the non-debugging version doesn't
call lzma_end() before exiting).
This adds lzma_get_progress() to liblzma and takes advantage
of it in xz.
lzma_get_progress() collects progress information from
the thread-specific structures so that fairly accurate
progress information is available to applications. Adding
a new function seemed to be a better way than making the
information directly available in lzma_stream (like total_in
and total_out are) because collecting the information requires
locking mutexes. It's a waste of time to do it more often than
up-to-date information is actually needed by an application.
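Typical use from an application's progress code looks something
like this (a sketch):

#include <inttypes.h>
#include <stdio.h>
#include <lzma.h>

/* Called from the application's progress-update code, for example
 * once per second; strm is the lzma_stream doing the coding. */
static void
show_progress(lzma_stream *strm)
{
    uint64_t progress_in;
    uint64_t progress_out;
    lzma_get_progress(strm, &progress_in, &progress_out);

    fprintf(stderr, "\r%" PRIu64 " -> %" PRIu64 " bytes",
            progress_in, progress_out);
}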
There is a tiny risk of causing breakage: If an application
assigns lzma_stream.allocator to a non-const pointer, such
code won't compile anymore. I don't know why anyone would do
such a thing though, so in practice this shouldn't cause trouble.
Thanks to Jan Kratochvil for the patch.
lzma_code() could incorrectly return LZMA_BUF_ERROR if
all of the following were true:
- The caller knows how many bytes of output to expect
and only provides that much output space.
- When the last output bytes are decoded, the
caller-provided input buffer ends right before
the LZMA2 end of payload marker. So LZMA2 won't
provide more output anymore, but it won't know it
yet and thus won't return LZMA_STREAM_END yet.
- A BCJ filter is in use and it hasn't left any
unfiltered bytes in the temp buffer. This can happen
with any BCJ filter, but in practice it's more likely
with filters other than the x86 BCJ.
Another situation where the bug can be triggered happens
if the uncompressed size is zero bytes and no output space
is provided. In this case the decompression can fail even
if the whole input file is given to lzma_code().
A similar bug was fixed in XZ Embedded on 2011-09-19.
Symbol versioning is enabled by default on GNU/Linux,
other GNU-based systems, and FreeBSD.
I'm not sure how stable this is, so it may need
backward-incompatible changes before the next release.
The idea is that alpha and beta symbols are considered
unstable and require recompiling the applications that
use those symbols. Once a symbol is stable, it may get
extended with new features in ways that don't break
compatibility with older ABI & API.
The mydist target runs validate_map.sh which should
catch some probable problems in liblzma.map. Otherwise
I would forget to update the map file for new releases.
If the operating system libc or other base libraries
provide SHA-256, use that instead of our own copy.
Note that this doesn't use OpenSSL or libgcrypt or
such libraries, to avoid creating dependencies on
other packages.
This supports at least FreeBSD, NetBSD, OpenBSD, Solaris,
MINIX, and Darwin. They all provide similar but not
identical SHA-256 APIs; each one is a little different.
Thanks to Wim Lewis for the original patch, improvements,
and testing.
Spot candidates by running these commands:
git ls-files |xargs perl -0777 -n \
-e 'while (/\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b/gims)' \
-e '{$n=($` =~ tr/\n/\n/ + 1); ($v=$&)=~s/\n/\\n/g; print "$ARGV:$n:$v\n"}'
Thanks to Jim Meyering for the original patch.
This is the simplest method to do threading, which splits
the uncompressed data into blocks and compresses them
independently of each other. There's room for improvement
especially to reduce the memory usage, but nevertheless,
this is a good start.
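For reference, an application sets up the threaded encoder roughly
like this (a sketch; the field values are arbitrary examples):

#include <lzma.h>

/* Initialize a threaded .xz encoder that splits the input into
 * Blocks which worker threads compress independently. */
static lzma_ret
threaded_encoder_init(lzma_stream *strm)
{
    lzma_mt mt = {
        .flags = 0,
        .threads = 4,            /* number of worker threads */
        .block_size = 0,         /* 0 = let liblzma pick a default */
        .timeout = 0,
        .preset = LZMA_PRESET_DEFAULT,
        .filters = NULL,         /* use the preset */
        .check = LZMA_CHECK_CRC64,
    };

    return lzma_stream_encoder_mt(strm, &mt);
}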
An empty Block was created if the input buffer was empty.
An empty Block wastes a few bytes of space, but more importantly
it triggers a bug in XZ Utils 5.0.1 and older when trying
to decompress such a file: 5.0.1 and older consider such
files to be corrupt. When releasing 5.0.2 I thought that no
encoder creates empty Blocks, but I was wrong.
The biggest problem was that the integrity check type
wasn't validated, and e.g. lzma_easy_buffer_encode()
would create a corrupt .xz Stream if given an unsupported
Check ID. Luckily applications don't usually try to use
an unsupported Check ID, so this bug is unlikely to cause
many real-world problems.
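The missing validation corresponds to a check along these lines
(a sketch using the public helpers, not the exact internal fix):

#include <lzma.h>

/* Reject a Check ID that is out of range or not compiled in
 * before building the Stream, instead of silently producing
 * a corrupt .xz Stream. */
static lzma_ret
validate_check_id(lzma_check check)
{
    if ((unsigned int)check > LZMA_CHECK_ID_MAX)
        return LZMA_PROG_ERROR;

    if (!lzma_check_is_supported(check))
        return LZMA_UNSUPPORTED_CHECK;

    return LZMA_OK;
}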
It leaks old filter options structures (a hundred bytes or so)
every time the lzma_stream is reinitialized. With the xz tool,
this happens when compressing multiple files.
The decoder considered empty LZMA2 streams to be corrupt.
This shouldn't matter much with .xz files, because no encoder
creates empty LZMA2 streams in .xz. This bug is more likely
to cause problems in applications that use raw LZMA2 streams.
This has no semantic changes. I find the new names slightly
more logical and they match the names that are already used
in XZ Embedded.
The name fastpos wasn't changed (not worth the hassle).
If any of the reserved members in lzma_stream are non-zero
or non-NULL, LZMA_OPTIONS_ERROR is returned. It is possible
that a new feature in the future will be indicated by just setting
a reserved member to some other value, so old liblzma
versions need to catch it as an unsupported feature.