I don't know the details, but my impression is that there is
no problem in practice when using GCC, since people have built
xz with GCC without patching xz; still, renaming the variable
cannot hurt either.
Thanks to Mark Ashley.
Previously, --block-list and --block-size worked together
only in threaded mode: --block-list specifies the Block
boundaries and --block-size sets the maximum size of a Block.
Now this combination works in single-threaded mode too.
Thanks to James M Leddy for the original patch.
Now if --block-list is used in threaded mode, the encoder
won't need to flush at each Block boundary specified via
--block-list. This improves performance a lot, making
threading helpful with --block-list.
The flush timer was reset after LZMA_FULL_FLUSH but since
LZMA_FULL_BARRIER doesn't flush, resetting the timer is
no longer done.
Now liblzma only uses "mythread" functions and types
which are defined in mythread.h matching the desired
threading method.
Before Windows Vista, there is no direct equivalent to
pthread condition variables. Since this package doesn't
use pthread_cond_broadcast(), pre-Vista threading can
still be kept quite simple. The pre-Vista code doesn't
use anything that wasn't already available in Windows 95,
so the binaries should run even on Windows 95 if someone
happens to care.
When --flush-timeout=TIMEOUT is used, xz will use
LZMA_SYNC_FLUSH if read() would block and at least TIMEOUT
milliseconds have elapsed since the previous flush.
This can be useful in realtime-like use cases where the
data is simultaneously decompressed by another process
(possibly on a different computer). If new uncompressed
input data is produced slowly, without this option xz could
buffer the data for a long time before it becomes
decompressible from the output.
If TIMEOUT is 0, the feature is disabled. This is the default.
This commit affects the compression side. Using xz on
the decompression side for the above purpose doesn't work
very well yet, because there is quite a bit of input and
output buffering when decompressing.
Neither --long-help nor the man page has been updated yet.
The details of this feature may change.
Testing for end of file was no longer correct after full flushing
became possible with --block-size=SIZE and --block-list=SIZES.
There was no bug in practice though because xz just made a few
unneeded zero-byte reads.
This switches the units from microseconds to milliseconds.
clock_gettime(CLOCK_MONOTONIC) will now be used if available.
There is still a fallback to gettimeofday().
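A minimal sketch of such a millisecond clock (assuming an
Autoconf-style HAVE_CLOCK_GETTIME macro; this is not the actual
xz code):

#include <stdint.h>
#include <sys/time.h>
#include <time.h>

static uint64_t
now_ms(void)
{
#ifdef HAVE_CLOCK_GETTIME
    /* The monotonic clock isn't affected by system time changes. */
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
#else
    /* Fallback: wall-clock time from gettimeofday(). */
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t)tv.tv_sec * 1000 + (uint64_t)tv.tv_usec / 1000;
#endif
}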
Now both reading and writing should be without
race conditions with signals.
There might still be signal handling issues left.
Signals are blocked during many operations to avoid
EINTR, but this may cause problems, e.g. if writing to
stderr blocks when trying to display an error message.
It is possible that a signal to set user_abort arrives right
before a blocking system call is made. In this case the call
may block until another signal arrives, while the wanted
behavior is to make xz clean up and exit as soon as possible.
After this commit, the race condition is avoided with the
input side which already uses non-blocking I/O. The output
side still uses blocking I/O and thus has the race condition.
POSIX says that fcntl(fd, F_SETFL, flags) returns -1 on
error and "other than -1" on success. This is how it is
documented e.g. on OpenBSD too. On Linux, success with
F_SETFL is always 0 (at least according to fcntl(2)
from man-pages 3.51).
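A minimal sketch of the portable success check (the function name
is made up for this example):

#include <fcntl.h>
#include <stdbool.h>

/* Returns true on success. Only -1 indicates an error; a successful
 * call may return something other than 0, so don't compare the
 * result against 0. */
static bool
clear_o_append(int fd, int old_flags)
{
    return fcntl(fd, F_SETFL, old_flags & ~O_APPEND) != -1;
}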
Due to a wrong variable name, when writing a sparse file
to standard output, *all* file status flags were cleared
(to the extent the operating system allowed it) instead of
only clearing the O_APPEND flag. In practice this worked
fine in the common situations on GNU/Linux, but I didn't
check how it behaved elsewhere.
The original flags were still restored correctly. I still
changed the code to use a separate boolean variable to
indicate when the flags should be restored instead of
relying on a special value in stdout_flags.
The input file can be a FIFO or something else that doesn't
support posix_fadvise(), so don't check the return value
even with an assertion. Nothing bad happens if the call
to posix_fadvise() fails.
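A minimal sketch of the idea (the function name is made up for
this example):

#include <fcntl.h>

/* The input may be a FIFO or another file type that doesn't
 * support posix_fadvise(), so the return value is ignored on
 * purpose; a failed call is harmless. */
static void
hint_sequential_read(int fd)
{
    (void)posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
}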
It is a no-op for now, but if an old xz version is used
together with a newer liblzma that supports something new,
then this check becomes important and will stop the old xz
from trying to parse files that it won't understand.
This affects only "xz -lvv". Normal decompression with xz
already detected if Block Header and Index had mismatched
Uncompressed Size fields. So this just makes "xz -lvv"
show such files as corrupt instead of showing the
Uncompressed Size from Index.
Now the interaction of presets and custom filter chains
is described correctly. Earlier it contradicted itself.
Thanks to DevHC who reported these issues on IRC to me
on 2012-12-14.
There was somewhat illogical behavior when --extreme was
specified and mixed with custom filter chains.
Before this commit, "xz -9 --lzma2 -e" was equivalent
to "xz --lzma2". After it is equivalent to "xz -6e"
(all earlier preset options get forgotten when a custom
filter chain is specified and the default preset is 6
to which -e is applied). I find this less illogical.
This also affects the meaning of "xz -9e --lzma2 -7".
Earlier it was equivalent to "xz -7e" (the -e specified
before a custom filter chain wasn't forgotten). Now it
is "xz -7". Note that "xz -7e" still is the same as "xz -e7".
Hopefully very few cared about this in the first place,
so pretty much no one should even notice this change.
Thanks to Conley Moorhous.
This adds lzma_get_progress() to liblzma and takes advantage
of it in xz.
lzma_get_progress() collects progress information from
the thread-specific structures so that fairly accurate
progress information is available to applications. Adding
a new function seemed to be a better way than making the
information directly available in lzma_stream (like total_in
and total_out are) because collecting the information requires
locking mutexes. It's a waste of time to do that more often
than an application actually needs up-to-date information.
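A minimal sketch of how an application might poll the progress
counters (hypothetical application code, not part of xz):

#include <inttypes.h>
#include <stdio.h>
#include <lzma.h>

static void
show_progress(lzma_stream *strm)
{
    uint64_t progress_in;
    uint64_t progress_out;

    /* Collects the counters from the thread-specific structures,
     * which may require locking mutexes, so call this only when
     * the numbers are about to be displayed. */
    lzma_get_progress(strm, &progress_in, &progress_out);

    fprintf(stderr, "\r%" PRIu64 " B in, %" PRIu64 " B out",
            progress_in, progress_out);
}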
Now the following works as you would expect:
echo foo | xz > foo.xz
echo bar | xz >> foo.xz
( xz -dc --single-stream ; xz -dc --single-stream ) < foo.xz
Note that it doesn't work if the input is not seekable
or if there is Stream Padding between the concatenated
.xz Streams.
Spot candidates by running these commands:
git ls-files |xargs perl -0777 -n \
-e 'while (/\b(then?|[iao]n|i[fst]|but|f?or|at|and|[dt]o)\s+\1\b/gims)' \
-e '{$n=($` =~ tr/\n/\n/ + 1); ($v=$&)=~s/\n/\\n/g; print "$ARGV:$n:$v\n"}'
Thanks to Jim Meyering for the original patch.
This is incompatible with the 8.3 support patch made by
Juan Manuel Guerrero. I think this one is nicer, but
I need to get feedback from DOS users before saying
that this is the final version of 8.3 filename support.
Try to avoid overwriting the source file if --force is
used and the generated destination filename refers to
the source file. This can happen with 8.3 filenames where
extra characters are ignored.
If the generated output file refers to a special file
like "con" or "prn", refuse to write to it even if --force
is used.
xz didn't compress setuid/setgid/sticky files and files
with multiple hard links even with --force. This bug was
introduced in 23ac2c44c3.
Thanks to Charles Wilson.
Calling raise() to kill xz when the user has pressed C-c
is a bit verbose on OS/2 and DOS/DJGPP. Instead of
calling raise(), set only the exit status to 1.
Most distros want xz linked against shared liblzma, so
it doesn't help much to require --enable-dynamic for that.
Those who want to avoid PIC on x86-32 to get better
performance can still do it, e.g. by using --disable-shared
to compile xz and then doing another pass to compile shared
liblzma.
Part of these static/dynamic tricks were needed for Windows
in the past. Nowadays we rely on GCC and binutils to do the
right thing with auto-import. If the Autotooled build system
needs to support some other toolchain on Windows in the future,
this may need some rethinking.
Lots of content was updated on the xz man page.
Technical improvements:
- Start a new sentence on a new line.
- Use fairly short lines.
- Use constant-width font for examples (where supported).
- Some minor cleanups.
Thanks to Jonathan Nieder for some language fixes.
The code assumed that printing numbers with thousand separators
and decimal points would always produce only US-ASCII characters.
This was used for buffer sizes (with snprintf(), no overflows)
and aligning columns of the progress indicator and --list. That
assumption was wrong (e.g. LC_ALL=fi_FI.UTF-8 with glibc), so
multibyte character support was added in this commit. The old
way is used if the operating system doesn't have enough multibyte
support (e.g. lacks wcwidth()).
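A minimal sketch of the general technique (not the actual xz code),
assuming mbrtowc() and wcwidth() are available:

#include <stddef.h>
#include <string.h>
#include <wchar.h>

/* Return the number of terminal columns the string needs,
 * or 0 if the string isn't valid in the current locale. */
static size_t
display_width(const char *str)
{
    mbstate_t state;
    memset(&state, 0, sizeof(state));

    size_t width = 0;
    size_t left = strlen(str);

    while (left > 0) {
        wchar_t wc;
        const size_t ret = mbrtowc(&wc, str, left, &state);
        if (ret == 0 || ret == (size_t)-1 || ret == (size_t)-2)
            return 0;   /* Invalid or truncated sequence */

        const int w = wcwidth(wc);
        if (w > 0)
            width += (size_t)w;

        str += ret;
        left -= ret;
    }

    return width;
}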
The sizes of the buffers were increased to accommodate
multibyte characters. I don't know how big they should be
exactly, but they aren't used for anything critical, so it's
not too bad. If they still aren't big enough, I'll hopefully
get a bug report.
snprintf() takes care of avoiding buffer overflows.
Some static buffers were replaced with buffers allocated on
the stack. double_to_str() was removed. uint64_to_str() and
uint64_to_nicestr() now share the static buffer and test
for thousand separator support.
Integrity check names "None" and "Unknown-N" (2 <= N <= 15)
were marked to be translated. I had forgotten these, and they
wouldn't have worked correctly before this commit anyway,
because printing tables with multibyte strings didn't work.
Thanks to Marek Černocký for reporting the bug about
misaligned table columns in --list output.
For several people, the limiter causes bigger problems than
it solves, so it is better to have it disabled by default.
Those who want to have a limiter by default need to enable
it via the environment variable XZ_DEFAULTS.
Support for environment variable XZ_DEFAULTS was added. It is
parsed before XZ_OPT and technically identical with it. The
intended uses differ quite a bit though; see the man page.
The memory usage limit can now be set separately for
compression and decompression using --memlimit-compress and
--memlimit-decompress. To set both at once, -M or --memlimit
can be used. --memory was retained as a legacy alias for
--memlimit for backwards compatibility.
The semantics of --info-memory were changed in a
backwards-incompatible way. Compatibility wasn't meaningful
due to changes in the memory usage limiter functionality.
The memory usage limiter info is no longer shown at the
bottom of xz --long-help.
The memory usage limiter support was removed completely
from xzdec.
xz's man page was updated to match the above changes. Various
unrelated fixes were also made to the man page.
message_filters_to_str() converts the filter chain to
a string. message_filters_show() replaces the original
message_filters().
uint32_to_optstr() was also added to show the dictionary
size in a nicer format when possible.
The extra space for showing both has been taken from the
sizes field. If the sizes grow big, bigger units than MiB
will be used. It makes it slightly difficult to see that
progress is still happening with huge files, but it should
be OK in practice.
Thanks to Trent W. Buck for <http://bugs.debian.org/574583>
and Jonathan Nieder for suggestions on how to fix it.
Originally both base-2 and base-10 were supported, but since
there seems to be little need for base-10 in XZ Utils, treat
everything as base-2 and also be more relaxed about the case
of the first letter of the suffix. Now xz will accept e.g.
KiB, Ki, k, K, kB, and KB, and interpret them all as 1024. The
recommended spellings of the suffixes are still KiB, MiB, and GiB.
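A minimal sketch of the relaxed first-letter handling (validation
of the rest of the suffix is omitted; this is not the actual xz code):

#include <stdint.h>

static uint64_t
suffix_multiplier(char first_letter)
{
    /* Only the first letter picks the power of 1024; "KiB", "Ki",
     * "kB", "KB", "k", and "K" all end up here as 'k' or 'K'. */
    switch (first_letter) {
    case 'k': case 'K':
        return UINT64_C(1024);
    case 'm': case 'M':
        return UINT64_C(1024) * 1024;
    case 'g': case 'G':
        return UINT64_C(1024) * 1024 * 1024;
    default:
        return 1;
    }
}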
It still feels a bit wrong to round 1 byte to 1 MiB but
at least it is now done consistently so that the same
byte value is always rounded the same way to MiB.
Previously the default limit was always 40 % of RAM. The new
limit is a little more complex (a sketch follows the list):
- If 40 % of RAM is at least 80 MiB, 40 % of RAM is used
as the limit.
- If 80 % of RAM is over 80 MiB, 80 MiB is used as the limit.
- Otherwise 80 % of RAM is used as the limit.
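A minimal sketch of this rule (total_ram is in bytes; this is not
the actual hardware.c code):

#include <stdint.h>

static uint64_t
default_memlimit(uint64_t total_ram)
{
    uint64_t limit = total_ram * 40 / 100;

    /* On low-RAM systems, allow up to 80 % of RAM but never
     * more than 80 MiB. */
    if (limit < UINT64_C(80) * 1024 * 1024) {
        limit = total_ram * 80 / 100;
        if (limit > UINT64_C(80) * 1024 * 1024)
            limit = UINT64_C(80) * 1024 * 1024;
    }

    return limit;
}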
This should make it possible to decompress files created with
"xz -9" on more systems. Swapping is generally more expected
on systems with less RAM, so a higher default limit on them
shouldn't cause too many bad surprises in terms of heavy swapping.
Instead, the higher default limit should reduce the number of
bad surprises when it used to prevent decompression of files
created with "xz -9". The DoS prevention system shouldn't be
a DoS itself.
Note that even with the new default limit, a system with 64 MiB
RAM cannot decompress files created with "xz -9" without user
overriding the limit. This should be OK, because if xz is going
to need more memory than the system has RAM, it will run very,
very slowly and thus it's good that the user has to override
the limit in that case.
If signal handlers haven't been established, then it's
useless to try to block them, especially since the sigset_t
used for blocking hasn't been initialized yet.
The opening of the destination file is now delayed a little.
The coder is initialized first, and if decompressing, the
memory usage of the first Block is compared against the
memory usage limit before the destination file is opened.
This means that if --force was used, the old "target" file
won't be deleted so easily when something goes wrong very early.
Thanks to Mark K for the bug report.
The above fix required some changes to progress message
handling. Now there is a separate function for setting and
printing the filename. It is used also in list.c.
list_file() now handles stdin correctly (gives an error).
A useless check for user_abort was removed from file_io.c.
This is a bit rough but should be useful for basic things.
Ideas (with detailed examples) about the output format are
welcome.
The output of --robot --list is not necessarily stable yet,
although I don't currently have any plans about changing it.
The man page hasn't been updated yet.
to stdout even if --force is used.
--force will still enable compression of symlinks, but only
in case they point to a regular file.
The new way simply seems more reasonable. It matches gzip's
behavior while the old one matched bzip2's behavior.
a regular file.
Sparse file creation can be disabled with --no-sparse.
I don't promise yet that the name of this option won't
change before 5.0.0. It's possible that the code that checks
when it is safe to use sparse output on stdout is not good
enough, and a more flexible command line option is needed to
configure sparse file handling.
Currently --robot works only with --info-memory and
--version. --help and --long-help work too, but --robot
has no effect on them.
Thanks to Jonathan Nieder for the original patches.
I had hoped to keep liblzma as purely a compression
library as possible (e.g. file I/O will go into
a different library), but it seems that applications
linking against liblzma need some way to determine
the memory usage limit, and knowing the amount of RAM
is one reasonable way to help make such decisions.
Thanks to Jonathan Nieder for the original patch.
Originally the idea was that using LZMA_FULL_FLUSH
with Stream encoder would read the filter chain
from the same array that was used to initialize the
Stream encoder. Since most apps wouldn't use
LZMA_FULL_FLUSH, most apps wouldn't need to keep
the filter chain available after initializing the
Stream encoder. However, due to my mistake, it
actually required keeping the array always available.
Since setting the new filter chain via the array
used at initialization time is not a nice way to do
it for a couple of reasons, this commit ditches it
and introduces lzma_filters_update(). This new function
also replaces the "persistent" flag used by LZMA2
(and the to-be-designed Subblock filter), which was also
an ugly thing to do.
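A minimal sketch of how an application might use the new function
(hypothetical application code; the options object is kept by the
caller since the encoder may still refer to it until the next
Block is started):

#include <lzma.h>

static lzma_ret
switch_to_preset(lzma_stream *strm, lzma_options_lzma *opt,
        uint32_t preset)
{
    if (lzma_lzma_preset(opt, preset))
        return LZMA_OPTIONS_ERROR;

    const lzma_filter filters[] = {
        { .id = LZMA_FILTER_LZMA2, .options = opt },
        { .id = LZMA_VLI_UNKNOWN, .options = NULL },
    };

    /* With the Stream encoder, the new filter chain takes
     * effect at the start of the next Block. */
    return lzma_filters_update(strm, filters);
}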
Thanks to Alexey Tourbin for reminding me about the problem
that Stream encoder used to require keeping the filter
chain allocated.
Separate a few reusable components from XZ Utils specific
code. The reusable code is now in "tuklib" modules. A few
more could be separated still, e.g. bswap.h.
Fix some bugs in lzmainfo.
Fix physmem and cpucores code on OS/2. Thanks to Elbert Pol
for help.
Add OpenVMS support into physmem. Add a few #ifdefs to ease
building XZ Utils on OpenVMS. Thanks to Jouk Jansen for the
original patch.
This fixes "make install" on operating systems using
a suffix for executables.
Cygwin is treated specially. The symlink names won't have
the .exe suffix even though the executables themselves do.
Thanks to Charles Wilson.
It seems that, in addition to Windows and DOS, OpenBSD
also lacks support for %'d-style printf() format strings.
So far that is the only modern POSIX-like system I know of
with this problem, but after this hack, the thousand
separator shouldn't be a problem on any system.
Maybe testing if a format string like %'d produces
reasonable output is invoking undefined behavior on some
systems, but so far all the problematic systems I've tried
just print the raw format string (e.g. %'d prints 'd).
Maybe an Autoconf test would have been better, but this
hack also works for cross-compilation, and it avoids
recompilation if the system libc starts to support
the thousand separator.
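A minimal sketch of such a run-time check (not the actual xz code):

#include <stdbool.h>
#include <stdio.h>

static bool
thousand_sep_works(void)
{
    /* On systems that don't understand the ' flag, the raw format
     * string tends to be printed as-is (e.g. "'u"), so it is enough
     * to check that the output begins with a digit. */
    char buf[16];
    snprintf(buf, sizeof(buf), "%'u", 1000U);
    return buf[0] >= '0' && buf[0] <= '9';
}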
like "un", "cat", and "lz" when determining if
xz is run as unxz, xzcat, lzma, unlzma, or lzcat.
This is to ensure that if xz is renamed (e.g. via
--program-transform-name), it doesn't so easily end up
working in the wrong mode.
use AC_PROG_SED. We don't do anything fancy with sed,
so this should work OK. libtool 2.2 sets SED but 1.5
doesn't, so $(SED) happened to work when using libtool 2.2.
files as is to standard output.
This feature is needed to be more compatible with gzip's
behavior. This was more complicated to implement than it
sounds because of the way liblzma returns errors with files
that are only a few bytes in size. xz now has its own file
type detection code and no longer uses lzma_auto_decoder().
Don't use libtool convenience libraries to avoid recently
discovered long-standing subtle but somewhat severe bugs
in libtool (at least 1.5.22 and 2.2.6 are affected). It
was found when porting XZ Utils to Windows
<http://lists.gnu.org/archive/html/libtool/2009-06/msg00070.html>
but the problem is significant also e.g. on GNU/Linux.
Unless --disable-shared is passed to configure, the static
library built from a set of convenience libraries will
contain PIC objects. That is, while libtool builds non-PIC
objects too, only PIC objects will be used from the
convenience libraries. On 32-bit x86 (tested on mobile XP2400+),
using PIC instead of non-PIC makes the decompressor 10 % slower
with the default CFLAGS.
So while xz was linked against static liblzma by default,
it got the slower PIC objects unless --disable-shared was
used. I tend to develop and benchmark with --disable-shared
due to the faster build time, so I hadn't noticed the problem
in benchmarks earlier.
This commit also adds support for building Windows resources
into liblzma and executables.
--format=lzma. This means that xz emulating lzma
doesn't decompress .xz files, while before this
commit it did. The new way is slightly simpler in
code and especially in upcoming documentation.
compressing and decompressing. This should be OK now that
xz automatically scales down the compression settings if
they would exceed the memory usage limit (earlier, the limit
for compression was increased to 90 % because low limit broke
scripts that used "xz -9" on systems with low RAM).
Support specifying the memory usage limit as a percentage
of RAM (e.g. --memory=50%).
Support --threads=0 to reset the thread limit to the default
value (number of available CPU cores). Use UINT32_MAX instead
of SIZE_MAX as the maximum in args.c; hardware.c was already
expecting a uint32_t value.
Cleaned up the output of --help and --long-help.
Don't round the memory usage limit in xzdec --help to avoid
an integer overflow and to not give wrong impression that
the limit is high enough when it may not actually be.
- Don't use Windows-specific code on Windows. The old code
required at least Windows 2000. Now it should work on
Windows 98 and later, and maybe on Windows 95 too.
- Use less precision when showing estimated remaining time.
- Fix some small design issues.
the number of CPU cores. Added support for using sysinfo()
on Linux systems whose libc lacks appropriate sysconf()
support (at least dietlibc). The Autoconf macros were
split into separate files, and CPU core count detection
was moved from hardware.c to cpucores.h. The core count
isn't used for anything real for now, so a problematic
part in process.c was commented out.
Now configure.ac will get the version number directly from
src/liblzma/api/lzma/version.h. The intent is to reduce the
number of places where the version number is duplicated.
In the future, support for displaying the Git commit ID may
be added too.
linked statically or dynamically against liblzma. The
default is still to use static liblzma, but it can now
be changed by passing --enable-dynamic to configure.
Thanks to Mike Frysinger for the original patch.
Fixed a few minor bugs in configure.ac.
lzma_memlimit_encoder and lzma_memlimit_decoder to
lzma_raw_encoder_memlimit and lzma_raw_decoder_memlimit. :-(
Now it is fixed. Hopefully it doesn't cause too much trouble
to those who already thought the API was stable.
Half of the developers were already forgetting to use these
functions, which could have caused total breakage in some future
liblzma version, or even now if --enable-small was used. Now
liblzma uses pthread_once() to do the initializations, unless
it has been built with --disable-threads, which makes these
initializations thread-unsafe.
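A minimal sketch of the pthread_once() pattern (the names are made
up; this is not the actual liblzma code):

#include <pthread.h>

static pthread_once_t init_once = PTHREAD_ONCE_INIT;

static void
do_init(void)
{
    /* Fill in the CRC32/CRC64 lookup tables and other
     * lazily initialized data here. */
}

/* Safe to call from multiple threads; do_init() runs exactly once. */
static void
init_if_needed(void)
{
    pthread_once(&init_once, &do_init);
}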
When --enable-small isn't used, liblzma currently gets needlessly
linked against libpthread (on systems that have it). While it
is stupid for now, liblzma will need threads in the future
anyway, so this stupidity is only temporary.
When --enable-small is used, different code for CRC32 and CRC64
is now used than without --enable-small. This made the resulting
binary slightly smaller, but the main reason was to clean it up
and to handle the lack of lzma_init_check().
The pkg-config file lzma.pc was renamed to liblzma.pc. I'm not
sure if it works correctly and portably for static linking
(Libs.private includes -pthread or other operating system
specific flags). Hopefully someone complains if it is bad.
lzma_rc_prices[] is now included as a precomputed array even
with --enable-small. It's just 128 bytes now that it uses uint8_t
instead of uint32_t. The smaller array seemed to be at least as
fast as the more bloated uint32_t array on x86; hopefully it's
not bad on other architectures.