| Name | Date | Size | #Lines | LOC |
|---|---|---|---|---|
| doc/ | 03-May-2022 | - | 1,060 | 904 |
| libzpaq/ | 21-Oct-2016 | - | 3,729 | 2,863 |
| lzma/ | 03-May-2022 | - | 8,754 | 7,089 |
| m4/ | 21-Oct-2016 | - | 9,224 | 8,326 |
| man/ | 03-May-2022 | - | 1,736 | 1,220 |
| AUTHORS | 09-Mar-2015 | 897 | 24 | 21 |
| BUGS | 09-Mar-2015 | 219 | 9 | 6 |
| COPYING | 09-Mar-2015 | 17.7 KiB | 340 | 281 |
| ChangeLog | 10-Jun-2016 | 47.4 KiB | 1,027 | 935 |
| INSTALL | 09-Mar-2015 | 15.2 KiB | 366 | 284 |
| Lrzip.h | 10-Mar-2015 | 31.1 KiB | 767 | 121 |
| Makefile.am | 10-Jun-2016 | 2.3 KiB | 127 | 103 |
| Makefile.in | 03-May-2022 | 48.1 KiB | 1,405 | 1,264 |
| README-NOT-BACKWARD-COMPATIBLE | 09-Mar-2015 | 3.5 KiB | 105 | 76 |
| README.md | 10-Jun-2016 | 20.7 KiB | 484 | 361 |
| TODO | 09-Mar-2015 | 702 | 26 | 15 |
| WHATS-NEW | 10-Mar-2015 | 20.5 KiB | 530 | 423 |
| aclocal.m4 | 21-Oct-2016 | 41.4 KiB | 1,160 | 1,054 |
| aes.c | 09-Mar-2015 | 16.1 KiB | 546 | 387 |
| aes.h | 09-Mar-2015 | 4.3 KiB | 141 | 40 |
| autogen.sh | 09-Mar-2015 | 424 | 18 | 15 |
| compile | 21-Oct-2016 | 7.2 KiB | 348 | 258 |
| config.guess | 21-Oct-2016 | 42.5 KiB | 1,442 | 1,249 |
| config.h.in | 21-Oct-2016 | 6 KiB | 235 | 169 |
| config.sub | 21-Oct-2016 | 35.3 KiB | 1,814 | 1,676 |
| configure | 21-Oct-2016 | 636.7 KiB | 21,217 | 17,978 |
| configure.ac | 21-Oct-2016 | 5.2 KiB | 183 | 161 |
| decompress_demo.c | 09-Mar-2015 | 1.5 KiB | 59 | 38 |
| depcomp | 21-Oct-2016 | 23 KiB | 792 | 502 |
| description-pak | 09-Mar-2015 | 6 | 2 | 1 |
| install-sh | 21-Oct-2016 | 14.8 KiB | 509 | 329 |
| liblrzip.c | 10-Jun-2016 | 14.9 KiB | 742 | 618 |
| liblrzip_demo.c | 09-Mar-2015 | 10.8 KiB | 346 | 305 |
| liblrzip_private.h | 09-Mar-2015 | 416 | 23 | 18 |
| lrzip.c | 17-Oct-2016 | 39.3 KiB | 1,343 | 1,087 |
| lrzip.pc.in | 09-Mar-2015 | 200 | 11 | 9 |
| lrzip_core.h | 10-Jun-2016 | 2.2 KiB | 51 | 30 |
| lrzip_private.h | 03-May-2022 | 13.7 KiB | 545 | 450 |
| lrztar | 03-May-2022 | 5.1 KiB | 148 | 121 |
| ltmain.sh | 21-Oct-2016 | 316.8 KiB | 11,157 | 7,986 |
| main.c | 17-Oct-2016 | 21.6 KiB | 690 | 596 |
| md5.c | 09-Mar-2015 | 14.9 KiB | 463 | 288 |
| md5.h | 09-Mar-2015 | 4 KiB | 119 | 47 |
| missing | 21-Oct-2016 | 6.7 KiB | 216 | 143 |
| runzip.c | 10-Jun-2016 | 13.8 KiB | 488 | 391 |
| runzip.h | 09-Mar-2015 | 929 | 28 | 5 |
| rzip.c | 10-Jun-2016 | 34.9 KiB | 1,249 | 952 |
| rzip.h | 10-Jun-2016 | 933 | 28 | 6 |
| sha4.c | 09-Mar-2015 | 10.2 KiB | 325 | 236 |
| sha4.h | 09-Mar-2015 | 2.7 KiB | 97 | 31 |
| stream.c | 10-Jun-2016 | 53.7 KiB | 1,909 | 1,460 |
| stream.h | 10-Jun-2016 | 2.1 KiB | 47 | 24 |
| util.c | 14-Jun-2016 | 13.5 KiB | 444 | 332 |
| util.h | 10-Jun-2016 | 5.1 KiB | 172 | 130 |

README-NOT-BACKWARD-COMPATIBLE

lrzip-0.60 update

Files created with lrzip 0.6x are not backward compatible with versions
prior to 0.60. v0.6x can read files generated with earlier versions.

Con Kolivas March 2011.

lrzip-0.50 update

Files created with lrzip 0.5x are not backward compatible with versions
prior to 0.50. v0.50 can read files generated with earlier versions.

lrzip-0.41 update

Files created with lrzip 0.41 and selecting the -z option for
ZPAQ compression are not backward compatible.

lrzip-0.40 update!

FILES CREATED WITH LRZIP 0.40+ are not backward compatible with
versions prior to 0.40. The file format was completely changed
to 64bit addressing throughout to allow massive compression windows
on massive files. v0.40+ will detect older version files and
decompress them fine, but will always generate files in
the new format.

Con Kolivas November 2009.

lrzip-0.24 update!

FILES CREATED WITH LRZIP 0.23 and earlier are NOT
BACKWARD COMPATIBLE if compressed with LZMA.

All other compression schemes are compatible.

The lrz file header has changed. It now stores the encoded
parameters LZMA uses in bytes 16-20. This is a departure
from the method used in lrzip-0.23.

Please preserve the binary of lrzip-0.23 or earlier if you
require access to lrzip files using LZMA compression created
with an earlier version.

FILES CREATED WITH LRZIP-0.22 MAY NOT BE BACKWARD COMPATIBLE!

lrzip-0.22 uses a slightly different and improved method of
compressing and decompressing files compared to lrzip-0.19 and
earlier versions.

ANY FILE COMPRESSED WITH LZMA USING A COMPRESSION LEVEL > 7
cannot be decompressed with any earlier version of lrzip.

ANY FILE COMPRESSED WITH LZMA USING A COMPRESSION LEVEL <= 7
CAN be decompressed with earlier versions of lrzip.

ANY FILE COMPRESSED WITH AN EARLIER VERSION OF LRZIP CAN
be decompressed with lrzip-0.22.
---------------------------------------------------------
Brief technical discussion.

Earlier versions of lrzip used a variable dictionary buffer size
when compressing files with LZMA, using a formula of
compression level + 14 bits; the LZMA dictionary buffer size was
computed as 2^(level+14). 2MB (21 bits) had been the default for
compression level 7. Level 8 was 4MB and level 9, 8MB.

The default decompression level was fixed at 23 bits, 8MB. This
was equal to the (then) largest possible dictionary buffer size,
9+14=23, 2^23=8MB. So all data, regardless of compression level,
could be decompressed.

Beginning in lrzip-0.22, the default dictionary buffer size is
level + 16 bits (7+16=23 bits, or 8MB). Files compressed with the
default level or lower CAN be decompressed with an earlier lrzip
version.

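As an illustration of the arithmetic above (a sketch only, not part of lrzip
itself), the level + 16 buffer sizes can be computed in the shell:

```bash
# Illustrative only: dictionary buffer size = 2^(level + 16) bytes in lrzip-0.22
for level in 7 8 9; do
    echo "level $level -> $(( (1 << (level + 16)) / 1048576 ))MB"
done
# prints: level 7 -> 8MB, level 8 -> 16MB, level 9 -> 32MB
```
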
The maximum dictionary buffer size for lrzip-0.22 is now 25 bits,
or 32MB. Files compressed using level 8 or level 9 (24 or 25 bits)
cannot be decompressed with earlier versions of lrzip, since the
fixed 8MB dictionary buffer used for decompression in lrzip-0.19
and earlier cannot hold the data from lrzip-0.22.

Here is a table to show what can and cannot be decompressed with
lrzip-0.19 and earlier:

LRZIP-0.22	LRZIP-0.19
COMPRESSION	CAN		DICTIONARY
LEVEL		DECOMPRESS?	BUFFER SIZE
-----------	-----------	-----------
<=7		YES		<=8MB (2^23)
8		NO		16MB  (2^24)
9		NO		32MB  (2^25)

lrzip-0.22 can decompress all earlier files.

lrzip-0.22 uses three bytes in the compressed file to store the
compression level used. Thus, when decompressing, lrzip will read
the proper dictionary buffer size and use it when decompressing
the file. See the file magic.header.txt for more information.

January 2008
Peter Hyman
pete@peterhyman.com

README.md

lrzip - Long Range ZIP or LZMA RZIP
===================================

A compression utility that excels at compressing large files (usually > 10-50 MB).
Larger files and/or more free RAM mean that the utility will be able to
compress your files more effectively (i.e. faster / smaller size), especially if the
filesize(s) exceed 100 MB. You can either choose to optimise for speed (fast
compression / decompression) or size, but not both.


### haneefmubarak's TL;DR for the long explanation:

Just change the word `directory` to the name of the directory you wish to compress.

#### Compression:

```bash
lrzdir=directory; tar cvf $lrzdir.tar $lrzdir; lrzip -Ubvvp `nproc` -S .bzip2-lrz -L 9 $lrzdir.tar; rm -fv $lrzdir.tar; unset lrzdir
```

`tar`s the directory, then maxes out all of the system's processor cores
along with sliding-window RAM to give the best **BZIP2** compression while being as fast as possible,
enables max verbosity output, attaches the extension `.bzip2-lrz`, and finally
gets rid of the temporary tarfile. Uses a tempvar `lrzdir` which is unset at the end.

#### Decompression for the kind of file from above:

```bash
lrzdir=directory; lrunzip -cdivvp `nproc` -o $lrzdir.tar $lrzdir.tar.bzip2-lrz; tar xvf $lrzdir.tar; rm -vf $lrzdir.tar
```

Checks integrity, then decompresses the directory using all of the
processor cores for max speed, enables max verbosity output, unarchives
the resulting tarfile, and finally gets rid of the temporary tarfile. Uses the same kind of tempvar.


### lrzip build/install guide:

A quick guide on building and installing.

#### What you will need

 - gcc
 - bash or zsh
 - pthreads
 - tar
 - libc
 - libm
 - libz-dev
 - libbz2-dev
 - liblzo2-dev
 - coreutils
 - nasm on x86, not needed on x64
 - git if you want a repo-fresh copy
 - an OS with the usual *nix headers and libraries

#### Obtaining the source

Two different ways of doing this:

Stable: Packaged tarball that is known to work:

Go to <https://github.com/ckolivas/lrzip/releases> and download the `tar.gz`
file from the top. `cd` to the directory you downloaded it to, and use `tar xvzf lrzip-X.X.tar.gz`
to extract the files (don't forget to replace `X.X` with the correct version). Finally, cd
into the directory you just extracted.

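For example (a sketch only; substitute the actual release version for `X.X`):

```bash
# Assuming lrzip-X.X.tar.gz has already been downloaded to the current directory
tar xvzf lrzip-X.X.tar.gz   # extract the release tarball
cd lrzip-X.X                # enter the extracted source tree
```
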
Latest: `git clone -v https://github.com/ckolivas/lrzip.git; cd lrzip`

#### Build

```bash
./autogen.sh
./configure
make -j `nproc` # maxes out all cores
```

#### Install

Simple 'n Easy™: `sudo make install`

### lrzip 101:

|Command|Result|
|------|------|
|`lrztar directory`|An archive `directory.tar.lrz` compressed with **LZMA**.|
|`lrzuntar directory.tar.lrz`|A directory extracted from a `lrztar` archive.|
|`lrzip filename`|An archive `filename.lrz` compressed with **LZMA**, meaning slow compression and fast decompression.|
|`lrzip -z filename`|An archive `filename.lrz` compressed with **ZPAQ** that can give extreme compression, but takes a bit longer than forever to compress and decompress.|
|`lrzip -l filename`|An archive lightly compressed with **LZO**, meaning really, really fast compression and decompression.|
|`lrunzip filename.lrz`|Decompress `filename.lrz` to `filename`.|
|`lrz filename`|As per `lrzip` above but with gzip-compatible semantics (i.e. will be quiet and delete the original file).|
|`lrz -d filename.lrz`|As per `lrunzip` above but with gzip-compatible semantics (i.e. will be quiet and delete the original file).|

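A quick round trip using the wrapper scripts from the table above (the directory
name `mydir` is just a placeholder):

```bash
# Compress a directory into mydir.tar.lrz, then extract it again
lrztar mydir
lrzuntar mydir.tar.lrz
```
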
### lrzip internals

lrzip uses an extended version of [rzip](http://rzip.samba.org/) which does a first-pass long-distance
redundancy reduction. lrzip's modifications allow it to scale to accommodate various memory sizes.

Then, one of the following scenarios occurs:

 - Compressed
  - (default) **LZMA** gives excellent compression @ ~2x the speed of bzip2
  - **ZPAQ** gives extreme compression while taking forever
  - **LZO** gives insanely fast compression that can actually be faster than simply copying a large file
  - **GZIP** gives compression almost as fast as LZO but with better compression
  - **BZIP2** is a de facto Linux standard and hacker favorite which usually gives
  quite good compression (ZPAQ>LZMA>BZIP2>GZIP>LZO) while staying fairly fast (LZO>GZIP>BZIP2>LZMA>ZPAQ);
  in other words, a good middle-ground and a good choice overall
 - Uncompressed, in the words of the software's original author:

> Leaving it uncompressed and rzip prepared. This form improves substantially
> any compression performed on the resulting file in both size and speed (due to
> the nature of rzip preparation merging similar compressible blocks of data and
> creating a smaller file). By "improving" I mean it will either speed up the
> very slow compressors with minor detriment to compression, or greatly increase
> the compression of simple compression algorithms.
>
> (Con Kolivas, from the original lrzip README)


The only real disadvantages:

 - The main program, lrzip, only works on single files, and therefore
 requires the use of an lrztar wrapper to fake a complete archiver.
 - lrzip requires quite a bit of memory along with a modern processor
 to get the best performance in reasonable time. This usually means that
 it is somewhat unusable with less than 256 MB of RAM. However, decompression
 usually requires less RAM and can work on less powerful machines with much
 less RAM. On machines with less RAM, it may be a good idea to enable swap
 if you want to keep your operating system happy.
 - Piping output to and/or from STDIN and/or STDOUT works fine with both
 compression and decompression, but larger files compressed this way will
 likely end up being compressed less efficiently. Decompression doesn't
 really have any issues with piping, though.

One of the more unique features of lrzip is that it will try to use all of
the available RAM as best it can at all times to provide maximum benefit. This
is the default operating method, where it will create and use the single
largest memory window that will still fit in available memory without freezing
up the system. It does this by `mmap`ing the small portions of the file that
it is working on. However, it also has a unique "sliding `mmap`" feature, which
allows it to use compression windows that far exceed the size of your RAM if
the file you are compressing is large. It does this by using one large `mmap`
along with a smaller moving `mmap` buffer to track the part of the file that
is currently being examined. From a higher level, this can be seen as simply
emulating a single, large `mmap` buffer. The unfortunate thing about this
feature is that it can become extremely slow. The counter-argument to
being slower is that it will usually give a better compression factor.

The file `doc/README.benchmarks` has some performance examples to show
what kind of data lrzip is good with.


### FAQ

> Q: What kind of encryption does lrzip use?

> A: lrzip uses SHA2-512 repetitive hashing of the password along with a salt
> to provide a key which is used by AES-128 to do block encryption. Each block
> has more random salts added to the block key. The amount of initial hashing
> increases as the timestamp goes forward, in direct relation to Moore's law,
> which means that the amount of time required to encrypt/decrypt the file
> stays the same on a contemporary computer. It is virtually
> guaranteed that the same file encrypted with the same password will never
> be the same twice. The weakest link in this encryption mode by far is the
> password chosen by the user. There is currently no known attack or backdoor
> for this encryption mechanism, and there is absolutely no way of retrieving
> your password should you forget it.

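As a usage sketch (assuming lrzip's `-e` encryption option as described in its
man page; both compression and decompression prompt for the password):

```bash
# Compress and encrypt; decompression will ask for the same password
lrzip -e secretfile
lrunzip secretfile.lrz
```
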
> Q: How do I make a static build?

> A: `./configure --enable-static-bin`

> Q: I want the absolute maximum compression I can possibly get, what do I do?

> A: Try the command line options "-Uzp 1 -L 9". This uses all available ram and
> ZPAQ compression, and even uses a compression window larger than you have ram.
> The -p 1 option disables multithreading, which improves compression but at the
> expense of speed. Expect it to take many times longer.

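Put together as a full command (the filename is a placeholder):

```bash
# Unlimited window (-U), ZPAQ (-z), single thread (-p 1), maximum level (-L 9)
lrzip -Uzp 1 -L 9 hugefile
```
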
> Q: I want the absolute fastest decent compression I can possibly get.

> A: Try the command line option -l. This will use the lzo backend compression,
and level 7 compression (1 isn't much faster).

> Q: How much slower is the unlimited mode?

> A: It depends on two things. First, just how much larger than your ram the file
is, as the bigger the difference, the slower it will be. The second is how much
redundant data there is. The more there is, the slower, but ultimately the
better the compression. Why isn't it on by default? If the compression window is
a LOT larger than ram, with a lot of redundant information it can be drastically
slower. I may revisit this possibility in the future if I can make it any
faster.

> Q: Can I use your tool for even more compression than lzma offers?

> A: Yes, the rzip preparation of files makes them more compressible by most
other compression techniques I have tried. Using the -n option will generate
a .lrz file smaller than the original which should be more compressible, and
since it is smaller it will compress faster than it otherwise would have.

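For example (a sketch; `xz` here is just a stand-in for whatever external
compressor you want to try on the rzip-prepared output):

```bash
# -n: rzip-prepare only, no back-end compression, producing bigfile.lrz
lrzip -n bigfile
# then compress the rzip-prepared file with an external tool of your choice
xz -9 bigfile.lrz
```
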
> Q: 32bit?

> A: 32bit machines have a limit of 2GB sized compression windows due to
userspace limitations on mmap and malloc, so even if you have much more ram
you will not be able to use compression windows larger than 2GB. Also you
may be unable to decompress files compressed on 64bit machines which have
used windows larger than 2GB.

> Q: How about 64bit?

> A: 64bit machines with their ability to address massive amounts of ram will
excel with lrzip due to being able to use compression windows limited only in
size by the amount of physical ram.

> Q: Other operating systems?

> A: The code is POSIXy with GNU extensions. Patches are welcome. Version 0.43+
should build on MacOSX 10.5+.

> Q: Does it work on stdin/stdout?

> A: Yes it does. Compression and decompression work well to/from STDIN/STDOUT.
However, because lrzip does multiple passes on the data, it has to store a
large amount in ram before it dumps it to STDOUT (and vice versa), so it
is unable to work with the massive compression windows regular operation
provides. Thus the compression afforded on files larger than approximately
25% of RAM size will be less efficient (though still benefiting compared to
traditional compression formats).

233
234> Q: I have another compression format that is even better than zpaq, can you
235use that?
236
237> A: You can use it yourself on rzip prepared files (see above). Alternatively
238if the source code is compatible with the GPL license it can be added to the
239lrzip source code. Libraries with functions similar to compress() and
240decompress() functions of zlib would make the process most painless. Please
241tell me if you have such a library so I can include it :)
242
243> Q: What's this "Starting lzma back end compression thread..." message?
244
245> A: While I'm a big fan of progress percentage being visible, unfortunately
246lzma compression can't currently be tracked when handing over 100+MB chunks
247over to the lzma library. Therefore you'll see progress percentage until
248each chunk is handed over to the lzma library.
249
250> Q: What's this "lzo testing for incompressible data" message?
251
252> A: Other compression is much slower, and lzo is the fastest. To help speed up
253the process, lzo compression is performed on the data first to test that the
254data is at all compressible. If a small block of data is not compressible, it
255tests progressively larger blocks until it has tested all the data (if it fails
256to compress at all). If no compressible data is found, then the subsequent
257compression is not even attempted. This can save a lot of time during the
258compression phase when there is incompressible dat
259> A: Theoretically it may be
260possible that data is compressible by the other backend (zpaq, lzma etc) and not
261at all by lzo, but in practice such data achieves only minuscule amounts of
262compression which are not worth pursuing. Most of the time it is clear one way
263or the other that data is compressible or not. If you wish to disable this
264test and force it to try compressing it anyway, use -T.
265
> Q: I have truckloads of ram so I can compress files much better, but can my
generated file be decompressed on machines with less ram?

> A: Yes. Ram requirements for decompression go up only by the -L compression
option with lzma and are never anywhere near as large as the compression
requirements. However, if you're on 64bit and you use a compression window
greater than 2GB, it might not be possible to decompress it on 32bit machines.

> Q: Why are you including bzip2 compression?

> A: To maintain a similar compression format to the original rzip (although the
other modes are more useful).

> Q: What about multimedia?

> A: Most multimedia is already in a heavily compressed "lossy" format which by
its very nature has very little redundancy. This means that there is not
much that can actually be compressed. If your video/audio/picture is in a
high bitrate, there will be more redundancy than a low bitrate one, making it
more suitable to compression. None of the compression techniques in lrzip are
optimised for this sort of data. However, the nature of rzip preparation
means that you'll still get better compression than most normal compression
algorithms give you if you have very large files. ISO images of dvds for
example are best compressed directly instead of individual .VOB files. ZPAQ is
the only compression format that can do any significant compression of
multimedia.

> Q: Is this multithreaded?

> A: As of version 0.540, it is HEAVILY multithreaded with the back end
compression and decompression phase, and will continue to process the rzip
pre-processing phase, so when using one of the more CPU intensive backend
compressions like lzma or zpaq, SMP machines will show massive speed
improvements. Lrzip will detect the number of CPUs to use, but it can be
overridden with the -p option if the slightly better compression is desired
more than speed. -p 1 will give the best compression but also be the slowest.

> Q: This uses heaps of memory, can I make it use less?

> A: Well you can by setting -w to the lowest value (1), but the huge use of
memory is what makes the compression better than ordinary compression
programs, so it defeats the point. You'll still derive benefit with -w 1 but
not as much.

> Q: What CFLAGS should I use?

> A: With a recent enough compiler (gcc > 4), set both CFLAGS and CXXFLAGS to:
	-O2 -march=native -fomit-frame-pointer

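For example, when building from source (flags taken from the answer above):

```bash
# Pass the optimisation flags to both the C and C++ compilers before building
CFLAGS="-O2 -march=native -fomit-frame-pointer" \
CXXFLAGS="-O2 -march=native -fomit-frame-pointer" \
./configure
make -j `nproc`
```
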
> Q: What compiler does this work with?

> A: It has been tested on gcc, ekopath and the intel compiler successfully
previously. Whether the commercial compilers help or not, I could not tell you.

> Q: What codebase are you basing this on?

> A: rzip v2.1 and lzma sdk920, but it should be possible to stay in sync with
each of these in the future.

> Q: Do we really need yet another compression format?

> A: It's not really a new one at all; simply a reimplementation of a few very
good performing ones that will scale with memory and file size.

> Q: How do you use lrzip yourself?

> A: Three basic uses. I compress large files currently on my drive with the
-l option since it is so quick to get a space saving. When archiving data for
permanent storage I compress it with the default options. When compressing
small files for distribution I use the -z option for the smallest possible
size.

> Q: I found a file that compressed better with plain lzma. How can that be?

> A: When the file is more than 5 times the size of the compression window
you have available, the efficiency of rzip preparation drops off as a means
of getting better compression. Eventually, when the file is large enough,
plain lzma compression will get better ratios. The lrzip compression will be
a lot faster though. The only way around this is to use as large compression
windows as possible with the -U option.

> Q: Can I use swapspace as ram for lrzip with a massive window?

> A: It will indirectly do this with -U (unlimited) mode enabled. This mode will
make the compression window as big as the file itself no matter how big it is,
but it will slow down proportionately more the bigger the file is than your ram.

> Q: Why do you nice it to +19 by default? Can I speed up the compression by
changing the nice value?

> A: This is a common misconception about what nice values do. They only tell the
cpu process scheduler how to prioritise workloads, and if your application is
the _only_ thing running it will be no faster at nice -20 nor will it be any
slower at +19.

> Q: What is the LZO Testing option, -T?

> A: LZO testing is normally performed for the slower back-end compression of LZMA
and ZPAQ. The reasoning is that if it is completely incompressible by LZO then
it will also be incompressible by them. Thus, if a block fails to be compressed
by the very fast LZO, lrzip will not attempt to compress that block with the
slower compressor, thereby saving time. If this option is enabled, it will
bypass the LZO testing and attempt to compress each block regardless.

> Q: Compression and decompression progress on large archives slows down and
speeds up. There's also a jump in the percentage at the end?

> A: Yes, that's the nature of the compression/decompression mechanism. The jump
is because the rzip preparation makes the amount of data much smaller than what
the compression backend (lzma) actually needs to compress.

> Q: Tell me about patented compression algorithms, GPL, lawyers and copyright.

> A: No

> Q: I receive an error "LZMA ERROR: 2. Try a smaller compression window."
   What does this mean?

> A: LZMA requests large amounts of memory. When a higher compression window is
   used, there may not be enough contiguous memory for LZMA. LZMA may request
   up to 25% of TOTAL ram depending on compression level. If contiguous blocks
   of memory are not free, LZMA will return an error. This is not a fatal
   error, and a backup mode of compression will be used.

> Q: Where can I get more information about the internals of LZMA?

> A: See http://www.7-zip.org and http://www.p7zip.org. Also, see the file
   ./lzma/C/lzmalib.h which explains the LZMA properties used and the LZMA
   memory requirements and computation.

> Q: This version is much slower than the old version?

> A: Make sure you have set CFLAGS and CXXFLAGS. An unoptimised build will be
almost 3 times slower.

> Q: Why not update to the latest version of libzpaq?

> A: For reasons that are unclear, the later versions of libzpaq create
corrupt archives when included with lrzip.

#### LIMITATIONS

Due to mmap limitations the maximum size a window can be set to is currently
2GB on 32bit unless the -U option is specified. Files generated on 64 bit
machines with windows >2GB in size might not be decompressible on 32bit
machines. Large files might not decompress on machines with less RAM if SWAP is
disabled.

#### BUGS:

Probably lots. Report any you spot at <https://github.com/ckolivas/lrzip/issues> :D
Any known ones should be documented in the file BUGS.


#### Backends:

rzip:
<http://rzip.samba.org/>

lzo:
<http://www.oberhumer.com/opensource/lzo/>

lzma:
<http://www.7-zip.org/>

zpaq:
<http://mattmahoney.net/dc/>

### Thanks (CONTRIBUTORS)

|Person(s)|Thanks for|
|---|---|
|`Andrew Tridgell`|`rzip`|
|`Markus Oberhumer`|`lzo`|
|`Igor Pavlov`|`lzma`|
|`Jean-Loup Gailly & Mark Adler`|`zlib`|
|***`Con Kolivas`***|***Original Code, binding all of this together, managing the project, original `README`***|
|`Christian Leber`|`lzma` compatibility layer|
|`Michael J Cohen`|Darwin/OSX support|
|`Lasse Collin`|fixes to `LZMALib.cpp` and `Makefile.in`|
|Everyone else who coded along the way (add yourself where appropriate if that's you)|Miscellaneous Coding|
|**`Peter Hyman`**|Most of the `0.19` to `0.24` changes|
|`^^^^^^^^^^^`|Updating the multithreaded `lzma` lib|
|`^^^^^^^^^^^`|All sorts of other features|
|`René Rhéaume`|Fixing executable stacks|
|`Ed Avis`|Various fixes|
|`Matt Mahoney`|`zpaq` integration code|
|`Jukka Laurila`|Additional Darwin/OSX support|
|`George Makrydakis`|`lrztar` wrapper|
|`Ulrich Drepper`|*special* implementation of md5|
|**`Michael Blumenkrantz`**|New config tools|
|`^^^^^^^^^^^^^^^^^^^^`|`liblrzip`|
|Authors of `PolarSSL`|Encryption code|
|`Serge Belyshev`|Extensive help, advice, and patches to implement secure encryption|
|`Jari Aalto`|Fixing typos, esp. in code|
|`Carlo Alberto Ferraris`|Code cleanup|
|`Peter Hyman`|Additional documentation|
|`Haneef Mubarak`|Cleanup, Rewrite, and GH Markdown of `README` --> `README.md`|

Persons above are listed in chronological order of first contribution to **lrzip**. Person(s) with names in **bold** have multiple major contributions, person(s) with names in *italics* have made massive contributions, person(s) with names in ***both*** have made innumerable massive contributions.

#### README Authors

Con Kolivas (`ckolivas` on GitHub) <kernel@kolivas.org>
Fri, 10 June 2016: README

Also documented by
Peter Hyman <pete@peterhyman.com>
Sun, 04 Jan 2009: README

Mostly Rewritten + GFMified:
Haneef Mubarak (haneefmubarak on GitHub)
Sun/Mon Sep 01-02 2013: README.md