# compress

This package provides various compression algorithms.

* [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and decompression in pure Go (a quick usage sketch follows this list).
* [S2](https://github.com/klauspost/compress/tree/master/s2#s2-compression) is a high performance replacement for Snappy.
* Optimized [deflate](https://godoc.org/github.com/klauspost/compress/flate) packages which can be used as a drop-in replacement for [gzip](https://godoc.org/github.com/klauspost/compress/gzip), [zip](https://godoc.org/github.com/klauspost/compress/zip) and [zlib](https://godoc.org/github.com/klauspost/compress/zlib).
* [huff0](https://github.com/klauspost/compress/tree/master/huff0) and [FSE](https://github.com/klauspost/compress/tree/master/fse) implementations for raw entropy encoding.
* [pgzip](https://github.com/klauspost/pgzip) is a separate package that provides a very fast parallel gzip implementation.
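
As a quick, minimal sketch of the zstd package (buffer-to-buffer; errors ignored for brevity, see the zstd README for streaming and options):

```
package main

import (
	"fmt"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// A nil writer/reader is fine when only EncodeAll/DecodeAll are used.
	enc, _ := zstd.NewWriter(nil)
	defer enc.Close()
	compressed := enc.EncodeAll([]byte("hello zstd"), nil)

	dec, _ := zstd.NewReader(nil)
	defer dec.Close()
	plain, err := dec.DecodeAll(compressed, nil)
	fmt.Println(string(plain), err)
}
```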

[![Build Status](https://travis-ci.org/klauspost/compress.svg?branch=master)](https://travis-ci.org/klauspost/compress)
[![Sourcegraph Badge](https://sourcegraph.com/github.com/klauspost/compress/-/badge.svg)](https://sourcegraph.com/github.com/klauspost/compress?badge)
[![fuzzit](https://app.fuzzit.dev/badge?org_id=klauspost)](https://fuzzit.dev)

# changelog

* Dec 29, 2019: (v1.9.5) zstd: 10-20% faster block compression. [#199](https://github.com/klauspost/compress/pull/199)
* Dec 29, 2019: [zip](https://godoc.org/github.com/klauspost/compress/zip) package updated with latest Go features
* Dec 29, 2019: zstd: Single segment flag conditions tweaked. [#197](https://github.com/klauspost/compress/pull/197)
* Dec 18, 2019: s2: Faster compression when ReadFrom is used. [#198](https://github.com/klauspost/compress/pull/198)
* Dec 10, 2019: s2: Fix repeat length output when just above the 16MB limit.
* Dec 10, 2019: zstd: Add function to get decoder as io.ReadCloser. [#191](https://github.com/klauspost/compress/pull/191)
* Dec 3, 2019: (v1.9.4) S2: limit max repeat length. [#188](https://github.com/klauspost/compress/pull/188)
* Dec 3, 2019: Add [WithNoEntropyCompression](https://godoc.org/github.com/klauspost/compress/zstd#WithNoEntropyCompression) to zstd [#187](https://github.com/klauspost/compress/pull/187)
* Dec 3, 2019: Reduce memory use for tests. Check for leaked goroutines.
* Nov 28, 2019 (v1.9.3) Fewer allocations in stateless deflate.
* Nov 28, 2019: 5-20% faster huff0 decode. Impacts zstd as well. [#184](https://github.com/klauspost/compress/pull/184)
* Nov 12, 2019 (v1.9.2) Added [Stateless Compression](#stateless-compression) for gzip/deflate.
* Nov 12, 2019: Fixed zstd decompression of large single blocks. [#180](https://github.com/klauspost/compress/pull/180)
* Nov 11, 2019: Set default [s2c](https://github.com/klauspost/compress/tree/master/s2#commandline-tools) block size to 4MB.
* Nov 11, 2019: Reduce inflate memory use by 1KB.
* Nov 10, 2019: Fewer allocations in deflate bit writer.
* Nov 10, 2019: Fix inconsistent error returned by zstd decoder.
* Oct 28, 2019 (v1.9.1) zstd: Fix crash when compressing blocks. [#174](https://github.com/klauspost/compress/pull/174)
* Oct 24, 2019 (v1.9.0) zstd: Fix rare data corruption [#173](https://github.com/klauspost/compress/pull/173)
* Oct 24, 2019 zstd: Fix huff0 out of buffer write [#171](https://github.com/klauspost/compress/pull/171) and always return errors [#172](https://github.com/klauspost/compress/pull/172)
* Oct 10, 2019: Big deflate rewrite, 30-40% faster with better compression [#105](https://github.com/klauspost/compress/pull/105)
* Oct 10, 2019: (v1.8.6) zstd: Allow partial reads to get flushed data. [#169](https://github.com/klauspost/compress/pull/169)
* Oct 3, 2019: Fix inconsistent results on broken zstd streams.
* Sep 25, 2019: Added `-rm` (remove source files) and `-q` (no output except errors) to `s2c` and `s2d` [commands](https://github.com/klauspost/compress/tree/master/s2#commandline-tools)
* Sep 16, 2019: (v1.8.4) Add `s2c` and `s2d` [commandline tools](https://github.com/klauspost/compress/tree/master/s2#commandline-tools).
* Sep 10, 2019: (v1.8.3) Fix s2 decoder [Skip](https://godoc.org/github.com/klauspost/compress/s2#Reader.Skip).
* Sep 7, 2019: zstd: Added [WithWindowSize](https://godoc.org/github.com/klauspost/compress/zstd#WithWindowSize), contributed by [ianwilkes](https://github.com/ianwilkes).
* Sep 5, 2019: (v1.8.2) Add [WithZeroFrames](https://godoc.org/github.com/klauspost/compress/zstd#WithZeroFrames) which adds full zero payload block encoding option.
* Sep 5, 2019: Lazy initialization of zstandard predefined en/decoder tables.
* Aug 26, 2019: (v1.8.1) S2: 1-2% compression increase in "better" compression mode.
* Aug 26, 2019: zstd: Check maximum size of Huffman 1X compressed literals while decoding.
* Aug 24, 2019: (v1.8.0) Added [S2 compression](https://github.com/klauspost/compress/tree/master/s2#s2-compression), a high performance replacement for Snappy.
* Aug 21, 2019: (v1.7.6) Fixed minor issues found by fuzzer. One could lead to zstd not decompressing.
* Aug 18, 2019: Add [fuzzit](https://fuzzit.dev/) continuous fuzzing.
* Aug 14, 2019: zstd: Skip incompressible data 2x faster. [#147](https://github.com/klauspost/compress/pull/147)
* Aug 4, 2019 (v1.7.5): Better literal compression. [#146](https://github.com/klauspost/compress/pull/146)
* Aug 4, 2019: Faster zstd compression. [#143](https://github.com/klauspost/compress/pull/143) [#144](https://github.com/klauspost/compress/pull/144)
* Aug 4, 2019: Faster zstd decompression. [#145](https://github.com/klauspost/compress/pull/145) [#143](https://github.com/klauspost/compress/pull/143) [#142](https://github.com/klauspost/compress/pull/142)
* July 15, 2019 (v1.7.4): Fix double EOF block in rare cases on zstd encoder.
* July 15, 2019 (v1.7.3): Minor speedup/compression increase in default zstd encoder.
* July 14, 2019: zstd decoder: Fix decompression error on multiple uses with mixed content.
* July 7, 2019 (v1.7.2): Snappy update, zstd decoder potential race fix.
* June 17, 2019: zstd decompression bugfix.
* June 17, 2019: Fix 32-bit builds.
* June 17, 2019: Easier use in modules (fewer dependencies).
* June 9, 2019: New stronger "default" [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression mode. Matches zstd default compression ratio.
* June 5, 2019: 20-40% throughput improvement in [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression and better compression.
* June 5, 2019: deflate/gzip compression: Reduce memory usage of lower compression levels.
* June 2, 2019: Added [zstandard](https://github.com/klauspost/compress/tree/master/zstd#zstd) compression!
* May 25, 2019: deflate/gzip: 10% faster bit writer, mostly visible in lower levels.
* Apr 22, 2019: [zstd](https://github.com/klauspost/compress/tree/master/zstd#zstd) decompression added.
* Aug 1, 2018: Added [huff0 README](https://github.com/klauspost/compress/tree/master/huff0#huff0-entropy-compression).
* Jul 8, 2018: Added [Performance Update 2018](#performance-update-2018) below.
* Jun 23, 2018: Merged [Go 1.11 inflate optimizations](https://go-review.googlesource.com/c/go/+/102235). Go 1.9 is now required. Backwards compatible version tagged with [v1.3.0](https://github.com/klauspost/compress/releases/tag/v1.3.0).
* Apr 2, 2018: Added [huff0](https://godoc.org/github.com/klauspost/compress/huff0) en/decoder. Experimental for now, API may change.
* Mar 4, 2018: Added [FSE Entropy](https://godoc.org/github.com/klauspost/compress/fse) en/decoder. Experimental for now, API may change.
* Nov 3, 2017: Add compression [Estimate](https://godoc.org/github.com/klauspost/compress#Estimate) function.
* May 28, 2017: Reduce allocations when resetting decoder.
* Apr 02, 2017: Change back to official crc32, since changes were merged in Go 1.7.
* Jan 14, 2017: Reduce stack pressure due to array copies. See [Issue #18625](https://github.com/golang/go/issues/18625).
* Oct 25, 2016: Levels 2-4 have been rewritten and now offer significantly better performance than before.
* Oct 20, 2016: Port zlib changes from Go 1.7 to fix zlib writer issue. Please update.
* Oct 16, 2016: Go 1.7 changes merged. Apples to apples, this package is a few percent faster, but has a significantly better balance between speed and compression per level.
* Mar 24, 2016: Always attempt Huffman encoding on level 4-7. This improves base 64 encoded data compression.
* Mar 24, 2016: Small speedup for level 1-3.
* Feb 19, 2016: Faster bit writer, level -2 is 15% faster, level 1 is 4% faster.
* Feb 19, 2016: Handle small payloads faster in level 1-3.
* Feb 19, 2016: Added faster level 2 + 3 compression modes.
* Feb 19, 2016: [Rebalanced compression levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/), so there is a more even progression in terms of compression. New default level is 5.
* Feb 14, 2016: Snappy: Merge upstream changes.
* Feb 14, 2016: Snappy: Fix aggressive skipping.
* Feb 14, 2016: Snappy: Update benchmark.
* Feb 13, 2016: Deflate: Fixed assembler problem that could lead to sub-optimal compression.
* Feb 12, 2016: Snappy: Added AMD64 SSE 4.2 optimizations to matching, which makes easy-to-compress material run faster. Typical speedup is around 25%.
* Feb 9, 2016: Added Snappy package fork. This version is 5-7% faster, much more on hard-to-compress content.
* Jan 30, 2016: Optimize level 1 to 3 by not considering static dictionary or storing uncompressed. ~4-5% speedup.
* Jan 16, 2016: Optimization on deflate level 1,2,3 compression.
* Jan 8 2016: Merge [CL 18317](https://go-review.googlesource.com/#/c/18317): fix reading, writing of zip64 archives.
* Dec 8 2015: Make level 1 and -2 deterministic even if write size differs.
* Dec 8 2015: Split encoding functions, so hashing and matching can potentially be inlined. 1-3% faster on AMD64. 5% faster on other platforms.
* Dec 8 2015: Fixed rare [one-byte out-of-bounds read](https://github.com/klauspost/compress/issues/20). Please update!
* Nov 23 2015: Optimization on token writer. ~2-4% faster. Contributed by [@dsnet](https://github.com/dsnet).
* Nov 20 2015: Small optimization to bit writer on 64 bit systems.
* Nov 17 2015: Fixed out-of-bounds errors if the underlying Writer returned an error. See [#15](https://github.com/klauspost/compress/issues/15).
* Nov 12 2015: Added [io.WriterTo](https://golang.org/pkg/io/#WriterTo) support to gzip/inflate.
* Nov 11 2015: Merged [CL 16669](https://go-review.googlesource.com/#/c/16669/4): archive/zip: enable overriding (de)compressors per file
* Oct 15 2015: Added skipping of uncompressible data. Random data speeds up >5x.

# deflate usage

* [High Throughput Benchmark](http://blog.klauspost.com/go-gzipdeflate-benchmarks/).
* [Small Payload/Webserver Benchmarks](http://blog.klauspost.com/gzip-performance-for-go-webservers/).
* [Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).
* [Re-balancing Deflate Compression Levels](https://blog.klauspost.com/rebalancing-deflate-compression-levels/)

The packages are drop-in replacements for standard libraries. Simply replace the import path to use them:

| old import         | new import                              |
|--------------------|-----------------------------------------|
| `compress/gzip`    | `github.com/klauspost/compress/gzip`    |
| `compress/zlib`    | `github.com/klauspost/compress/zlib`    |
| `archive/zip`      | `github.com/klauspost/compress/zip`     |
| `compress/flate`   | `github.com/klauspost/compress/flate`   |

You may also be interested in [pgzip](https://github.com/klauspost/pgzip), a drop-in replacement for gzip that supports multithreaded compression of big files, and the optimized [crc32](https://github.com/klauspost/crc32) package used by these packages.

The packages contain the same functionality as the standard library, so you can use their godoc as reference: [gzip](http://golang.org/pkg/compress/gzip/), [zip](http://golang.org/pkg/archive/zip/), [zlib](http://golang.org/pkg/compress/zlib/), [flate](http://golang.org/pkg/compress/flate/).

Currently there is only a minor speedup on decompression (mostly CRC32 calculation).
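
As a minimal sketch (error handling trimmed), usage is identical to the standard library; only the import path changes:

```
package main

import (
	"bytes"
	"io"
	"os"

	"github.com/klauspost/compress/gzip" // drop-in for "compress/gzip"
)

func main() {
	var buf bytes.Buffer

	// Compress with the same API as the standard library.
	zw, _ := gzip.NewWriterLevel(&buf, gzip.BestSpeed)
	zw.Write([]byte("hello gzip"))
	zw.Close()

	// Decompress.
	zr, _ := gzip.NewReader(&buf)
	io.Copy(os.Stdout, zr)
	zr.Close()
}
```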

# Stateless compression

This package offers stateless compression as a special option for gzip/deflate.
It will do compression but without maintaining any state between Write calls.

This means there will be no memory kept between Write calls, but compression and speed will be suboptimal.

This is only relevant in cases where you expect to run many thousands of compressors concurrently,
but with very little activity. This is *not* intended for regular web servers serving individual requests.

Because of this, the size of actual Write calls will affect output size.

In gzip, specify level `-3` / `gzip.StatelessCompression` to enable.

For direct deflate use, NewStatelessWriter and StatelessDeflate are available. See the [documentation](https://godoc.org/github.com/klauspost/compress/flate#NewStatelessWriter).
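
A minimal sketch of the direct route, assuming the `NewStatelessWriter(io.Writer) io.WriteCloser` shape from the linked documentation:

```
package main

import (
	"bytes"

	"github.com/klauspost/compress/flate"
)

func main() {
	var buf bytes.Buffer

	// Each Write is compressed independently; no state is kept between calls.
	w := flate.NewStatelessWriter(&buf)
	w.Write([]byte("first chunk "))
	w.Write([]byte("second chunk"))
	w.Close()
}
```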

A `bufio.Writer` can of course be used to control write sizes. For example, to use a 4KB buffer:

```
	// replace 'ioutil.Discard' with your output.
	gzw, err := gzip.NewWriterLevel(ioutil.Discard, gzip.StatelessCompression)
	if err != nil {
		return err
	}
	defer gzw.Close()

	w := bufio.NewWriterSize(gzw, 4096)
	defer w.Flush()

	// Write to 'w'
```

This will only use up to 4KB in memory when the writer is idle.

Compression is almost always worse than the fastest compression level
and each write will allocate (a little) memory.

# Performance Update 2018

It has been a while since we last looked at the speed of this package compared to the standard library, so I thought I would re-do my tests and give some overall recommendations based on the current state. All benchmarks have been performed with Go 1.10 on my desktop Intel(R) Core(TM) i7-2600 CPU @3.40GHz. Since I last ran the tests, I have gotten more RAM, which means tests with big files are no longer limited by my SSD.

The raw results are in my [updated spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing). Due to cgo changes and upstream updates I could not get the cgo version of gzip to compile. Instead I included the [zstd](https://github.com/datadog/zstd) cgo implementation. If I get cgo gzip to work again, I might replace the results in the sheet.

The columns to take note of are: *MB/s* - the throughput. *Reduction* - the data size reduction in percent of the original. *Rel Speed* - relative speed compared to the standard library at the same level. *Smaller* - how many percent smaller the compressed output is compared to stdlib. Negative means the output was bigger. *Loss* means the loss (or gain) in compression as a percentage difference of the input.

The `gzstd` (standard library gzip) and `gzkp` (this package gzip) only use one CPU core. [`pgzip`](https://github.com/klauspost/pgzip) and [`bgzf`](https://github.com/biogo/hts/tree/master/bgzf) use all 4 cores. [`zstd`](https://github.com/DataDog/zstd) uses one core, and is a beast (but not Go, yet).


## Overall differences

There appears to be a roughly 5-10% speed advantage over the standard library when comparing at similar compression levels.

The biggest difference you will see is the result of [re-balancing](https://blog.klauspost.com/rebalancing-deflate-compression-levels/) the compression levels. I wanted my library to give a smoother transition between the compression levels than the standard library.

This package attempts to provide a smoother transition, where "1" takes a lot of shortcuts, "5" is the reasonable trade-off, "9" gives the best compression, and the levels in between give something reasonable in between. The standard library has big differences between levels 1-4, while levels 5-9 show no significant gains - often spending a lot more time than can be justified by the achieved compression.

There are links to all the test data in the [spreadsheet](https://docs.google.com/spreadsheets/d/1nuNE2nPfuINCZJRMt6wFWhKpToF95I47XjSsc-1rbPQ/edit?usp=sharing) in the top left field on each tab.

## Web Content

This test set aims to emulate typical use in a web server. The test set is 4GB of data in 53k files, and is a mixture of (mostly) HTML, JS and CSS.

Since level 1 and 9 are close to being the same code as the standard library, results at those levels are quite close. But looking at the levels in-between, the differences are quite big.

Looking at level 6, this package is 88% faster, but will output about 6% more data. For a web server, this means you can serve 88% more data, but have to pay for 6% more bandwidth. You can draw your own conclusions on what would be the most expensive for your case.

## Object files

This test is for typical data files stored on a server. In this case it is a collection of Go precompiled objects. They are very compressible.

The picture is similar to the web content, but with small differences since this is very compressible. Levels 2-3 offer good speed, but sacrifice quite a bit of compression.

The standard library seems suboptimal on levels 3 and 4 - offering both worse compression and speed than levels 6 and 7 of this package respectively.

## Highly Compressible File

This is a JSON file with very high redundancy. The reduction starts at 95% on level 1, so in real-life terms we are dealing with something like a highly redundant stream of data, etc.

It is definitely visible that we are dealing with specialized content here, so the results are very scattered. This package does not do very well at levels 1-4, but picks up significantly at level 5, with levels 7 and 8 offering great speed for the achieved compression.

So if you know your content is extremely compressible you might want to go slightly higher than the defaults. The standard library has a huge gap between levels 3 and 4 in terms of speed (2.75x slowdown), so it offers little "middle ground".

## Medium-High Compressible

This is a pretty common test corpus: [enwik9](http://mattmahoney.net/dc/textdata.html). It contains the first 10^9 bytes of the English Wikipedia dump on Mar. 3, 2006. This is a very good test of typical text-based compression and more data-heavy streams.

We see a similar picture here as in "Web Content". On equal levels some compression is sacrificed for more speed. Level 5 seems to be the best trade-off between speed and size, beating stdlib level 3 in both.

## Medium Compressible

I will combine two test sets, one [10GB file set](http://mattmahoney.net/dc/10gb.html) and a VM disk image (~8GB). Both contain different data types and represent a typical backup scenario.

The most notable thing is how quickly the standard library drops to very low compression speeds around level 5-6 without any big gains in compression. Since this type of data is fairly common, this does not seem like good behavior.


## Un-compressible Content

This is mainly a test of how good the algorithms are at detecting un-compressible input. The standard library only offers this feature with very conservative settings at level 1. Obviously there is no reason for the algorithms to try to compress input that cannot be compressed. The only downside is that it might skip some compressible data on false detections.


# linear time compression (huffman only)

This compression library adds a special compression level, named `HuffmanOnly`, which allows near linear time compression. This is done by completely disabling matching of previous data, and only reducing the number of bits used to represent each character.

This means that often-used characters, like 'e' and ' ' (space) in text, use the fewest bits to represent, and rare characters like '¤' take more bits to represent. For more information see [wikipedia](https://en.wikipedia.org/wiki/Huffman_coding) or this nice [video](https://youtu.be/ZdooBTdW5bM).

Since this type of compression has much less variance, the compression speed is mostly unaffected by the input data, and is usually more than *180MB/s* for a single core.

The downside is that the compression ratio is usually considerably worse than even the fastest conventional compression. The compression ratio can never be better than 8:1 (12.5%).

The linear time compression can be used as a "better than nothing" mode, where you cannot risk the encoder slowing down on some content. For comparison, the size of the "Twain" text is *233460 bytes* (+29% vs. level 1) and encode speed is 144MB/s (4.5x level 1). So in this case you trade a roughly 30% size increase for a roughly 4.5x speedup.

For more information see my blog post on [Fast Linear Time Compression](http://blog.klauspost.com/constant-time-gzipzip-compression/).

This is implemented on Go 1.7 as "Huffman Only" mode, though not exposed for gzip.
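
A minimal sketch of enabling this mode through this package's flate writer:

```
package main

import (
	"bytes"

	"github.com/klauspost/compress/flate"
)

func main() {
	var buf bytes.Buffer

	// HuffmanOnly disables match searching entirely; only entropy coding is applied.
	w, err := flate.NewWriter(&buf, flate.HuffmanOnly)
	if err != nil {
		panic(err)
	}
	w.Write([]byte("some fairly repetitive text, text, text"))
	w.Close()
}
```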


# snappy package

The standard snappy package has now been improved. This repo contains a copy of the snappy repo.

I would advise using the standard package: https://github.com/golang/snappy


# license

This code is licensed under the same conditions as the original Go code. See LICENSE file.