1. Compression algorithm (deflate)

The deflation algorithm used by gzip (also zip and zlib) is a variation of
LZ77 (Lempel-Ziv 1977, see reference below). It finds duplicated strings in
the input data. The second occurrence of a string is replaced by a
pointer to the previous string, in the form of a pair (distance,
length). Distances are limited to 32K bytes, and lengths are limited
to 258 bytes. When a string does not occur anywhere in the previous
32K bytes, it is emitted as a sequence of literal bytes. (In this
description, `string' must be taken as an arbitrary sequence of bytes,
and is not restricted to printable characters.)

Literals or match lengths are compressed with one Huffman tree, and
match distances are compressed with another tree. The trees are stored
in a compact form at the start of each block. The blocks can have any
size (except that the compressed data for one block must fit in
available memory). A block is terminated when deflate() determines that
it would be useful to start another block with fresh trees. (This is
somewhat similar to the behavior of LZW-based _compress_.)

Duplicated strings are found using a hash table. All input strings of
length 3 are inserted in the hash table. A hash index is computed for
the next 3 bytes. If the hash chain for this index is not empty, all
strings in the chain are compared with the current input string, and
the longest match is selected.

The hash chains are searched starting with the most recent strings, to
favor small distances and thus take advantage of the Huffman encoding.
The hash chains are singly linked. There are no deletions from the
hash chains; the algorithm simply discards matches that are too old.

To avoid a worst-case situation, very long hash chains are arbitrarily
truncated at a certain length, determined by a runtime option (the level
parameter of deflateInit). So deflate() does not always find the longest
possible match but generally finds a match which is long enough.

deflate() also defers the selection of matches with a lazy evaluation
mechanism. After a match of length N has been found, deflate() searches for
a longer match at the next input byte. If a longer match is found, the
previous match is truncated to a length of one (thus producing a single
literal byte) and the process of lazy evaluation begins again. Otherwise,
the original match is kept, and the next match search is attempted only N
steps later.

The lazy match evaluation is also subject to a runtime parameter. If
the current match is long enough, deflate() reduces the search for a longer
match, thus speeding up the whole process. If compression ratio is more
important than speed, deflate() attempts a complete second search even if
the first match is already long enough.

The lazy match evaluation is not performed for the fastest compression
modes (level parameter 1 to 3). For these fast modes, new strings
are inserted in the hash table only when no match was found, or
when the match is not too long. This degrades the compression ratio
but saves time since there are both fewer insertions and fewer searches.

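Here is a rough sketch in C of the match search just described. It is not the
actual deflate.c code: the names (head, prev, hash3, longest_match), the hash
function and the chain cutoff are made up for illustration. Only the shape of
it (newest-first singly linked chains, the 32K distance limit, the 258-byte
length cap, no deletions) follows the description above.

/* Sketch of the match search described above.  Not the actual deflate.c
   code: the names, the hash function and the sizes here are illustrative. */

#include <stddef.h>

#define WSIZE     32768u            /* 32K sliding window                 */
#define MAX_MATCH 258               /* longest match deflate can emit     */
#define MIN_MATCH 3                 /* strings of length 3 are hashed     */
#define HASH_SIZE (1u << 15)
#define MAX_CHAIN 128               /* chain cutoff, set by the level     */
#define NIL       ((size_t)-1)      /* empty chain marker                 */

static size_t head[HASH_SIZE];      /* most recent position per hash      */
static size_t prev[WSIZE];          /* singly linked chains, newest first */

/* Hash of the next MIN_MATCH bytes; any reasonable 3-byte mix will do.   */
static unsigned hash3(const unsigned char *p)
{
    return ((p[0] << 10) ^ (p[1] << 5) ^ p[2]) & (HASH_SIZE - 1);
}

/* Link the string starting at pos into its hash chain (head and prev
   must have been initialized to NIL before the first call).              */
static void insert_string(const unsigned char *buf, size_t pos)
{
    unsigned h = hash3(buf + pos);
    prev[pos % WSIZE] = head[h];    /* newest entries go in front         */
    head[h] = pos;
}

/* Walk the chain for the string at pos, newest first, and return the
   longest match found (0 if shorter than MIN_MATCH).  avail is the
   number of input bytes remaining at pos; call this before inserting
   the string at pos itself.                                              */
static size_t longest_match(const unsigned char *buf, size_t pos,
                            size_t avail, size_t *match_pos)
{
    size_t best = 0;
    size_t max = avail < MAX_MATCH ? avail : MAX_MATCH;
    unsigned chain = MAX_CHAIN;
    size_t cur = head[hash3(buf + pos)];

    while (cur != NIL && cur < pos && pos - cur <= WSIZE && chain-- > 0) {
        size_t len = 0;
        while (len < max && buf[cur + len] == buf[pos + len])
            len++;
        if (len > best) {           /* ties keep the smaller distance,    */
            best = len;             /* since newer positions come first   */
            *match_pos = cur;
        }
        cur = prev[cur % WSIZE];    /* follow the chain; entries older    */
    }                               /* than 32K end the walk above        */
    return best >= MIN_MATCH ? best : 0;
}

The lazy evaluation described above then amounts to calling longest_match
again at pos+1 and keeping whichever of the two matches is longer.
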
2. Decompression algorithm (inflate)

2.1 Introduction

The key question is how to represent a Huffman code (or any prefix code) so
that you can decode fast. The most important characteristic is that shorter
codes are much more common than longer codes, so pay attention to decoding the
short codes fast, and let the long codes take longer to decode.

inflate() sets up a first level table that covers some number of bits of
input less than the length of the longest code. It gets that many bits from
the stream, and looks it up in the table. The table will tell if the next
code is that many bits or less and how many, and if it is, it will tell
the value, else it will point to the next level table for which inflate()
grabs more bits and tries to decode a longer code.

How many bits to make the first lookup is a tradeoff between the time it
takes to decode and the time it takes to build the table. If building the
table took no time (and if you had infinite memory), then there would only
be a first level table to cover all the way to the longest code. However,
building the table ends up taking a lot longer for more bits since short
codes are replicated many times in such a table. What inflate() does is
simply to make the number of bits in the first table a variable, and then
to set that variable for the maximum speed.

For inflate, which has 286 possible codes for the literal/length tree, the size
of the first table is nine bits. Also the distance tree has 30 possible
values, and the size of its first table is six bits. Note that for each of
those cases, the table ended up one bit longer than the ``average'' code
length, i.e. the code length of an approximately flat code, which would be a
little more than eight bits for 286 symbols and a little less than five bits
for 30 symbols.


2.2 More details on the inflate table lookup

Ok, you want to know what this cleverly obfuscated inflate tree actually
looks like. You are correct that it's not a Huffman tree. It is simply a
lookup table for the first, let's say, nine bits of a Huffman symbol. The
symbol could be as short as one bit or as long as 15 bits. If a particular
symbol is shorter than nine bits, then that symbol's translation is duplicated
in all those entries that start with that symbol's bits. For example, if the
symbol is four bits, then it's duplicated 32 times in a nine-bit table. If a
symbol is nine bits long, it appears in the table once.

If the symbol is longer than nine bits, then that entry in the table points
to another similar table for the remaining bits. Again, there are duplicated
entries as needed. The idea is that most of the time the symbol will be short
and there will only be one table lookup. (That's the whole idea behind data
compression in the first place.) For the less frequent long symbols, there
will be two lookups. If you had a compression method with really long
symbols, you could have as many levels of lookups as is efficient. For
inflate, two is enough.

So a table entry either points to another table (in which case nine bits in
the above example are gobbled), or it contains the translation for the symbol
and the number of bits to gobble. Then you start again with the next
ungobbled bit.

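To make that entry and the two-step lookup concrete, here is a small C
sketch. It does not reproduce the packed structures that inftrees.c actually
builds; the entry layout, the ROOT_BITS name and the calling convention are
assumptions made for this illustration. It also assumes the tables were built
so that the next bits of the stream, taken from the low end of the bit
accumulator, index them directly (the real inflate arranges its tables so
that this works).

/* Illustrative two-level lookup for the nine-bit first table described
   above.  The entry layout and names are invented for this sketch; the
   real tables built by inftrees.c are packed differently.                */

struct entry {
    unsigned short val;     /* decoded symbol, or index of a sub-table    */
    unsigned char  bits;    /* code length, or sub-table index width      */
    unsigned char  is_sub;  /* nonzero: val selects a second-level table  */
};

#define ROOT_BITS 9         /* the first-level table covers nine bits     */

/* Decode one symbol.  hold contains at least 15 not-yet-consumed bits of
   the stream, next bit in the lowest position; *used returns how many
   bits the symbol actually gobbled.                                       */
static unsigned decode_symbol(const struct entry *root,
                              const struct entry *const *subs,
                              unsigned long hold, unsigned *used)
{
    struct entry e = root[hold & ((1u << ROOT_BITS) - 1)];

    if (e.is_sub) {                       /* long code: second lookup     */
        const struct entry *sub = subs[e.val];
        struct entry e2 = sub[(hold >> ROOT_BITS) & ((1u << e.bits) - 1)];
        *used = ROOT_BITS + e2.bits;      /* nine bits plus the remainder */
        return e2.val;
    }
    *used = e.bits;                       /* short code: its own length   */
    return e.val;
}

The caller then drops *used bits from its accumulator, refills it from the
input, and repeats; that is all "start again with the next ungobbled bit"
amounts to.
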
You may wonder: why not just have one lookup table for however many bits the
longest symbol is? The reason is that if you do that, you end up spending
more time filling in duplicate symbol entries than you do actually decoding,
at least for deflate's output, which generates new trees every several tens
of kbytes. You can imagine that filling in a 2^15 entry table for a 15-bit
code would take too long if you're only decoding several thousand symbols.
At the other extreme, you could make a new table for every bit in the code.
In fact, that's essentially a Huffman tree. But then you spend too much time
traversing the tree while decoding, even for short symbols.

So the number of bits for the first lookup table is a tradeoff of the time to
fill out the table vs. the time spent looking at the second level and above of
the table.

Here is an example, scaled down:

The code being decoded, with 10 symbols, from 1 to 6 bits long:

A: 0
B: 10
C: 1100
D: 11010
E: 11011
F: 11100
G: 11101
H: 11110
I: 111110
J: 111111

Let's make the first table three bits long (eight entries):

000: A,1
001: A,1
010: A,1
011: A,1
100: B,2
101: B,2
110: -> table X (gobble 3 bits)
111: -> table Y (gobble 3 bits)

Each entry is what the bits decode as and how many bits that is, i.e. how
many bits to gobble. Or the entry points to another table, with the number of
bits to gobble implicit in the size of the table.

Table X is two bits long since the longest code starting with 110 is five bits
long:

00: C,1
01: C,1
10: D,2
11: E,2

Table Y is three bits long since the longest code starting with 111 is six
bits long:

000: F,2
001: F,2
010: G,2
011: G,2
100: H,2
101: H,2
110: I,3
111: J,3

So what we have here are three tables with a total of 20 entries that had to
be constructed. That's compared to 64 entries for a single table, or to 16
entries for a Huffman tree (six two-entry tables and one four-entry table).
Assuming that the code ideally represents the probability of the symbols, it
takes on average 1.25 lookups per symbol. That's compared to one lookup for
the single table, or 1.66 lookups per symbol for the Huffman tree.

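If you want to see those tables actually exercised, here is a small,
self-contained C program that hard-codes them and decodes the bit string for
A, B, D, J. It operates on a string of '0' and '1' characters instead of a
real bit accumulator, purely to keep the walkthrough readable; apart from the
table contents, which are copied from the example above, everything in it is
made up for this illustration.

/* Walkthrough of the scaled-down example: decode "01011010111111"
   (the codes for A, B, D, J) with the three tables above.  This is a
   toy: it reads '0'/'1' characters, not a real bit stream.               */

#include <stdio.h>
#include <string.h>

struct ent { char sym; int bits; char sub; };  /* sub: 'X', 'Y' or 0      */

static const struct ent root[8] = {            /* first table, three bits */
    {'A',1,0}, {'A',1,0}, {'A',1,0}, {'A',1,0},/* 000..011 -> A,1         */
    {'B',2,0}, {'B',2,0},                      /* 100,101  -> B,2         */
    {0,2,'X'}, {0,3,'Y'}                       /* 110,111  -> sub-tables  */
};                                             /* (widths two and three)  */
static const struct ent tabX[4] = {            /* two bits after "110"    */
    {'C',1,0}, {'C',1,0}, {'D',2,0}, {'E',2,0}
};
static const struct ent tabY[8] = {            /* three bits after "111"  */
    {'F',2,0}, {'F',2,0}, {'G',2,0}, {'G',2,0},
    {'H',2,0}, {'H',2,0}, {'I',3,0}, {'J',3,0}
};

static int index_bits(const char *p, int n)    /* read n characters as an */
{                                              /* unsigned binary value   */
    int v = 0;
    while (n--) v = (v << 1) | (*p++ - '0');
    return v;
}

int main(void)
{
    const char *bits = "01011010111111";       /* A, B, D, J              */
    size_t pos = 0, len = strlen(bits);

    while (pos < len) {                        /* sample input always has */
        struct ent e = root[index_bits(bits + pos, 3)];  /* enough bits   */
        if (e.sub) {                           /* long code: second table */
            const struct ent *t = (e.sub == 'X') ? tabX : tabY;
            int width = e.bits;                /* sub-table index width   */
            e = t[index_bits(bits + pos + 3, width)];
            e.bits += 3;                       /* three root bits + extra */
        }
        putchar(e.sym);                        /* prints "ABDJ"           */
        pos += e.bits;
    }
    putchar('\n');
    return 0;
}

Counting the lookups in that run: A and B each take one, D and J each take
two, which is the mostly-one, sometimes-two behavior the averages above
describe.
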
There, I think that gives you a picture of what's going on. For inflate, the
meaning of a particular symbol is often more than just a letter. It can be a
byte (a "literal"), or it can be either a length or a distance which
indicates a base value and a number of bits to fetch after the code that is
added to the base value. Or it might be the special end-of-block code. The
data structures created in inftrees.c try to encode all that information
compactly in the tables.


Jean-loup Gailly        Mark Adler
jloup@gzip.org          madler@alumni.caltech.edu


References:

[LZ77] Ziv J., Lempel A., ``A Universal Algorithm for Sequential Data
Compression,'' IEEE Transactions on Information Theory, Vol. 23, No. 3,
pp. 337-343.

``DEFLATE Compressed Data Format Specification'' available at
http://www.ietf.org/rfc/rfc1951.txt