This adds zstd compression support.
The current two options, zlib and fastlz, are basically a choice between compression ratio and performance.
You would choose zlib if you are memory-bound and fastlz if you are CPU-bound. With zstd, you get the
performance of fastlz with the compression ratio of zlib, and it often wins on both. See this benchmark I ran
on JSON files of varying sizes: https://gist.github.com/rlerdorf/788f3d0144f9c5514d8fee9477cbe787
Taking just a 40k JSON blob, we see that zstd at compression level 3 reduces it to 8862 bytes. Our current
zlib at level 1 gets worse compression, 10091 bytes, and is slower at both compression and decompression.
| C size (bytes) | Ratio % | C MB/s | D MB/s | Score | Compressor | File |
|---------------:|--------:|-------:|-------:|------:|------------|------|
| 8037  | 19.9 | 0.58    | 2130.89 | 0.08 | zstd 22  | file-39.54k-json |
| 8204  | 20.3 | 31.85   | 2381.59 | 0.01 | zstd 10  | file-39.54k-json |
| 8371  | 20.7 | 47.52   | 547.12  | 0.01 | zlib 9   | file-39.54k-json |
| 8477  | 20.9 | 74.84   | 539.83  | 0.01 | zlib 6   | file-39.54k-json |
| 8862  | 21.9 | 449.86  | 2130.89 | 0.01 | zstd 3   | file-39.54k-json |
| 9171  | 22.7 | 554.62  | 2381.59 | 0.01 | zstd 1   | file-39.54k-json |
| 10091 | 24.9 | 153.94  | 481.99  | 0.01 | zlib 1   | file-39.54k-json |
| 10646 | 26.3 | 43.39   | 8097.40 | 0.01 | lz4 16   | file-39.54k-json |
| 10658 | 26.3 | 72.30   | 8097.40 | 0.01 | lz4 10   | file-39.54k-json |
| 13004 | 32.1 | 1396.10 | 6747.83 | 0.01 | lz4 1    | file-39.54k-json |
| 13321 | 32.9 | 440.08  | 1306.03 | 0.01 | fastlz 2 | file-39.54k-json |
| 14807 | 36.6 | 444.91  | 1156.77 | 0.01 | fastlz 1 | file-39.54k-json |
| 15517 | 38.3 | 1190.79 | 4048.70 | 0.02 | zstd -10 | file-39.54k-json |
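As a rough illustration of the comparison above, here is a minimal, self-contained sketch using the zstd and zlib one-shot APIs (`ZSTD_compress` at level 3 vs. `compress2` at level 1). The stand-in payload and error handling are illustrative only, not the extension's actual compression path:

```c
/* Compress the same buffer with zstd level 3 and zlib level 1,
 * the two settings compared above. Build: cc cmp.c -lzstd -lz */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>
#include <zlib.h>

int main(void)
{
    /* Stand-in payload; the benchmark above used a ~40k JSON blob. */
    const char *src = "{\"example\": \"json payload\"}";
    size_t src_len = strlen(src);

    /* zstd one-shot compression at level 3 */
    size_t zstd_cap = ZSTD_compressBound(src_len);
    void *zstd_dst = malloc(zstd_cap);
    size_t zstd_len = ZSTD_compress(zstd_dst, zstd_cap, src, src_len, 3);
    if (ZSTD_isError(zstd_len)) {
        fprintf(stderr, "zstd: %s\n", ZSTD_getErrorName(zstd_len));
        return 1;
    }

    /* zlib one-shot compression at level 1 */
    uLongf zlib_len = compressBound(src_len);
    Bytef *zlib_dst = malloc(zlib_len);
    if (compress2(zlib_dst, &zlib_len, (const Bytef *)src, src_len, 1) != Z_OK) {
        fprintf(stderr, "zlib: compress2 failed\n");
        return 1;
    }

    printf("zstd level 3: %zu bytes, zlib level 1: %lu bytes\n",
           zstd_len, (unsigned long)zlib_len);
    free(zstd_dst);
    free(zlib_dst);
    return 0;
}
```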
The fact that decompression is dramatically faster with zstd is a win for most common memcache uses,
since they tend to be read-heavy. The PR also adds a `memcache.compression_level` INI switch which
currently only applies to zstd compression. It could probably be made to apply to zlib and fastlz as well.
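To make the knob concrete, here is a rough sketch of how a configurable level could be fed into zstd's one-shot API. Only the INI name `memcache.compression_level` comes from the PR; the helper name, the `requested_level` parameter, and the clamping are hypothetical and not the PR's actual code:

```c
/* Illustrative only: applying a user-configured level before calling zstd. */
#include <zstd.h>

static size_t compress_with_level(void *dst, size_t dst_cap,
                                  const void *src, size_t src_len,
                                  int requested_level) /* e.g. memcache.compression_level */
{
    /* Clamp to the range libzstd supports (ZSTD_minCLevel() can be
     * negative; ZSTD_maxCLevel() is 22 in current releases). */
    if (requested_level < ZSTD_minCLevel()) requested_level = ZSTD_minCLevel();
    if (requested_level > ZSTD_maxCLevel()) requested_level = ZSTD_maxCLevel();

    return ZSTD_compress(dst, dst_cap, src, src_len, requested_level);
}
```

In php.ini this would look something like `memcache.compression_level = 3`, using the level that came out ahead in the benchmark above; the actual default is whatever the PR chooses.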