3/17/2023

Openzfs video compression

Last week we decided to upgrade one of our backup servers from OpenZFS 0.8.6 to OpenZFS 2.0.3. After the upgrade, we are noticing much higher compression ratios after switching from lz4 to zstd. Wondering if anyone else has noticed the same behavior.

We have a Supermicro server with 8x 16TB drives running Debian 10 and OpenZFS 0.8.6. The server had 2x RAIDZ-1 pools, each with 4x 16TB drives (ashift=12). From there, we created a bunch of data sets, each with a 1MB record size and lz4 compression. In order to recreate the same pool/volume layout, we dumped all the ZFS details to a text file prior to the upgrade. During the upgrade process, we copied all the data to another backup server, created a new, single RAIDZ-2 setup (8x 16TB drives, ashift=12), recreated the same data sets, and set a 1MB record size for all data sets. This time, we chose zstd compression instead of lz4. Once the data sets were created, we copied our data back.

Once the data was restored, we noticed the compression stats on the volumes were much higher than before. Specifically, any type of DB file (MySQL, PGSQL) and other text-type files seemed to compress much better. In some cases, we saw a +30% reduction of "real" space used.

ZFS Volume: export/Config_Backups (text files)
ZFS Volume: export/Server_Backups/pgsql-cluster-svr2

For other types of files (ISOs, already compressed files, etc.), the compression ratio seemed relatively equal. Again, just wondering if anyone else noticed this behavior.
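A minimal sketch of the migration steps described above, assuming the pool is named export and the drives appear as /dev/sda through /dev/sdh; the data set names come from the post, everything else here is illustrative:

```shell
# Preserve the existing layout and properties before destroying the pools
zfs list -r -o name,recordsize,compression export > zfs-layout.txt
zfs get -r all export > zfs-props.txt

# After copying the data off: one RAIDZ-2 vdev across all 8 drives
zpool create -o ashift=12 export raidz2 \
    /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh

# Recreate the data sets, this time with zstd instead of lz4
zfs create -o recordsize=1M -o compression=zstd export/Config_Backups
zfs create -o recordsize=1M -o compression=zstd export/Server_Backups

# After restoring the data, compare the resulting ratios
zfs get -r compressratio export
```

Note that compression only applies to blocks written after the property is set, which is why a full copy-off/copy-back like the one described is what actually recompresses the data.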
The following compression algorithms are available:

LZ4
New algorithm added after feature flags were created. It is significantly superior to LZJB in all metrics tested. It is the new default compression algorithm (compression=on) in OpenZFS, but not all platforms have adopted the commit changing it yet.

Here is an idea: introduce a new property, compressionlevel, and make it apply to both gzip and lz4 compression. The actual semantics and validity of compressionlevel would be determined by the compression format in whose context it is set:

If compression=gzip-N is set, enforce that compressionlevel is either absent (older pools) or set to the same N (we might also agree on a different reconciliation rule, e.g. allow import of a pool with a "legacy" compression=gzip-N set, but update it to compression=gzip compressionlevel=N when the pool is upgraded).

If compression=gzip compressionlevel=N is set, make it valid and equivalent to compression=gzip-N.

For lz4, let compressionlevel alone decide the compression level and style: 0 is lz4, anything greater than 0 is lz4hc (for example, you might set compression=lz4 compressionlevel=9 to get strong lz4hc compression, or compression=lz4 compressionlevel=0 to get the default, fastest compression available to ZFS).

For compression formats which do not support a choice of levels (i.e. anything that is not gzip or lz4), this property would be ignored, thus allowing for a compression override in parts of the filesystem without the necessity of updating compressionlevel.

The fundamental property of this model is that it decouples the compression algorithm (called the "level") from the compression format, i.e. you can have data with a compatible compression format (e.g. gzip, lz4, etc.) but written by different algorithms (i.e. "levels", identified by a number specific to the format).

One downside of this proposal is that we will have to live, for some time, with duplication in gzip compression levels (which I explain at the start). In the face of this downside, perhaps it would make sense to deprecate gzip-N in some future version and at that time issue a new feature flag. Basically, this allows us to postpone creating a new feature flag for some time (giving more time for discussion), while still allowing people to enjoy lz4hc without it. It would be good to discuss this with other OpenZFS members, if the idea makes sense at all.
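The proposed semantics can be summarized as a small validation routine. This is only a sketch for discussion: the compressionlevel property does not exist in any shipped ZFS, and the function name and the 1-to-9 gzip range are assumptions based on the rules above.

```shell
#!/bin/sh
# Sketch: what the proposed compressionlevel property would resolve to
# for a given compression format. Not real ZFS code.

effective_algorithm() {
    # $1 = compression format, $2 = compressionlevel
    fmt=$1; lvl=$2
    case $fmt in
        gzip)
            # compression=gzip compressionlevel=N is equivalent to gzip-N
            if [ "$lvl" -ge 1 ] && [ "$lvl" -le 9 ]; then
                echo "gzip-$lvl"
            else
                echo "invalid"; return 1
            fi
            ;;
        lz4)
            # level 0 selects plain lz4, anything greater selects lz4hc
            if [ "$lvl" -eq 0 ]; then echo "lz4"; else echo "lz4hc"; fi
            ;;
        *)
            # formats without selectable levels ignore the property
            echo "$fmt"
            ;;
    esac
}
```

The key point the sketch encodes is the decoupling: the format (the case arms) decides what, if anything, the level number means.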
Lz4hc does more work on compression to create a smaller, completely binary compatible, compressed chunk that decompresses even faster than lz4. This would be useful in a number of situations: virtual machine distribution, VM master images, and long-term data. I envision setting compression=lz4hc, writing out the basic data of an image, setting compression to lz4, then cloning to create a live image. Because the on-disk format is binary compatible, there's no reason to differentiate between lz4 and lz4hc when storing data to disk.

I think the WORM cases provide enough merit to undertake this feature, assuming it's not difficult to accomplish. It seemed like it would be a good thing, but if it is a lot of effort for no expected return then feel free to close, though it may be worth it for the testing alone.

Reply to this email directly or view it on GitHub /issues/1900#issuecomment-29263450
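The master-image workflow in the comment could look like the following. This assumes lz4hc were ever accepted as a compression property value (it is not in current OpenZFS), and the pool, data set, and image names are hypothetical:

```shell
# Hypothetical: compress the master image hard, once, at write time
zfs create -o compression=lz4hc tank/images/master
cp golden.img /tank/images/master/

# Switch the property back: only newly written blocks are affected;
# existing lz4hc blocks stay as-is and remain readable as lz4
zfs set compression=lz4 tank/images/master

# Clone cheap, writable live images off a snapshot of the master
zfs snapshot tank/images/master@golden
zfs clone tank/images/master@golden tank/images/vm01
```

This is exactly the WORM-style case mentioned: the expensive compression happens once on the rarely rewritten master, while the clones take the fast path for their own writes.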