Why
cf #5515
After the API changes are finished, cf apache#2960.
We need to implement ZstdBlobStoreDAO, which takes care of blob compression.
How
- Configuration in blob.properties:
  compression.enabled=true
  compression.threshold=16KB
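The threshold property uses a human-readable size. A minimal sketch of how it could be parsed into a byte count (SizeParser and its suffix handling are assumptions for illustration, not the existing configuration API):

```java
import java.util.Locale;

// Hypothetical helper: turn "16KB" / "1MB" / "2048" into a byte count.
public class SizeParser {
    static long parseSize(String value) {
        String v = value.trim().toUpperCase(Locale.ROOT);
        if (v.endsWith("KB")) {
            return Long.parseLong(v.substring(0, v.length() - 2).trim()) * 1024L;
        }
        if (v.endsWith("MB")) {
            return Long.parseLong(v.substring(0, v.length() - 2).trim()) * 1024L * 1024L;
        }
        return Long.parseLong(v); // plain byte count
    }

    public static void main(String[] args) {
        System.out.println(parseSize("16KB")); // the default from blob.properties above
    }
}
```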
- Implement ZstdBlobStoreDAO:
  The idea is to be fully backward-compatible.
  Upon S3 save, we would compress if:
  - compression is enabled
  - the size threshold is met
  Then we would set metadata onto the compressed object: content-encoding=zstd + content-original-size=...
  Upon read:
  - if content-encoding=zstd, decompress; otherwise serve the blob as-is
  - Be sure to decompress on parallel processors.
Use com.github.luben:zstd-jni, cf:
byte[] compressed = Zstd.compress(data);
byte[] original = Zstd.decompress(compressed, originalSize);
Note that Zstd.decompress needs the original size, which is one reason to store content-original-size as metadata.
- In BlobStoreModulesChooser, if compression is enabled, we load the ZstdBlobStoreDAO.
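The save/read decision described above could be sketched as follows. This is a minimal, hypothetical illustration: CompressionPolicy, onSave and onRead are made-up names, and the codec is passed in as a function so the sketch stays self-contained (in the real DAO it would be Zstd.compress / Zstd.decompress from zstd-jni):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the compress-on-save / decompress-on-read decision.
public class CompressionPolicy {
    static final String CONTENT_ENCODING = "content-encoding";
    static final String CONTENT_ORIGINAL_SIZE = "content-original-size";

    final boolean enabled;
    final long thresholdBytes;

    CompressionPolicy(boolean enabled, long thresholdBytes) {
        this.enabled = enabled;
        this.thresholdBytes = thresholdBytes;
    }

    // Save side: compress only when enabled and the threshold is met,
    // and record the zstd marker plus the original size as metadata.
    Map.Entry<byte[], Map<String, String>> onSave(byte[] payload, Function<byte[], byte[]> compress) {
        Map<String, String> metadata = new HashMap<>();
        if (enabled && payload.length >= thresholdBytes) {
            metadata.put(CONTENT_ENCODING, "zstd");
            metadata.put(CONTENT_ORIGINAL_SIZE, String.valueOf(payload.length));
            return Map.entry(compress.apply(payload), metadata);
        }
        return Map.entry(payload, metadata); // disabled or below threshold: store as-is
    }

    // Read side: only decompress when the zstd marker is present,
    // so blobs written before this feature are served untouched.
    byte[] onRead(byte[] stored, Map<String, String> metadata, Function<byte[], byte[]> decompress) {
        if ("zstd".equals(metadata.get(CONTENT_ENCODING))) {
            return decompress.apply(stored);
        }
        return stored; // legacy uncompressed blob
    }
}
```

Keeping the decision in one place like this makes the backward-compatibility rule (no metadata means no decompression) easy to unit-test without touching S3.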
DoD
- Unit tests:
  - implement BlobStoreDAOContract
  - make sure ZstdBlobStoreDAO can read existing uncompressed blobs (without content-encoding=zstd metadata)
- Simple IT to verify the Guice binding for ZstdBlobStoreDAO?
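The backward-compatibility DoD item boils down to one invariant, sketched here in isolation (readBlob is an assumed stand-in, not the real BlobStoreDAO API): a blob stored without the content-encoding=zstd metadata must come back byte-for-byte unchanged.

```java
import java.util.Arrays;
import java.util.Map;

// Hypothetical check: legacy blobs (no zstd metadata) are served untouched.
public class LegacyReadCheck {
    static byte[] readBlob(byte[] stored, Map<String, String> metadata) {
        if ("zstd".equals(metadata.get("content-encoding"))) {
            throw new UnsupportedOperationException("would call Zstd.decompress here");
        }
        return stored; // no marker: serve as-is
    }

    public static void main(String[] args) {
        byte[] legacy = "plain blob".getBytes();
        byte[] served = readBlob(legacy, Map.of()); // legacy blob: empty metadata
        if (!Arrays.equals(legacy, served)) throw new AssertionError();
        System.out.println("legacy blob served as-is");
    }
}
```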