Summary: The main motivation for this change is to more flexibly and efficiently support compressing data without extra copies, given that we do not want to save compressed data that is LARGER than the uncompressed data. We believe strongly that, for the various workloads served by RocksDB, it is well worth a single-byte compression marker so that we have the flexibility to save either compressed or uncompressed data when compression is attempted. Why? Compression algorithms can add tens of bytes in fixed overhead and a few percent in relative overhead. It is also an advantage for the reader to be able to bypass decompression, including at least a buffer copy in most cases, after reading just one byte. The block-based table format in RocksDB follows this model with a single-byte compression marker, and at least after https://github.com/facebook/rocksdb/pull/13797 so does CompressedSecondaryCache. (Notably, the blob file format DOES NOT. This is left to follow-up work.)

In particular, Compressor::CompressBlock now takes a fixed-size buffer for output rather than a `std::string*`. CompressBlock itself rejects the compression if the output would not fit in the provided buffer. This also works well with the `max_compressed_bytes_per_kb` option to reject compression even sooner if its ratio is insufficient (implemented in this change). In the future we might use this functionality to eliminate a buffer copy (in many cases) into the WritableFileWriter buffer of the block-based table builder.

This is a large change because we needed to (or were compelled to):

* Update all the existing callers of CompressBlock, sometimes with substantial changes. This includes introducing GrowableBuffer to reuse between calls rather than `std::string`, which (at least in C++17) requires zeroing out data when allocating/growing a buffer.
* Re-implement the built-in Compressors (V2; V1 is obsolete) to efficiently implement the new version of the API, no longer wrapping the `OLD_CompressData()` function. The new compressors appropriately leverage the CompressBlock virtual call required for the customization interface and no longer rely on a `switch` on compression type for each block. The implementations are largely adaptations of the old implementations, except:
  * LZ4 and LZ4HC are notably upgraded to take advantage of WorkingArea (see performance tests). For simplicity in the new implementation, we are dropping support for some very old versions of the library.
  * Getting Snappy to work with a limited-size output buffer required using its Sink/Source interfaces, which appear to have been well supported for a long time and to be efficient (see performance tests).
* Replace the awkward old CompressionManager::GetDecompressorForCompressor with Compressor::GetOptimizedDecompressor (which is optional to implement).
* Small behavior change: we now treat lack of support for compression closer to not configuring compression at all, such as for incompatibility with block_align. This is motivated by giving CompressionManager the freedom of determining when compression can be excluded for an entire file despite the configured "compression" type, and thus only surfacing actual incompatibilities, not hypothetical ones that might be irrelevant to the CompressionManager (or build configuration). Unit tests in `table_test` and `compact_files_test` required updates.
* Some lingering cleanup of CompressedSecondaryCache and a re-optimization made possible by compressing into an existing buffer.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13805

Test Plan: for correctness, existing tests

## Performance Test

As I generally only modified compression paths, I'm using a db_bench write benchmark, with before & after configurations running at the same time.
vc=1 means verify_compression=1

```
USE_CLANG=1 DEBUG_LEVEL=0 LIB_MODE=static make -j100 db_bench SUFFIX=`tty | sed 's|/|_|g'`; for CT in zlib bzip2 none snappy zstd lz4 lz4hc none snappy zstd lz4 bzip2; do for VC in 0 1; do echo "$CT vc=$VC"; (for I in `seq 1 20`; do BIN=/dev/shm/dbbench${SUFFIX}.bin; rm -f $BIN; cp db_bench $BIN; $BIN -db=/dev/shm/dbbench$SUFFIX --benchmarks=fillseq -num=10000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 -fifo_compaction_allow_compaction=0 -disable_wal -write_buffer_size=12000000 -format_version=7 -compression_type=$CT -verify_compression=$VC 2>&1 | grep micros/op; done) | awk '{n++; sum += $5;} END { print int(sum / n); }'; done; done
```

Results (before -> after):

```
zlib   vc=0   524198 ->  524904 (+0.1%)
zlib   vc=1   430521 ->  430699 (+0.0%)
bzip2  vc=0    61841 ->   60835 (-1.6%)
bzip2  vc=1    49232 ->   48734 (-1.0%)
none   vc=0  1802375 -> 1906227 (+5.8%)
none   vc=1  1837181 -> 1950308 (+6.2%)
snappy vc=0  1783266 -> 1901461 (+6.6%)
snappy vc=1  1799703 -> 1879660 (+4.4%)
zstd   vc=0  1216779 -> 1230507 (+1.1%)
zstd   vc=1   996370 -> 1015415 (+1.9%)
lz4    vc=0  1801473 -> 1943095 (+7.9%)
lz4    vc=1  1799155 -> 1935242 (+7.6%)
lz4hc  vc=0   349719 -> 1126909 (+222.2%)
lz4hc  vc=1   348099 -> 1108933 (+218.6%)
```

(Repeating the most important ones)

```
none   vc=0  1816878 -> 1952221 (+7.4%)
none   vc=1  1813736 -> 1904622 (+5.0%)
snappy vc=0  1794816 -> 1875062 (+4.5%)
snappy vc=1  1789363 -> 1873771 (+4.7%)
zstd   vc=0  1202592 -> 1225164 (+1.9%)
zstd   vc=1   994322 -> 1016688 (+2.2%)
lz4    vc=0  1786959 -> 1971518 (+10.3%)
lz4    vc=1  1829483 -> 1935871 (+5.8%)
```

I confirmed manually that the new WorkingArea for LZ4HC makes the huge difference on that one, but not as much difference for LZ4, presumably because LZ4HC uses much larger buffers/structures for better compression ratios.

Reviewed By: hx235

Differential Revision: D79111736

Pulled By: pdillinger

fbshipit-source-id: 1ce1b14af9f15365f1b6da49906b5073a8cecc14
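To make the marker-byte model from the summary concrete, here is a minimal, self-contained sketch. This is NOT RocksDB's actual API: the marker constants, the trivial run-length "compressor", and the `max_per_kb` ratio check are all invented for illustration. It only demonstrates the pattern the change describes: compress into a caller-provided fixed-size buffer, reject (return 0) if the result would not fit the size budget, fall back to storing raw bytes behind a one-byte marker, and let the reader bypass decompression after inspecting that byte.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical one-byte markers (RocksDB uses a compression-type byte similarly).
constexpr char kNoCompression = 0x0;
constexpr char kSomeCompression = 0x1;

// Stand-in for the new CompressBlock shape: compress `input` into the
// caller-provided `output` of `max_out` bytes. Returns bytes written, or 0
// to reject compression when the result would not fit. The "compression"
// here is a toy run-length encoding: (byte, run-length) pairs.
size_t CompressBlock(const char* input, size_t n, char* output, size_t max_out) {
  size_t out = 0;
  for (size_t i = 0; i < n;) {
    size_t run = 1;
    while (i + run < n && input[i + run] == input[i] && run < 255) run++;
    if (out + 2 > max_out) return 0;  // would exceed budget: reject
    output[out++] = input[i];
    output[out++] = static_cast<char>(run);
    i += run;
  }
  return out;
}

// Writer: size the output buffer from a ratio budget (hypothetical analog of
// max_compressed_bytes_per_kb), so a poor ratio is rejected by CompressBlock
// itself and we store raw bytes at a cost of only the one-byte marker.
std::string StoreBlock(const std::string& data, int max_per_kb) {
  size_t limit = data.size() * max_per_kb / 1024;
  std::vector<char> buf(limit);
  size_t n = CompressBlock(data.data(), data.size(), buf.data(), buf.size());
  std::string block;
  if (n > 0) {
    block.push_back(kSomeCompression);
    block.append(buf.data(), n);
  } else {
    block.push_back(kNoCompression);
    block += data;  // stored uncompressed; reader can skip decompression
  }
  return block;
}

// Reader: one byte tells us whether decompression (and its buffer copy)
// can be bypassed entirely.
std::string LoadBlock(const std::string& block) {
  if (block.empty() || block[0] == kNoCompression) return block.substr(1);
  std::string out;
  for (size_t i = 1; i + 1 < block.size(); i += 2) {
    out.append(static_cast<unsigned char>(block[i + 1]), block[i]);
  }
  return out;
}
```

Highly compressible data round-trips through the compressed path; data that would grow (or merely compress worse than the budget allows) is stored raw and read back without any decompression step.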
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <cstddef>
#include <cstdint>
#include <string>

#include "rocksdb/rocksdb_namespace.h"

namespace ROCKSDB_NAMESPACE {
namespace port {
namespace xpress {

bool Compress(const char* input, size_t length, std::string* output);

// Returns the written size, or 0 on failure, including if the output buffer
// is too small.
size_t CompressWithMaxSize(const char* input, size_t length, char* output,
                           size_t max_output_size);

char* Decompress(const char* input_data, size_t input_length,
                 size_t* uncompressed_size);

int64_t GetDecompressedSize(const char* input, size_t input_length);

int64_t DecompressToBuffer(const char* input, size_t input_length, char* output,
                           size_t output_length);

}  // namespace xpress
}  // namespace port
}  // namespace ROCKSDB_NAMESPACE
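The `CompressWithMaxSize` contract declared above (written size on success, 0 on any failure including an undersized output buffer) is what lets callers attempt compression into a bounded buffer with no extra copies. The real implementation uses the Windows XPRESS codec, so the sketch below substitutes a trivial identity stub purely to make the return-value convention runnable anywhere; only that convention mirrors the header, everything else is a hypothetical stand-in.

```cpp
#include <cstddef>
#include <cstring>

// Hypothetical stand-in for port::xpress::CompressWithMaxSize. This identity
// stub models only the documented contract: it returns the number of bytes
// written to `output`, or 0 on failure, including when `max_output_size` is
// too small to hold the result.
size_t CompressWithMaxSizeStub(const char* input, size_t length, char* output,
                               size_t max_output_size) {
  if (length > max_output_size) {
    return 0;  // reject: result would not fit in the caller's buffer
  }
  std::memcpy(output, input, length);  // "compression" is the identity here
  return length;
}
```

A caller can therefore size the output buffer to the largest acceptable compressed size (e.g. derived from a ratio budget such as `max_compressed_bytes_per_kb`) and treat a 0 return as "store the data uncompressed instead".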