rocksdb/utilities/blob_db/blob_dump_tool.cc
Peter Dillinger 7c9b580681 Big refactor for preliminary custom compression API (#13540)
Summary:
Adds new classes etc. in the internal compression.h that are intended to become public APIs for supporting custom/pluggable compression. Some steps remain before compression is actually pluggable and before a lot of legacy code (e.g. the functions now called `OLD_CompressData` and `OLD_UncompressData`) can be removed, but this change refactors the key integration points (SST building and reading, and the compressed secondary cache) over to the new APIs.

Compared with the proposed https://github.com/facebook/rocksdb/issues/7650, this fixes a number of issues, including:
* Making a clean divide between public and internal APIs (currently just indicated with comments)
* Enough generality that built-in compressions generally fit into the framework rather than needing special treatment
* Avoiding exposure of obnoxious idioms like `compress_format_version` to the user
* Enough generality that a compressor mixing algorithms/strategies from other compressors is pretty well supported without an extra schema layer
* Explicit thread-safety contracts (carefully considered)
* Contract details around schema compatibility and extension with code changes (more detail in next PR)
* Customizable "working areas" (e.g. for ZSTD "context")
* Decompression into an arbitrary memory location (rather than involving the decompressor in memory allocation; this should facilitate reducing the number of objects in block cache); see the sketch after this list
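
As a rough illustration of the shape such an API could take, here is a minimal sketch of a pluggable decompressor that separates shared, thread-safe state from per-thread working areas and decompresses into caller-provided memory. The class and method names below are hypothetical, not the actual declarations in compression.h:

```
// Hypothetical sketch only; the real interfaces live in the internal
// compression.h and may differ in names, signatures, and ownership details.
#include <cstddef>
#include <memory>

#include "rocksdb/slice.h"
#include "rocksdb/status.h"

namespace ROCKSDB_NAMESPACE {

class PluggableDecompressor {
 public:
  virtual ~PluggableDecompressor() = default;

  // Schema/compatibility identifier, so data written by one compressor can be
  // matched to a compatible decompressor after code changes.
  virtual const char* Name() const = 0;

  // Reusable per-thread scratch state (e.g. a ZSTD decompression context).
  // The decompressor object itself is shared and must tolerate concurrent
  // calls; mutable state lives in a WorkingArea used by one thread at a time.
  struct WorkingArea {
    virtual ~WorkingArea() = default;
  };
  virtual std::unique_ptr<WorkingArea> CreateWorkingArea() const = 0;

  // Decompress `input` into the caller-provided `output` buffer of exactly
  // `uncompressed_size` bytes. The caller controls allocation (for example,
  // decompressing directly into a block cache allocation) rather than the
  // decompressor handing back a buffer it allocated itself.
  virtual Status DecompressTo(const Slice& input, char* output,
                              size_t uncompressed_size,
                              WorkingArea* wa) const = 0;
};

}  // namespace ROCKSDB_NAMESPACE
```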

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13540

Test Plan:
This is currently an internal refactor. More testing will come when the new API is migrated to the public API. A test in db_block_cache_test is updated to meaningfully cover a case (cache warming of a compression dictionary block) that was previously only covered in the crash test.

SST write performance test, like the one in https://github.com/facebook/rocksdb/issues/13583. Compile with CLANG; run before & after simultaneously:

```
SUFFIX=`tty | sed 's|/|_|g'`; for ARGS in "-compression_parallel_threads=1 -compression_type=none" "-compression_parallel_threads=1 -compression_type=snappy" "-compression_parallel_threads=1 -compression_type=zstd" "-compression_parallel_threads=1 -compression_type=zstd -verify_compression=1" "-compression_parallel_threads=1 -compression_type=zstd -compression_max_dict_bytes=8180" "-compression_parallel_threads=4 -compression_type=snappy"; do echo $ARGS; (for I in `seq 1 20`; do ./db_bench -db=/dev/shm/dbbench$SUFFIX --benchmarks=fillseq -num=10000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 -fifo_compaction_allow_compaction=0 -disable_wal -write_buffer_size=12000000 $ARGS 2>&1 | grep micros/op; done) | awk '{n++; sum += $5;} END { print int(sum / n); }'; done
```
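The grep/awk pipeline averages field 5 of db_bench's `micros/op` output line (the ops/sec figure) over the 20 runs, so the Before/After numbers below are average ops/sec.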

Before (this PR and https://github.com/facebook/rocksdb/issues/13583 both reverted):
-compression_parallel_threads=1 -compression_type=none
1908372
-compression_parallel_threads=1 -compression_type=snappy
1926093
-compression_parallel_threads=1 -compression_type=zstd
1208259
-compression_parallel_threads=1 -compression_type=zstd -verify_compression=1
997583
-compression_parallel_threads=1 -compression_type=zstd -compression_max_dict_bytes=8180
934246
-compression_parallel_threads=4 -compression_type=snappy
1644849

After:
-compression_parallel_threads=1 -compression_type=none
1956054 (+2.5%)
-compression_parallel_threads=1 -compression_type=snappy
1911433 (-0.8%)
-compression_parallel_threads=1 -compression_type=zstd
1205668 (-0.3%)
-compression_parallel_threads=1 -compression_type=zstd -verify_compression=1
999263 (+0.2%)
-compression_parallel_threads=1 -compression_type=zstd -compression_max_dict_bytes=8180
934322 (+0.0%)
-compression_parallel_threads=4 -compression_type=snappy
1642519 (-0.2%)

Pretty neutral change(s) overall.

SST read performance test (related to https://github.com/facebook/rocksdb/issues/13583). Set up:
```
for COMP in none snappy zstd; do echo $COMP; ./db_bench -db=/dev/shm/dbbench-$COMP --benchmarks=fillseq,flush -num=10000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 -fifo_compaction_allow_compaction=0 -disable_wal -write_buffer_size=12000000 -compression_type=$COMP; done
```
Test (compile with CLANG, run before & after simultaneously):
```
for COMP in none snappy zstd; do echo $COMP; (for I in `seq 1 5`; do ./db_bench -readonly -db=/dev/shm/dbbench-$COMP --benchmarks=readrandom -num=10000000 -duration=20 -threads=8 2>&1 | grep micros/op; done) | awk '{n++; sum += $5;} END { print int(sum / n); }'; done
```

Before (this PR and https://github.com/facebook/rocksdb/issues/13583 both reverted):
none
1495646
snappy
1172443
zstd
706036
zstd (after constructing with -compression_max_dict_bytes=8180)
656182

After:
none
1494981 (-0.0%)
snappy
1171846 (-0.1%)
zstd
696363 (-1.4%)
zstd (after constructing with -compression_max_dict_bytes=8180)
667585 (+1.7%)

Pretty neutral.

Reviewed By: hx235

Differential Revision: D74626863

Pulled By: pdillinger

fbshipit-source-id: dc8ff3178da9b4eaa7c16aa1bb910c872afaf14a
2025-05-15 17:14:23 -07:00

// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#include "utilities/blob_db/blob_dump_tool.h"
#include <cinttypes>
#include <cstdio>
#include <iostream>
#include <memory>
#include <string>
#include "file/random_access_file_reader.h"
#include "file/readahead_raf.h"
#include "port/port.h"
#include "rocksdb/convenience.h"
#include "rocksdb/file_system.h"
#include "table/format.h"
#include "util/coding.h"
#include "util/string_util.h"
#include "utilities/blob_db/blob_db_impl.h"

namespace ROCKSDB_NAMESPACE::blob_db {

BlobDumpTool::BlobDumpTool()
: reader_(nullptr), buffer_(nullptr), buffer_size_(0) {}
Status BlobDumpTool::Run(const std::string& filename, DisplayType show_key,
DisplayType show_blob,
DisplayType show_uncompressed_blob,
bool show_summary) {
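  // A blob log file is a fixed-size header, followed by a sequence of
  // records, optionally terminated by a footer. Dump those pieces in order,
  // scanning records up to the footer (or to end-of-file if no footer is
  // present).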
constexpr size_t kReadaheadSize = 2 * 1024 * 1024;
Status s;
const auto fs = FileSystem::Default();
IOOptions io_opts;
s = fs->FileExists(filename, io_opts, nullptr);
if (!s.ok()) {
return s;
}
uint64_t file_size = 0;
s = fs->GetFileSize(filename, io_opts, &file_size, nullptr);
if (!s.ok()) {
return s;
}
std::unique_ptr<FSRandomAccessFile> file;
s = fs->NewRandomAccessFile(filename, FileOptions(), &file, nullptr);
if (!s.ok()) {
return s;
}
file = NewReadaheadRandomAccessFile(std::move(file), kReadaheadSize);
if (file_size == 0) {
return Status::Corruption("File is empty.");
}
reader_.reset(new RandomAccessFileReader(std::move(file), filename));
uint64_t offset = 0;
uint64_t footer_offset = 0;
CompressionType compression = kNoCompression;
s = DumpBlobLogHeader(&offset, &compression);
if (!s.ok()) {
return s;
}
s = DumpBlobLogFooter(file_size, &footer_offset);
if (!s.ok()) {
return s;
}
uint64_t total_records = 0;
uint64_t total_key_size = 0;
uint64_t total_blob_size = 0;
uint64_t total_uncompressed_blob_size = 0;
if (show_key != DisplayType::kNone || show_summary) {
while (offset < footer_offset) {
s = DumpRecord(show_key, show_blob, show_uncompressed_blob, show_summary,
compression, &offset, &total_records, &total_key_size,
&total_blob_size, &total_uncompressed_blob_size);
if (!s.ok()) {
break;
}
}
}
if (show_summary) {
fprintf(stdout, "Summary:\n");
fprintf(stdout, " total records: %" PRIu64 "\n", total_records);
fprintf(stdout, " total key size: %" PRIu64 "\n", total_key_size);
fprintf(stdout, " total blob size: %" PRIu64 "\n", total_blob_size);
if (compression != kNoCompression) {
fprintf(stdout, " total raw blob size: %" PRIu64 "\n",
total_uncompressed_blob_size);
}
}
return s;
}

Status BlobDumpTool::Read(uint64_t offset, size_t size, Slice* result) {
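  // Grow the reusable read buffer geometrically (powers of two, starting at
  // 4 KiB) so that repeated reads of increasing size do not reallocate every
  // time.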
if (buffer_size_ < size) {
if (buffer_size_ == 0) {
buffer_size_ = 4096;
}
while (buffer_size_ < size) {
buffer_size_ *= 2;
}
buffer_.reset(new char[buffer_size_]);
}
Status s =
reader_->Read(IOOptions(), offset, size, result, buffer_.get(), nullptr);
if (!s.ok()) {
return s;
}
if (result->size() != size) {
return Status::Corruption("Reach the end of the file unexpectedly.");
}
return s;
}

Status BlobDumpTool::DumpBlobLogHeader(uint64_t* offset,
CompressionType* compression) {
Slice slice;
Status s = Read(0, BlobLogHeader::kSize, &slice);
if (!s.ok()) {
return s;
}
BlobLogHeader header;
s = header.DecodeFrom(slice);
if (!s.ok()) {
return s;
}
fprintf(stdout, "Blob log header:\n");
fprintf(stdout, " Version : %" PRIu32 "\n", header.version);
fprintf(stdout, " Column Family ID : %" PRIu32 "\n",
header.column_family_id);
std::string compression_str;
if (!GetStringFromCompressionType(&compression_str, header.compression)
.ok()) {
compression_str = "Unrecongnized compression type (" +
std::to_string((int)header.compression) + ")";
}
fprintf(stdout, " Compression : %s\n", compression_str.c_str());
fprintf(stdout, " Expiration range : %s\n",
GetString(header.expiration_range).c_str());
*offset = BlobLogHeader::kSize;
*compression = header.compression;
return s;
}

Status BlobDumpTool::DumpBlobLogFooter(uint64_t file_size,
uint64_t* footer_offset) {
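  // A missing or undecodable footer is not an error: report it and let the
  // caller scan records all the way to the end of the file.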
auto no_footer = [&]() {
*footer_offset = file_size;
fprintf(stdout, "No blob log footer.\n");
return Status::OK();
};
if (file_size < BlobLogHeader::kSize + BlobLogFooter::kSize) {
return no_footer();
}
Slice slice;
*footer_offset = file_size - BlobLogFooter::kSize;
Status s = Read(*footer_offset, BlobLogFooter::kSize, &slice);
if (!s.ok()) {
return s;
}
BlobLogFooter footer;
s = footer.DecodeFrom(slice);
if (!s.ok()) {
return no_footer();
}
fprintf(stdout, "Blob log footer:\n");
fprintf(stdout, " Blob count : %" PRIu64 "\n", footer.blob_count);
fprintf(stdout, " Expiration Range : %s\n",
GetString(footer.expiration_range).c_str());
return s;
}

Status BlobDumpTool::DumpRecord(DisplayType show_key, DisplayType show_blob,
DisplayType show_uncompressed_blob,
bool show_summary, CompressionType compression,
uint64_t* offset, uint64_t* total_records,
uint64_t* total_key_size,
uint64_t* total_blob_size,
uint64_t* total_uncompressed_blob_size) {
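  // Each record is a fixed-size header followed by the key bytes and then the
  // (possibly compressed) value bytes.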
if (show_key != DisplayType::kNone) {
fprintf(stdout, "Read record with offset 0x%" PRIx64 " (%" PRIu64 "):\n",
*offset, *offset);
}
Slice slice;
Status s = Read(*offset, BlobLogRecord::kHeaderSize, &slice);
if (!s.ok()) {
return s;
}
BlobLogRecord record;
s = record.DecodeHeaderFrom(slice);
if (!s.ok()) {
return s;
}
uint64_t key_size = record.key_size;
uint64_t value_size = record.value_size;
if (show_key != DisplayType::kNone) {
fprintf(stdout, " key size : %" PRIu64 "\n", key_size);
fprintf(stdout, " value size : %" PRIu64 "\n", value_size);
fprintf(stdout, " expiration : %" PRIu64 "\n", record.expiration);
}
*offset += BlobLogRecord::kHeaderSize;
s = Read(*offset, static_cast<size_t>(key_size + value_size), &slice);
if (!s.ok()) {
return s;
}
// Decompress value
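  // Blob values are compressed individually, with the single compression type
  // recorded in the file header and no per-value dictionary.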
std::string uncompressed_value;
if (compression != kNoCompression &&
(show_uncompressed_blob != DisplayType::kNone || show_summary)) {
BlockContents contents;
UncompressionContext context(compression);
UncompressionInfo info(context, UncompressionDict::GetEmptyDict(),
compression);
s = DecompressBlockData(
slice.data() + key_size, static_cast<size_t>(value_size), compression,
BlobDecompressor(), &contents, ImmutableOptions(Options()));
if (!s.ok()) {
return s;
}
uncompressed_value = contents.data.ToString();
}
if (show_key != DisplayType::kNone) {
fprintf(stdout, " key : ");
DumpSlice(Slice(slice.data(), static_cast<size_t>(key_size)), show_key);
if (show_blob != DisplayType::kNone) {
fprintf(stdout, " blob : ");
DumpSlice(Slice(slice.data() + static_cast<size_t>(key_size),
static_cast<size_t>(value_size)),
show_blob);
}
if (show_uncompressed_blob != DisplayType::kNone) {
fprintf(stdout, " raw blob : ");
DumpSlice(Slice(uncompressed_value), show_uncompressed_blob);
}
}
*offset += key_size + value_size;
*total_records += 1;
*total_key_size += key_size;
*total_blob_size += value_size;
*total_uncompressed_blob_size += uncompressed_value.size();
return s;
}

void BlobDumpTool::DumpSlice(const Slice s, DisplayType type) {
if (type == DisplayType::kRaw) {
fprintf(stdout, "%s\n", s.ToString().c_str());
} else if (type == DisplayType::kHex) {
fprintf(stdout, "%s\n", s.ToString(true /*hex*/).c_str());
} else if (type == DisplayType::kDetail) {
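    // Hex dump, 16 bytes per line: each byte occupies three slots of buf (two
    // hex digits plus a separating space) starting at offset 15, the printable
    // ASCII column starts at offset 65, and the first line is printed from
    // buf + 15 because the 15-character indent is replaced by the field label
    // already written by the caller.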
char buf[100];
for (size_t i = 0; i < s.size(); i += 16) {
memset(buf, 0, sizeof(buf));
for (size_t j = 0; j < 16 && i + j < s.size(); j++) {
unsigned char c = s[i + j];
snprintf(buf + j * 3 + 15, 2, "%x", c >> 4);
snprintf(buf + j * 3 + 16, 2, "%x", c & 0xf);
snprintf(buf + j + 65, 2, "%c", (0x20 <= c && c <= 0x7e) ? c : '.');
}
for (size_t p = 0; p + 1 < sizeof(buf); p++) {
if (buf[p] == 0) {
buf[p] = ' ';
}
}
fprintf(stdout, "%s\n", i == 0 ? buf + 15 : buf);
}
}
}

template <class T>
std::string BlobDumpTool::GetString(std::pair<T, T> p) {
if (p.first == 0 && p.second == 0) {
return "nil";
}
return "(" + std::to_string(p.first) + ", " + std::to_string(p.second) + ")";
}

} // namespace ROCKSDB_NAMESPACE::blob_db