Summary: This change builds on https://github.com/facebook/rocksdb/issues/13540 and https://github.com/facebook/rocksdb/issues/13626 in allowing a CompressionManager / Compressor / Decompressor to use a custom compression algorithm, with a distinct CompressionType. For background, review the API comments on CompressionManager and its CompatibilityName() function.

Highlights:
* Reserve and name 127 new CompressionTypes that can be used for custom compression algorithms / schemas. In many or most cases I expect the enumerators such as `kCustomCompression8F` to be used in user code rather than casting between integers and CompressionTypes, as I expect the supported custom compression algorithms to be identifiable / enumerable at compile time.
* When using these custom compression types, a CompressionManager must use a CompatibilityName() other than the built-in one AND the new format_version=7 (see below).
* When building new SST files, track the full set of CompressionTypes actually used (usually just one aside from kNoCompression), using our efficient bitset SmallEnumSet, which supports fast iteration over the bits set to 1. Ideally, to support mixed or non-mixed compression algorithms in a file as efficiently as possible, we would know the set of CompressionTypes at SST file open time.
* New schema for `TableProperties::compression_name` in format_version=7 to represent the CompressionManager's CompatibilityName(), the set of CompressionTypes used, and potentially more in the future, while keeping the data relatively human-readable.
* It would be possible to do this without a new format_version, but then the only way to ensure incompatible versions fail is with an unsupported CompressionType tag, not with a compression_name property. Therefore, (a) I prefer not to put something misleading in the `compression_name` property (a built-in compression name) when there is nuance because of a CompressionManager, and (b) I prefer better, more consistent error messages that refer to either format_version or the CompressionManager's CompatibilityName(), rather than to an unrecognized custom CompressionType value (which could have come from any of various CompressionManagers).
* The currently configured CompressionManager is passed in to TableReaders so that it (or one it knows about) can be used if it matches the CompatibilityName() used for compression in the SST file. Until the connection with ObjectRegistry is implemented, the only way to read files generated with a particular CompressionManager using custom compression algorithms is to configure it (or a known relative; see FindCompatibleCompressionManager()) in the ColumnFamilyOptions.
* Optimized snappy decompression with BuiltinDecompressorV2SnappyOnly, to offset some small added overheads from the new tracking. This is essentially an early part of the planned refactoring that will get rid of the old internal compression APIs.
* Another small optimization: eliminated an unnecessary key copy in flush (builder.cc).
* Fix some handling of named CompressionManagers in CompressionManager::CreateFromString() (problem seen in https://github.com/facebook/rocksdb/issues/13647)

Smaller things:
* Adds Name() and GetId() functions to Compressor for debugging/logging purposes. (Compressor and Decompressor are not expected to be Customizable because they are only instantiated by a CompressionManager.)
* When using an explicit compression_manager, the GetId() of the CompressionManager and of the Compressor used to build the file are stored as bonus entries in the compression_options table property. This table property is not parsed anywhere, so it is currently for human reading, but it could still be parsed using the new underscore-prefixed bonus entries. IMHO, this is preferable to additional table properties, which would increase memory fragmentation in the TableProperties objects and likely take slightly more CPU on SST open and slightly more storage.
* Moved the ReleaseWorkingArea() function from protected to public to make wrappers work, because of a quirk in C++ (vs. Java) in which you cannot access protected members of another instance of the same class (sigh)
* Added `CompressionManager::SupportsCompressionType()` for early options sanity checking.

Follow-up before release:
* Make format_version=7 official / supported
* Stress test coverage

Sooner than later:
* Update tests for RoundRobinManager and SimpleMixedCompressionManager to take advantage of e.g. the set of compression types in the compression_name property
* ObjectRegistry stuff
* Refactor away old internal compression APIs

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13659

Test Plan: Basic unit test added.

## Performance

### SST write performance

```
SUFFIX=`tty | sed 's|/|_|g'`
for ARGS in "-compression_type=none" "-compression_type=snappy" \
            "-compression_type=zstd" \
            "-compression_type=snappy -verify_compression=1" \
            "-compression_type=zstd -verify_compression=1" \
            "-compression_type=zstd -compression_max_dict_bytes=8180"; do
  echo $ARGS
  (for I in `seq 1 20`; do
    BIN=/dev/shm/dbbench${SUFFIX}.bin; rm -f $BIN; cp db_bench $BIN
    $BIN -db=/dev/shm/dbbench$SUFFIX --benchmarks=fillseq -num=10000000 \
      -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 \
      -fifo_compaction_allow_compaction=0 -disable_wal \
      -write_buffer_size=12000000 -format_version=7 $ARGS 2>&1 | grep micros/op
  done) | awk '{n++; sum += $5;} END { print int(sum / n); }'
done
```

Ops/sec, Before -> After, both fv=6:
```
-compression_type=none                                    1894386 -> 1858403 (-2.0%)
-compression_type=snappy                                  1859131 -> 1807469 (-2.8%)
-compression_type=zstd                                    1191428 -> 1214374 (+1.9%)
-compression_type=snappy -verify_compression=1            1861819 -> 1858342 (+0.2%)
-compression_type=zstd -verify_compression=1               979435 ->  995870 (+1.6%)
-compression_type=zstd -compression_max_dict_bytes=8180    905349 ->  940563 (+3.9%)
```

Ops/sec, Before fv=6 -> After fv=7:
```
-compression_type=none                                    1879365 -> 1836159 (-2.3%)
-compression_type=snappy                                  1865460 -> 1830916 (-1.9%)
-compression_type=zstd                                    1191428 -> 1210260 (+1.6%)
-compression_type=snappy -verify_compression=1            1866756 -> 1818989 (-2.6%)
-compression_type=zstd -verify_compression=1               982640 ->  997129 (+1.5%)
-compression_type=zstd -compression_max_dict_bytes=8180    912608 ->  937248 (+2.7%)
```

### SST read performance

Create DBs
```
for COMP in none snappy zstd; do
  echo $COMP
  ./db_bench -db=/dev/shm/dbbench-7-$COMP --benchmarks=fillseq,flush \
    -num=10000000 -compaction_style=2 \
    -fifo_compaction_max_table_files_size_mb=1000 \
    -fifo_compaction_allow_compaction=0 -disable_wal \
    -write_buffer_size=12000000 -compression_type=$COMP -format_version=7
done
```

And test
```
for COMP in none snappy zstd none; do
  echo $COMP
  (for I in `seq 1 8`; do
    ./db_bench -readonly -db=/dev/shm/dbbench-7-$COMP --benchmarks=readrandom \
      -num=10000000 -duration=20 -threads=8 2>&1 | grep micros/op
  done) | awk '{n++; sum += $5;} END { print int(sum / n); }'
done
```

Ops/sec, Before -> After (both fv=6):
```
none          1491732 -> 1500209 (+0.6%)
snappy        1157216 -> 1169202 (+1.0%)
zstd           695414 ->  703719 (+1.2%)
none (again)  1491787 -> 1528789 (+2.4%)
```

Ops/sec, Before fv=6 -> After fv=7:
```
none          1492278 -> 1508668 (+1.1%)
snappy        1140769 -> 1152613 (+1.0%)
zstd           696437 ->  696511 (+0.0%)
none (again)  1500585 -> 1512037 (+0.7%)
```

Overall, I think we can take the read CPU improvement in exchange for the hit (in some cases) on background write CPU.

Reviewed By: hx235

Differential Revision: D76520739

Pulled By: pdillinger

fbshipit-source-id: e73bd72502ff85c8779cba313f26f7d1fd50be3a
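As an aside on the per-file tracking described above: the idea of an efficient bitset keyed by enum values can be sketched in a few lines. This is a hypothetical stand-in, not RocksDB's actual SmallEnumSet (which iterates set bits directly, e.g. via count-trailing-zeros, rather than scanning), and the enum values below are illustrative only.

```cpp
#include <bitset>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical compression-type enum; only the shape matters here.
enum class MyCompressionType : uint8_t {
  kNoCompression = 0,
  kSnappy = 1,
  kZSTD = 7,
  kCustomA = 0x81,  // stands in for one of the reserved custom slots
};

// Minimal bitset-over-enum in the spirit of SmallEnumSet: one bit per
// possible enumerator, O(1) insert/contains, duplicates are free.
class TinyEnumSet {
 public:
  void Add(MyCompressionType t) { bits_.set(static_cast<size_t>(t)); }
  bool Contains(MyCompressionType t) const {
    return bits_.test(static_cast<size_t>(t));
  }
  // Collect members by scanning for set bits. (The real structure avoids
  // the full scan when iterating.)
  std::vector<MyCompressionType> Members() const {
    std::vector<MyCompressionType> out;
    for (size_t i = 0; i < bits_.size(); ++i) {
      if (bits_.test(i)) {
        out.push_back(static_cast<MyCompressionType>(i));
      }
    }
    return out;
  }

 private:
  std::bitset<256> bits_;  // one bit per possible 8-bit CompressionType
};
```

During table building, each block's chosen type would be Add()ed, and at Finish() the accumulated set could be encoded into the compression_name property.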
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <stdint.h>

#include <string>
#include <utility>
#include <vector>

#include "db/dbformat.h"
#include "db/seqno_to_time_mapping.h"
#include "db/table_properties_collector.h"
#include "file/writable_file_writer.h"
#include "options/cf_options.h"
#include "rocksdb/options.h"
#include "rocksdb/table_properties.h"
#include "table/unique_id_impl.h"
#include "trace_replay/block_cache_tracer.h"
#include "util/cast_util.h"

namespace ROCKSDB_NAMESPACE {

class Slice;
class Status;

struct TableReaderOptions {
  // @param skip_filters Disables loading/accessing the filter block
  TableReaderOptions(
      const ImmutableOptions& _ioptions,
      const std::shared_ptr<const SliceTransform>& _prefix_extractor,
      UnownedPtr<CompressionManager> _compression_manager,
      const EnvOptions& _env_options,
      const InternalKeyComparator& _internal_comparator,
      uint8_t _block_protection_bytes_per_key, bool _skip_filters = false,
      bool _immortal = false, bool _force_direct_prefetch = false,
      int _level = -1, BlockCacheTracer* const _block_cache_tracer = nullptr,
      size_t _max_file_size_for_l0_meta_pin = 0,
      const std::string& _cur_db_session_id = "", uint64_t _cur_file_num = 0,
      UniqueId64x2 _unique_id = {}, SequenceNumber _largest_seqno = 0,
      uint64_t _tail_size = 0, bool _user_defined_timestamps_persisted = true)
      : ioptions(_ioptions),
        prefix_extractor(_prefix_extractor),
        compression_manager(_compression_manager),
        env_options(_env_options),
        internal_comparator(_internal_comparator),
        skip_filters(_skip_filters),
        immortal(_immortal),
        force_direct_prefetch(_force_direct_prefetch),
        level(_level),
        largest_seqno(_largest_seqno),
        block_cache_tracer(_block_cache_tracer),
        max_file_size_for_l0_meta_pin(_max_file_size_for_l0_meta_pin),
        cur_db_session_id(_cur_db_session_id),
        cur_file_num(_cur_file_num),
        unique_id(_unique_id),
        block_protection_bytes_per_key(_block_protection_bytes_per_key),
        tail_size(_tail_size),
        user_defined_timestamps_persisted(_user_defined_timestamps_persisted) {}

  const ImmutableOptions& ioptions;
  const std::shared_ptr<const SliceTransform>& prefix_extractor;
  // NOTE: the compression manager is not saved, just potentially a
  // decompressor from it, so we don't need a shared_ptr copy
  UnownedPtr<CompressionManager> compression_manager;
  const EnvOptions& env_options;
  const InternalKeyComparator& internal_comparator;
  // This is only used for BlockBasedTable (reader)
  bool skip_filters;
  // Whether the table will be valid as long as the DB is open
  bool immortal;
  // When data prefetching is needed, even if direct I/O is off, read the
  // data to fetch into RocksDB's buffer, rather than relying on
  // RandomAccessFile::Prefetch().
  bool force_direct_prefetch;
  // What level this table/file is on, -1 for "not set, don't know." Used
  // for level-specific statistics.
  int level;
  // Largest seqno in the table (or 0 if unknown)
  SequenceNumber largest_seqno;
  BlockCacheTracer* const block_cache_tracer;
  // Largest L0 file size whose meta-blocks may be pinned (can be zero when
  // unknown).
  const size_t max_file_size_for_l0_meta_pin;

  std::string cur_db_session_id;

  uint64_t cur_file_num;

  // Known unique_id or {}; kNullUniqueId64x2 means unknown
  UniqueId64x2 unique_id;

  uint8_t block_protection_bytes_per_key;

  uint64_t tail_size;

  // Whether the keys in the table contain user-defined timestamps.
  bool user_defined_timestamps_persisted;
};

struct TableBuilderOptions : public TablePropertiesCollectorFactory::Context {
  TableBuilderOptions(
      const ImmutableOptions& _ioptions, const MutableCFOptions& _moptions,
      const ReadOptions& _read_options, const WriteOptions& _write_options,
      const InternalKeyComparator& _internal_comparator,
      const InternalTblPropCollFactories* _internal_tbl_prop_coll_factories,
      CompressionType _compression_type,
      const CompressionOptions& _compression_opts, uint32_t _column_family_id,
      const std::string& _column_family_name, int _level,
      const int64_t _newest_key_time, bool _is_bottommost = false,
      TableFileCreationReason _reason = TableFileCreationReason::kMisc,
      const int64_t _oldest_key_time = 0,
      const uint64_t _file_creation_time = 0, const std::string& _db_id = "",
      const std::string& _db_session_id = "",
      const uint64_t _target_file_size = 0, const uint64_t _cur_file_num = 0,
      const SequenceNumber _last_level_inclusive_max_seqno_threshold =
          kMaxSequenceNumber)
      : TablePropertiesCollectorFactory::Context(
            _column_family_id, _level, _ioptions.num_levels,
            _last_level_inclusive_max_seqno_threshold),
        ioptions(_ioptions),
        moptions(_moptions),
        read_options(_read_options),
        write_options(_write_options),
        internal_comparator(_internal_comparator),
        internal_tbl_prop_coll_factories(_internal_tbl_prop_coll_factories),
        compression_type(_compression_type),
        compression_opts(_compression_opts),
        column_family_name(_column_family_name),
        oldest_key_time(_oldest_key_time),
        newest_key_time(_newest_key_time),
        target_file_size(_target_file_size),
        file_creation_time(_file_creation_time),
        db_id(_db_id),
        db_session_id(_db_session_id),
        is_bottommost(_is_bottommost),
        reason(_reason),
        cur_file_num(_cur_file_num) {}

  const ImmutableOptions& ioptions;
  const MutableCFOptions& moptions;
  const ReadOptions& read_options;
  const WriteOptions& write_options;
  const InternalKeyComparator& internal_comparator;
  const InternalTblPropCollFactories* internal_tbl_prop_coll_factories;
  const CompressionType compression_type;
  const CompressionOptions& compression_opts;
  const std::string& column_family_name;
  const int64_t oldest_key_time;
  const int64_t newest_key_time;
  const uint64_t target_file_size;
  const uint64_t file_creation_time;
  const std::string db_id;
  const std::string db_session_id;
  // BEGIN for FilterBuildingContext
  const bool is_bottommost;
  const TableFileCreationReason reason;
  // END for FilterBuildingContext

  // XXX: only used by BlockBasedTableBuilder for SstFileWriter. If you
  // want to skip filters, that should be (for example) a null filter_policy
  // in the table options of the ioptions.table_factory
  bool skip_filters = false;
  const uint64_t cur_file_num;
};

// TableBuilder provides the interface used to build a Table
// (an immutable and sorted map from keys to values).
//
// Multiple threads can invoke const methods on a TableBuilder without
// external synchronization, but if any of the threads may call a
// non-const method, all threads accessing the same TableBuilder must use
// external synchronization.
class TableBuilder {
 public:
  // REQUIRES: Either Finish() or Abandon() has been called.
  virtual ~TableBuilder() {}

  // Add key,value to the table being constructed.
  // REQUIRES: key is after any previously added key according to comparator.
  // REQUIRES: Finish(), Abandon() have not been called
  virtual void Add(const Slice& key, const Slice& value) = 0;

  // Return non-ok iff some error has been detected.
  virtual Status status() const = 0;

  // Return non-ok iff some error happens during IO.
  virtual IOStatus io_status() const = 0;

  // Finish building the table.
  // REQUIRES: Finish(), Abandon() have not been called
  virtual Status Finish() = 0;

  // Indicate that the contents of this builder should be abandoned.
  // If the caller is not going to call Finish(), it must call Abandon()
  // before destroying this builder.
  // REQUIRES: Finish(), Abandon() have not been called
  virtual void Abandon() = 0;

  // Number of calls to Add() so far.
  virtual uint64_t NumEntries() const = 0;

  // Whether the output file is completely empty: it has neither entries
  // nor tombstones.
  virtual bool IsEmpty() const {
    return NumEntries() == 0 && GetTableProperties().num_range_deletions == 0;
  }

  // Size of the file before its content is compressed.
  virtual uint64_t PreCompressionSize() const { return 0; }

  // Size of the file generated so far. If invoked after a successful
  // Finish() call, returns the size of the final generated file.
  virtual uint64_t FileSize() const = 0;

  // Estimated size of the file generated so far. This is used when
  // FileSize() cannot estimate the final SST size, e.g. when parallel
  // compression is enabled.
  virtual uint64_t EstimatedFileSize() const { return FileSize(); }

  virtual uint64_t GetTailSize() const { return 0; }

  // Whether the user-defined table properties collector suggests that the
  // file be further compacted.
  virtual bool NeedCompact() const { return false; }

  // Returns table properties
  virtual TableProperties GetTableProperties() const = 0;

  // Return file checksum
  virtual std::string GetFileChecksum() const = 0;

  // Return file checksum function name
  virtual const char* GetFileChecksumFuncName() const = 0;

  // Set the sequence number to time mapping. `relevant_mapping` must be in
  // enforced state (ready to encode to string).
  virtual void SetSeqnoTimeTableProperties(
      const SeqnoToTimeMapping& /*relevant_mapping*/,
      uint64_t /*oldest_ancestor_time*/) {}
};

}  // namespace ROCKSDB_NAMESPACE
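The TableBuilder contract documented in the header above (keys added in comparator order; either Finish() or Abandon() called before destruction) can be illustrated with a toy, self-contained sketch. ToyTableBuilder is hypothetical: it uses std::string in place of Slice and bool in place of Status, and is not RocksDB's implementation.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Toy illustration of the TableBuilder lifecycle contract.
class ToyTableBuilder {
 public:
  // REQUIRES: Either Finish() or Abandon() has been called.
  ~ToyTableBuilder() { assert(finished_ || abandoned_); }

  // REQUIRES: key is strictly after any previously added key.
  // REQUIRES: Finish(), Abandon() have not been called.
  void Add(const std::string& key, const std::string& value) {
    assert(!finished_ && !abandoned_);
    assert(entries_.empty() || key > entries_.rbegin()->first);
    entries_.emplace(key, value);
  }

  // REQUIRES: Finish(), Abandon() have not been called.
  bool Finish() {
    assert(!finished_ && !abandoned_);
    finished_ = true;
    return true;  // a real builder would flush blocks and report status
  }

  // Must be called instead of Finish() when the output is discarded.
  void Abandon() {
    assert(!finished_ && !abandoned_);
    abandoned_ = true;
  }

  uint64_t NumEntries() const { return entries_.size(); }
  bool IsEmpty() const { return NumEntries() == 0; }

 private:
  std::map<std::string, std::string> entries_;
  bool finished_ = false;
  bool abandoned_ = false;
};
```

The destructor assertion mirrors the header's "REQUIRES: Either Finish() or Abandon() has been called" on ~TableBuilder.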