Summary:
The existing format compatibility test had limited coverage of compression options, particularly newer algorithms with and without dictionary compression. Some subtleties need to remain consistent, such as index blocks potentially being compressed, but *not* with the file's dictionary when they are. This involves detecting (with a rough approximation) which builds have the appropriate capabilities.
The other motivation for this change is exercising some potentially useful reader-side functionality that has been in place for a long time but has not been used until now: mixing compressions in a single SST file. The block-based SST schema puts a compression marker on each block. Arguably this exists to distinguish blocks compressed with the algorithm stored in the `compression_name` table property from blocks left uncompressed, e.g. because they did not reach the threshold of useful compression ratio, but the marker can also distinguish compression algorithms / decompressors.
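To make the per-block marker concrete, here is a minimal sketch of decoding a block trailer, assuming a layout of one compression-type byte followed by a 32-bit checksum. The names (`BlockTrailer`, `ParseTrailer`) and the specific tag values are illustrative, not RocksDB internals:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical tag values in the style of a CompressionType enum;
// the exact values here are illustrative only.
enum CompressionTag : uint8_t {
  kNoCompressionTag = 0x0,
  kSnappyTag = 0x1,
  kLZ4Tag = 0x4,
  kZSTDTag = 0x7,
};

struct BlockTrailer {
  uint8_t compression_tag;  // selects the decompressor for this one block
  uint32_t checksum;        // integrity check covering the block contents
};

// Parse the small trailer that immediately follows a block's contents:
// byte 0 is the per-block compression tag, bytes 1..4 a little-endian checksum.
BlockTrailer ParseTrailer(const uint8_t* p) {
  BlockTrailer t;
  t.compression_tag = p[0];
  t.checksum = static_cast<uint32_t>(p[1]) |
               (static_cast<uint32_t>(p[2]) << 8) |
               (static_cast<uint32_t>(p[3]) << 16) |
               (static_cast<uint32_t>(p[4]) << 24);
  return t;
}
```

Because the tag travels with each block rather than only in the table properties, a reader can dispatch a different decompressor per block, which is what makes mixed-compression files readable by existing versions.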
As we work toward customizable compression, it seems worth unlocking the capability to leverage the existing schema and SST reader-side support for mixing compression algorithms among the blocks of a file. Yes, a custom compression could implement its own dynamic algorithm chooser with its own tag on the compressed data (e.g. first byte), but that is slightly less storage efficient and doesn't support "vanilla" RocksDB builds reading files using a mix of built-in algorithms. As a hypothetical example, we might want to switch to lz4 on a machine that is under heavy CPU load and back to zstd when load is more normal. I dug up some data indicating ~30 seconds per output file in compaction, suggesting that file-level responsiveness might be too slow. This agility is perhaps more useful with disaggregated storage, where there is more flexibility in DB storage footprint and potentially more payoff in optimizing the *average* footprint.
In support of this direction, I have added a backdoor capability for debug builds of `ldb` to generate files with a mix of compression algorithms and incorporated this into the format compatibility test. All of the existing "forward compatible" versions (currently back to 8.6) are able to read the files generated with "mixed" compression. (NOTE: there's no easy way to patch a bunch of old versions to have them support generating mixed compression files, but going forward we can auto-detect builds with this "mixed" capability.) A subtle aspect of this support is that, for proper handling of decompression contexts and digested dictionaries, we need to set the `compression_name` table property to `zstd` if any blocks are zstd compressed. I'm expecting to add better info to SST files in a follow-up, but the approach here gives us forward compatibility back to 8.6.
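The property-selection rule just described can be sketched as follows; the helper name and string algorithm labels are hypothetical stand-ins for the actual enums and writer logic:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hedged sketch (hypothetical helper, not RocksDB code): if any block in the
// file is zstd-compressed, the compression_name table property must report
// "ZSTD" so that readers (back to 8.6) prepare zstd decompression contexts
// and digested dictionaries; otherwise report the first compressed algorithm.
std::string ChooseCompressionNameProperty(
    const std::vector<std::string>& per_block_algos) {
  if (std::find(per_block_algos.begin(), per_block_algos.end(),
                std::string("ZSTD")) != per_block_algos.end()) {
    return "ZSTD";
  }
  for (const auto& a : per_block_algos) {
    if (a != "NoCompression") {
      return a;
    }
  }
  return "NoCompression";
}
```

The key point is that zstd "wins" whenever it appears at all, because it is the only built-in algorithm whose reader-side setup (contexts, digested dictionaries) depends on knowing in advance that zstd blocks may be present.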
However, in the spirit of opening things up with what makes sense under the existing schema, we only support one compression dictionary per file. It will be used by any/all algorithms that support dictionary compression. This is not outrageous because it seems standard that a dictionary is *or can be* arbitrary data representative of what will be compressed. It does mean we would need a schema change to add dictionary compression support to an existing built-in compression algorithm, because otherwise old and new versions would disagree on whether the data dictionary is needed with that algorithm. Such a change could take the form of a new built-in compression type, e.g. `kSnappyCompressionWithDict`. (Only snappy, bzip2, and Windows-only xpress compression lack dictionary support currently.)
Looking ahead to supporting custom compression, exposing a sizeable set of CompressionTypes to the user for custom handling essentially guarantees a path for the user to put *versioning* on their compression even if they neglect that initially, and without resorting to managing a bunch of distinct named entities. (I'm envisioning perhaps 64 or 127 CompressionTypes open to customization, enough for ~weekly new releases with more than a year of horizon on recycling.)
More details:
* Reduce the running time (CI cost) of the default format compatibility test by randomly sampling versions that aren't the oldest in a category. AFAIK, pretty much all regressions can be caught with the even more stripped-down SHORT_TEST.
* Configurable `make` parallelism via the `J` environment variable
* Generate data files in a way that makes them much more eligible for index compression, e.g. bigger keys with less entropy
* Generate enough data files
* Remove 2.7.fb.branch from the list because it triggers an assertion violation when compression is involved.
* Randomly choose a contiguous subset of the compression algorithms X {dictionary, no dictionary} configuration space when generating files, with a number of files > number of algorithms. This covers all the algorithms and both dictionary/no dictionary for each release (but not in all combinations).
* Have `ldb` fail if the specified compression type is not supported by the build.
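The sampling scheme in the bullets above can be sketched as follows. This is an illustrative C++ model (the actual compatibility test is a script); the function name and the specific layout of the configuration space are assumptions, chosen so that any contiguous window longer than the number of algorithms touches every algorithm:

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Lay out the (algorithm x {no dict, dict}) configuration space as all
// algorithms without a dictionary, then all with one. A contiguous window
// (wrapping around) of more than `algos.size()` entries then covers every
// algorithm, and windows starting at different offsets exercise both
// dictionary settings, though not every combination in a single release.
std::vector<std::pair<std::string, bool>> ChooseConfigs(
    const std::vector<std::string>& algos, size_t num_files, size_t start) {
  std::vector<std::pair<std::string, bool>> space;
  for (bool dict : {false, true}) {
    for (const auto& a : algos) {
      space.emplace_back(a, dict);
    }
  }
  std::vector<std::pair<std::string, bool>> picked;
  for (size_t i = 0; i < num_files; ++i) {
    // Contiguous subset starting at `start`, wrapping modulo the space size.
    picked.push_back(space[(start + i) % space.size()]);
  }
  return picked;
}
```

In the real test, `start` would be chosen randomly per release, giving full algorithm coverage with far fewer files than the full cross product.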
Other future work needed:
* Blob files in the format compatibility test, and support for mixed compression. NOTE: the blob file schema should naturally support mixing compression algorithms, but the reader code does not, because of an assertion that each block's CompressionType (if not no compression) matches the whole-file CompressionType. We might introduce a "various" CompressionType for this whole-file marker in blob files.
* Do more to ensure certain features and code paths e.g. in the scripts are actually used in the compatibility test, so that they aren't accidentally neutralized.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/13414
Test Plan: Manual runs with some temporary instrumentation, also a recent revision of this change included a GitHub Actions run of the updated format compatible test: https://github.com/facebook/rocksdb/actions/runs/13463551149/job/37624205915?pr=13414
Reviewed By: hx235
Differential Revision: D70012056
Pulled By: pdillinger
fbshipit-source-id: 9ea5db76ba01a95338ed1a86b0edd71a469c4061
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#pragma once

#include <map>
#include <stdexcept>
#include <string>
#include <vector>

#include "rocksdb/advanced_options.h"
#include "rocksdb/options.h"
#include "rocksdb/status.h"
#include "rocksdb/table.h"

namespace ROCKSDB_NAMESPACE {
struct ColumnFamilyOptions;
struct ConfigOptions;
struct DBOptions;
struct ImmutableCFOptions;
struct ImmutableDBOptions;
struct MutableDBOptions;
struct MutableCFOptions;
struct Options;

const std::vector<CompressionType>& GetSupportedCompressions();

const std::vector<CompressionType>& GetSupportedDictCompressions();

const std::vector<ChecksumType>& GetSupportedChecksums();

inline bool IsSupportedChecksumType(ChecksumType type) {
  // Avoid annoying compiler warning-as-error (-Werror=type-limits)
  auto min = kNoChecksum;
  auto max = kXXH3;
  return type >= min && type <= max;
}

// Checks that the combination of DBOptions and ColumnFamilyOptions are valid
Status ValidateOptions(const DBOptions& db_opts,
                       const ColumnFamilyOptions& cf_opts);

DBOptions BuildDBOptions(const ImmutableDBOptions& immutable_db_options,
                         const MutableDBOptions& mutable_db_options);
// Overwrites `options`
void BuildDBOptions(const ImmutableDBOptions& immutable_db_options,
                    const MutableDBOptions& mutable_db_options,
                    DBOptions& options);

ColumnFamilyOptions BuildColumnFamilyOptions(
    const ColumnFamilyOptions& ioptions,
    const MutableCFOptions& mutable_cf_options);

void UpdateColumnFamilyOptions(const ImmutableCFOptions& ioptions,
                               ColumnFamilyOptions* cf_opts);
void UpdateColumnFamilyOptions(const MutableCFOptions& moptions,
                               ColumnFamilyOptions* cf_opts);

std::unique_ptr<Configurable> DBOptionsAsConfigurable(
    const MutableDBOptions& opts);
std::unique_ptr<Configurable> DBOptionsAsConfigurable(
    const DBOptions& opts,
    const std::unordered_map<std::string, std::string>* opt_map = nullptr);
std::unique_ptr<Configurable> CFOptionsAsConfigurable(
    const MutableCFOptions& opts);
std::unique_ptr<Configurable> CFOptionsAsConfigurable(
    const ColumnFamilyOptions& opts,
    const std::unordered_map<std::string, std::string>* opt_map = nullptr);

Status StringToMap(const std::string& opts_str,
                   std::unordered_map<std::string, std::string>* opts_map);

struct OptionsHelper {
  static const std::string kCFOptionsName /*= "ColumnFamilyOptions"*/;
  static const std::string kDBOptionsName /*= "DBOptions" */;
  static std::map<CompactionStyle, std::string> compaction_style_to_string;
  static std::map<CompactionPri, std::string> compaction_pri_to_string;
  static std::map<CompactionStopStyle, std::string>
      compaction_stop_style_to_string;
  static std::map<Temperature, std::string> temperature_to_string;
  static std::unordered_map<std::string, ChecksumType> checksum_type_string_map;
  static std::unordered_map<std::string, CompressionType>
      compression_type_string_map;
  static std::unordered_map<std::string, PrepopulateBlobCache>
      prepopulate_blob_cache_string_map;
  static std::unordered_map<std::string, CompactionStopStyle>
      compaction_stop_style_string_map;
  static std::unordered_map<std::string, EncodingType> encoding_type_string_map;
  static std::unordered_map<std::string, CompactionStyle>
      compaction_style_string_map;
  static std::unordered_map<std::string, CompactionPri>
      compaction_pri_string_map;
  static std::unordered_map<std::string, Temperature> temperature_string_map;
};

// Some aliasing
static auto& compaction_style_to_string =
    OptionsHelper::compaction_style_to_string;
static auto& compaction_pri_to_string = OptionsHelper::compaction_pri_to_string;
static auto& compaction_stop_style_to_string =
    OptionsHelper::compaction_stop_style_to_string;
static auto& temperature_to_string = OptionsHelper::temperature_to_string;
static auto& checksum_type_string_map = OptionsHelper::checksum_type_string_map;
static auto& compaction_stop_style_string_map =
    OptionsHelper::compaction_stop_style_string_map;
static auto& compression_type_string_map =
    OptionsHelper::compression_type_string_map;
static auto& encoding_type_string_map = OptionsHelper::encoding_type_string_map;
static auto& compaction_style_string_map =
    OptionsHelper::compaction_style_string_map;
static auto& compaction_pri_string_map =
    OptionsHelper::compaction_pri_string_map;
static auto& temperature_string_map = OptionsHelper::temperature_string_map;
static auto& prepopulate_blob_cache_string_map =
    OptionsHelper::prepopulate_blob_cache_string_map;

}  // namespace ROCKSDB_NAMESPACE