Peter Dillinger 9d490593d0 Preliminary support for custom compression algorithms (#13659)
Summary:
This change builds on https://github.com/facebook/rocksdb/issues/13540 and https://github.com/facebook/rocksdb/issues/13626 in allowing a CompressionManager / Compressor / Decompressor to use a custom compression algorithm, with a distinct CompressionType. For background, review the API comments on CompressionManager and its CompatibilityName() function.

Highlights:
* Reserve and name 127 new CompressionTypes that can be used for custom compression algorithms / schemas. In many or most cases I expect the enumerators such as `kCustomCompression8F` to be used in user code rather than casting between integers and CompressionTypes, as I expect the supported custom compression algorithms to be identifiable / enumerable at compile time.
* When using these custom compression types, a CompressionManager must use a CompatibilityName() other than the built-in one AND new format_version=7 (see below).
* When building new SST files, track the full set of CompressionTypes actually used (usually just one aside from kNoCompression), using our efficient bitset SmallEnumSet, which supports fast iteration over the bits set to 1. Ideally, to support mixed or non-mixed compression algorithms in a file as efficiently as possible, we would know the set of CompressionTypes at SST file open time.
* New schema for `TableProperties::compression_name` in format_version=7 to represent the CompressionManager's CompatibilityName(), the set of CompressionTypes used, and potentially more in the future, while keeping the data relatively human-readable.
  * It would be possible to do this without a new format_version, but then the only way to ensure incompatible versions fail is with an unsupported CompressionType tag, not with a compression_name property. Therefore, (a) I prefer not to put something misleading in the `compression_name` property (a built-in compression name) when there is nuance because of a CompressionManager, and (b) I prefer better, more consistent error messages that refer to either format_version or the CompressionManager's CompatibilityName(), rather than an unrecognized custom CompressionType value (which could have come from various CompressionManagers).
* The current configured CompressionManager is passed in to TableReaders so that it (or one it knows about) can be used if it matches the CompatibilityName() used for compression in the SST file. Until the connection with ObjectRegistry is implemented, the only way to read files generated with a particular CompressionManager using custom compression algorithms is to configure it (or a known relative; see FindCompatibleCompressionManager()) in the ColumnFamilyOptions.
* Optimized snappy compression with BuiltinDecompressorV2SnappyOnly, to offset some small added overheads with the new tracking. This is essentially an early part of the planned refactoring that will get rid of the old internal compression APIs.
* Another small optimization in eliminating an unnecessary key copy in flush (builder.cc).
* Fix some handling of named CompressionManagers in CompressionManager::CreateFromString() (problem seen in https://github.com/facebook/rocksdb/issues/13647)

Smaller things:
* Adds Name() and GetId() functions to Compressor for debugging/logging purposes. (Compressor and Decompressor are not expected to be Customizable because they are only instantiated by a CompressionManager.)
* When using an explicit compression_manager, the GetId() of the CompressionManager and the Compressor used to build the file are stored as bonus entries in the compression_options table property. This table property is not parsed anywhere, so it is currently for human reading, but still could be parsed with the new underscore-prefixed bonus entries. IMHO, this is preferable to additional table properties, which would increase memory fragmentation in the TableProperties objects and likely take slightly more CPU on SST open and slightly more storage.
* Moved the ReleaseWorkingArea() function from protected to public to make wrappers work, because of a quirk in C++ (vs. Java) in which a derived class cannot access a protected base-class member through a reference to an object of the base type (sigh)
* Added `CompressionManager::SupportsCompressionType()` for early options sanity checking.

Follow-up before release:
* Make format_version=7 official / supported
* Stress test coverage

Sooner than later:
* Update tests for RoundRobinManager and SimpleMixedCompressionManager to take advantage of e.g. set of compression types in compression_name property
* ObjectRegistry stuff
* Refactor away old internal compression APIs

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13659

Test Plan:
Basic unit test added.

## Performance

### SST write performance
```
SUFFIX=`tty | sed 's|/|_|g'`; for ARGS in "-compression_type=none" "-compression_type=snappy" "-compression_type=zstd" "-compression_type=snappy -verify_compression=1" "-compression_type=zstd -verify_compression=1" "-compression_type=zstd -compression_max_dict_bytes=8180"; do echo $ARGS; (for I in `seq 1 20`; do BIN=/dev/shm/dbbench${SUFFIX}.bin; rm -f $BIN; cp db_bench $BIN; $BIN -db=/dev/shm/dbbench$SUFFIX --benchmarks=fillseq -num=10000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 -fifo_compaction_allow_compaction=0 -disable_wal -write_buffer_size=12000000 -format_version=7 $ARGS 2>&1 | grep micros/op; done) | awk '{n++; sum += $5;} END { print int(sum / n); }'; done
```

Ops/sec, Before -> After, both fv=6:
-compression_type=none
1894386 -> 1858403 (-2.0%)
-compression_type=snappy
1859131 -> 1807469 (-2.8%)
-compression_type=zstd
1191428 -> 1214374 (+1.9%)
-compression_type=snappy -verify_compression=1
1861819 -> 1858342 (+0.2%)
-compression_type=zstd -verify_compression=1
979435 -> 995870 (+1.6%)
-compression_type=zstd -compression_max_dict_bytes=8180
905349 -> 940563 (+3.9%)

Ops/sec, Before fv=6 -> After fv=7:
-compression_type=none
1879365 -> 1836159 (-2.3%)
-compression_type=snappy
1865460 -> 1830916 (-1.9%)
-compression_type=zstd
1191428 -> 1210260 (+1.6%)
-compression_type=snappy -verify_compression=1
1866756 -> 1818989 (-2.6%)
-compression_type=zstd -verify_compression=1
982640 -> 997129 (+1.5%)
-compression_type=zstd -compression_max_dict_bytes=8180
912608 -> 937248 (+2.7%)

### SST read performance
Create DBs
```
for COMP in none snappy zstd; do echo $COMP; ./db_bench -db=/dev/shm/dbbench-7-$COMP --benchmarks=fillseq,flush -num=10000000 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=1000 -fifo_compaction_allow_compaction=0 -disable_wal -write_buffer_size=12000000 -compression_type=$COMP -format_version=7; done
```
And test
```
for COMP in none snappy zstd none; do echo $COMP; (for I in `seq 1 8`; do ./db_bench -readonly -db=/dev/shm/dbbench-7-$COMP --benchmarks=readrandom -num=10000000 -duration=20 -threads=8 2>&1 | grep micros/op; done) | awk '{n++; sum += $5;} END { print int(sum / n); }'; done
```

Ops/sec, Before -> After (both fv=6)
none
1491732 -> 1500209 (+0.6%)
snappy
1157216 -> 1169202 (+1.0%)
zstd
695414 -> 703719 (+1.2%)
none (again)
1491787 -> 1528789 (+2.4%)

Ops/sec, Before fv=6 -> After fv=7:
none
1492278 -> 1508668 (+1.1%)
snappy
1140769 -> 1152613 (+1.0%)
zstd
696437 -> 696511 (+0.0%)
none (again)
1500585 -> 1512037 (+0.7%)

Overall, I think we can take the read CPU improvement in exchange for the hit (in some cases) on background write CPU.

Reviewed By: hx235

Differential Revision: D76520739

Pulled By: pdillinger

fbshipit-source-id: e73bd72502ff85c8779cba313f26f7d1fd50be3a
2025-06-16 14:19:03 -07:00

# Fuzzing RocksDB

## Overview

This directory contains fuzz tests for RocksDB. RocksDB's testing infrastructure currently includes unit tests and stress tests; we hope fuzz testing can catch additional bugs.

## Prerequisite

We use LLVM libFuzzer as the fuzzing engine, so make sure you have clang as your compiler.

Some tests rely on structure-aware fuzzing. We use protobuf to define structured input to the fuzzer and libprotobuf-mutator as the custom libFuzzer mutator, so make sure you have protobuf and libprotobuf-mutator installed and that pkg-config can find them. On some systems, both protobuf2 and protobuf3 are available in the package manager; make sure protobuf3 is installed.

If you do not want to install the protobuf library yourself, you can rely on libprotobuf-mutator to download protobuf for you. For installation details, please refer to the libprotobuf-mutator README.

## Example

This example shows how to do structure-aware fuzzing of rocksdb::SstFileWriter.

After walking through the steps to create the fuzzer, we'll introduce a bug into rocksdb::SstFileWriter::Put, then show that the fuzzer can catch the bug.

### Design the test

We want the fuzzing engine to automatically generate a list of database operations. We then apply these operations to SstFileWriter in sequence; finally, after the SST file is generated, we use SstFileReader to verify the file's checksum.

### Define input

We define the database operations in protobuf; each operation has an operation type and a key-value pair. See proto/db_operation.proto for details.
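Such a definition might look roughly like the following sketch. This is illustrative only; the field names, numbering, and enum values are assumptions, and the real schema lives in proto/db_operation.proto:

```proto
syntax = "proto2";

// Illustrative sketch only; see proto/db_operation.proto for the real schema.
enum OpType {
  PUT = 0;
  MERGE = 1;
  DELETE = 2;
}

message DBOperation {
  required string key = 1;
  optional string value = 2;
  optional OpType type = 3;
}

message DBOperations {
  repeated DBOperation operations = 1;
}
```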

### Define tests with the input

In sst_file_writer_fuzzer.cc, we define the tests to be run on the generated input:

```cpp
DEFINE_PROTO_FUZZER(DBOperations& input) {
  // apply the operations to SstFileWriter and use SstFileReader to verify checksum.
  // ...
}
```

SstFileWriter requires the keys of the operations to be unique and in ascending order, but the fuzzing engine generates the input randomly. So we need to process the generated input before it is passed to DEFINE_PROTO_FUZZER; this is accomplished by registering a post processor:

```cpp
protobuf_mutator::libfuzzer::PostProcessorRegistration<DBOperations>
```

In the RocksDB root directory, compile the RocksDB library with `make static_lib`.

Go to the fuzz directory and run `make sst_file_writer_fuzzer` to generate the fuzzer. It will compile the RocksDB static library, generate the protobuf code, then compile and link sst_file_writer_fuzzer.

### Introduce a bug

Manually introduce a bug to SstFileWriter::Put:

```diff
diff --git a/table/sst_file_writer.cc b/table/sst_file_writer.cc
index ab1ee7c4e..c7da9ffa0 100644
--- a/table/sst_file_writer.cc
+++ b/table/sst_file_writer.cc
@@ -277,6 +277,11 @@ Status SstFileWriter::Add(const Slice& user_key, const Slice& value) {
 }

 Status SstFileWriter::Put(const Slice& user_key, const Slice& value) {
+  if (user_key.starts_with("!")) {
+    if (value.ends_with("!")) {
+      return Status::Corruption("bomb");
+    }
+  }
   return rep_->Add(user_key, value, ValueType::kTypeValue);
 }
```

The bug is that Put returns a Corruption status whenever user_key starts with `!` and value ends with `!`.

### Run fuzz testing to catch the bug

Run the fuzzer with `time ./sst_file_writer_fuzzer`.

Here is the output on my machine:

```
Corruption: bomb
==59680== ERROR: libFuzzer: deadly signal
    #0 0x109487315 in __sanitizer_print_stack_trace+0x35 (libclang_rt.asan_osx_dynamic.dylib:x86_64+0x4d315)
    #1 0x108d63f18 in fuzzer::PrintStackTrace() FuzzerUtil.cpp:205
    #2 0x108d47613 in fuzzer::Fuzzer::CrashCallback() FuzzerLoop.cpp:232
    #3 0x7fff6af535fc in _sigtramp+0x1c (libsystem_platform.dylib:x86_64+0x35fc)
    #4 0x7ffee720f3ef  (<unknown module>)
    #5 0x7fff6ae29807 in abort+0x77 (libsystem_c.dylib:x86_64+0x7f807)
    #6 0x108cf1c4c in TestOneProtoInput(DBOperations&)+0x113c (sst_file_writer_fuzzer:x86_64+0x100302c4c)
    #7 0x108cf09be in LLVMFuzzerTestOneInput+0x16e (sst_file_writer_fuzzer:x86_64+0x1003019be)
    #8 0x108d48ce0 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) FuzzerLoop.cpp:556
    #9 0x108d48425 in fuzzer::Fuzzer::RunOne(unsigned char const*, unsigned long, bool, fuzzer::InputInfo*, bool*) FuzzerLoop.cpp:470
    #10 0x108d4a626 in fuzzer::Fuzzer::MutateAndTestOne() FuzzerLoop.cpp:698
    #11 0x108d4b325 in fuzzer::Fuzzer::Loop(std::__1::vector<fuzzer::SizedFile, fuzzer::fuzzer_allocator<fuzzer::SizedFile> >&) FuzzerLoop.cpp:830
    #12 0x108d37fcd in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) FuzzerDriver.cpp:829
    #13 0x108d652b2 in main FuzzerMain.cpp:19
    #14 0x7fff6ad5acc8 in start+0x0 (libdyld.dylib:x86_64+0x1acc8)

NOTE: libFuzzer has rudimentary signal handlers.
      Combine libFuzzer with AddressSanitizer or similar for better crash reports.
SUMMARY: libFuzzer: deadly signal
MS: 7 Custom-CustomCrossOver-InsertByte-Custom-ChangeBit-Custom-CustomCrossOver-; base unit: 90863b4d83c3f994bba0a417d0c2ee3b68f9e795
0x6f,0x70,0x65,0x72,0x61,0x74,0x69,0x6f,0x6e,0x73,0x20,0x7b,0xa,0x20,0x20,0x6b,0x65,0x79,0x3a,0x20,0x22,0x21,0x22,0xa,0x20,0x20,0x76,0x61,0x6c,0x75,0x65,0x3a,0x20,0x22,0x21,0x22,0xa,0x20,0x20,0x74,0x79,0x70,0x65,0x3a,0x20,0x50,0x55,0x54,0xa,0x7d,0xa,0x6f,0x70,0x65,0x72,0x61,0x74,0x69,0x6f,0x6e,0x73,0x20,0x7b,0xa,0x20,0x20,0x6b,0x65,0x79,0x3a,0x20,0x22,0x2b,0x22,0xa,0x20,0x20,0x74,0x79,0x70,0x65,0x3a,0x20,0x50,0x55,0x54,0xa,0x7d,0xa,0x6f,0x70,0x65,0x72,0x61,0x74,0x69,0x6f,0x6e,0x73,0x20,0x7b,0xa,0x20,0x20,0x6b,0x65,0x79,0x3a,0x20,0x22,0x2e,0x22,0xa,0x20,0x20,0x74,0x79,0x70,0x65,0x3a,0x20,0x50,0x55,0x54,0xa,0x7d,0xa,0x6f,0x70,0x65,0x72,0x61,0x74,0x69,0x6f,0x6e,0x73,0x20,0x7b,0xa,0x20,0x20,0x6b,0x65,0x79,0x3a,0x20,0x22,0x5c,0x32,0x35,0x33,0x22,0xa,0x20,0x20,0x74,0x79,0x70,0x65,0x3a,0x20,0x50,0x55,0x54,0xa,0x7d,0xa,
operations {\x0a  key: \"!\"\x0a  value: \"!\"\x0a  type: PUT\x0a}\x0aoperations {\x0a  key: \"+\"\x0a  type: PUT\x0a}\x0aoperations {\x0a  key: \".\"\x0a  type: PUT\x0a}\x0aoperations {\x0a  key: \"\\253\"\x0a  type: PUT\x0a}\x0a
artifact_prefix='./'; Test unit written to ./crash-a1460be302d09b548e61787178d9edaa40aea467
Base64: b3BlcmF0aW9ucyB7CiAga2V5OiAiISIKICB2YWx1ZTogIiEiCiAgdHlwZTogUFVUCn0Kb3BlcmF0aW9ucyB7CiAga2V5OiAiKyIKICB0eXBlOiBQVVQKfQpvcGVyYXRpb25zIHsKICBrZXk6ICIuIgogIHR5cGU6IFBVVAp9Cm9wZXJhdGlvbnMgewogIGtleTogIlwyNTMiCiAgdHlwZTogUFVUCn0K
./sst_file_writer_fuzzer  5.97s user 4.40s system 64% cpu 16.195 total
```

Within 6 seconds, it catches the bug.

The input that triggers the bug is persisted in ./crash-a1460be302d09b548e61787178d9edaa40aea467:

```
$ cat ./crash-a1460be302d09b548e61787178d9edaa40aea467
operations {
  key: "!"
  value: "!"
  type: PUT
}
operations {
  key: "+"
  type: PUT
}
operations {
  key: "."
  type: PUT
}
operations {
  key: "\253"
  type: PUT
}
```

### Reproduce the crash to debug

The above crash can be reproduced with `./sst_file_writer_fuzzer ./crash-a1460be302d09b548e61787178d9edaa40aea467`, so you can debug the crash.

## Future Work

According to OSS-Fuzz, as of June 2020, OSS-Fuzz has found over 20,000 bugs in 300 open source projects.

RocksDB can join OSS-Fuzz together with other open source projects such as sqlite.