rocksdb/table/compaction_merging_iterator.h
Andrew Chang 62531da510 Track the total number of compaction sorted runs from inside CompactionMergingIterator (#13325)
Summary:
**This PR adds a new statistic to track the total number of sorted runs for running compactions.**

Context: I am currently working on a separate project, where I am trying to tune the size of the read requests that `FilePrefetchBuffer` issues to the storage backend. In this particular case, `FilePrefetchBuffer` will issue larger reads and have to buffer larger read responses. This means we expect to see higher memory utilization. At least for the initial rollout, we only want to enable this optimization for compaction reads.

**I want some way to get a sense of what the memory usage _impact_ will be if the prefetch read request size is increased from (for instance) 8MB to 64MB.**

**If I know the number of files that compactions are actively reading from (i.e. the number of sorted runs / "input iterators"), I can determine how much the memory usage will increase if I bump up the readahead size inside `FilePrefetchBuffer`.** For instance, if there are 16 sorted runs at any given point in time and I bump up the readahead size by 64MB, I can project an increase of 16 * 64 MB.
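
To make the projection concrete, here is a minimal sketch of the arithmetic (a hedged illustration; in practice the run count would be read from the new statistic rather than hardcoded):

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // Numbers from the example above: 16 sorted runs being read by running
  // compactions, and a readahead size bump of 64 MiB per prefetch buffer.
  const uint64_t num_running_sorted_runs = 16;
  const uint64_t readahead_delta_bytes = 64ULL << 20;  // +64 MiB per run

  // Each input sorted run is read through its own prefetch buffer, so the
  // projected memory increase scales linearly with the number of runs.
  const uint64_t projected_bytes =
      num_running_sorted_runs * readahead_delta_bytes;
  std::printf("projected memory increase: %llu MiB\n",
              static_cast<unsigned long long>(projected_bytes >> 20));
  return 0;
}
```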

In most cases, the number of sorted runs processed per compaction is the number of L0 files plus the number of non-L0 levels. However, we need to be aware of exceptions like trivial compactions, deletion compactions, and subcompactions. This is a major reason why this PR chooses to implement the stats counting inside `CompactionMergingIterator`, since by the time we get down to that part of the stack, we know the "true" values for the number of input iterators / sorted runs.
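
To illustrate the counting scheme, here is a minimal, self-contained sketch. The class name and the global counter are stand-ins for illustration, not RocksDB's actual internals (the real counter is plumbed through `InternalStats`):

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Stand-in for the PR's separate std::atomic<uint64_t> counter.
std::atomic<uint64_t> g_running_compaction_sorted_runs{0};

// Stand-in for CompactionMergingIterator: by construction time, the "true"
// number of input iterators is known, and compactions that never build a
// merging iterator (e.g. trivial moves) never touch the counter at all.
class SortedRunCountingIterator {
 public:
  explicit SortedRunCountingIterator(uint64_t num_input_iterators)
      : num_input_iterators_(num_input_iterators) {
    g_running_compaction_sorted_runs.fetch_add(num_input_iterators_,
                                               std::memory_order_relaxed);
  }
  ~SortedRunCountingIterator() {
    uint64_t prev = g_running_compaction_sorted_runs.fetch_sub(
        num_input_iterators_, std::memory_order_relaxed);
    assert(prev >= num_input_iterators_);  // counter must never underflow
  }

 private:
  uint64_t num_input_iterators_;
};

int main() {
  {
    SortedRunCountingIterator iter(/*num_input_iterators=*/4);
    assert(g_running_compaction_sorted_runs.load() == 4);
  }  // destructor decrements: the counter returns to 0
  assert(g_running_compaction_sorted_runs.load() == 0);
  return 0;
}
```

The `main` check mirrors the unit test described in the test plan below: the counter starts and ends at 0, with every increment matched by a decrement.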

Alternatives considered:
- https://github.com/facebook/rocksdb/issues/13299 gives you a histogram for the number of sorted runs ("input iterators") for a _single compaction_. While this statistic is interesting and a step in the direction of what we want, we need to assess the memory impact across _all_ compactions that are currently running. Thus, this statistic does not give us all the information we need.
- https://github.com/facebook/rocksdb/issues/13302 gives you the total prefetch buffer memory usage, but it doesn't tell you what happens when the readahead size is increased. Furthermore, the code change is error-prone and very "invasive": look at how many places in the code had to be updated. This would be useful in the future for general memory accounting purposes, but it does not serve our immediate needs.
- https://github.com/facebook/rocksdb/issues/13320 aimed to track the same metric, but did so inside `DBImpl::BackgroundCallCompaction`. It turns out that this does not handle the case where a compaction is divided into multiple subcompactions (in which case there would be _more_ sorted runs being processed at the same time than you would otherwise predict). The current PR handles subcompactions automatically, and I think it is cleaner overall.

Note: When I attempted to put this statistic in the `cf_stats_value_` array, even after updating the array to use `std::atomic<uint64_t>`, I was still able to get assertions to _fail_ inside the crash tests. These assertions checked that the unsigned integer would not underflow below zero during compaction. I experimented for many hours but could not figure out a solution, even though it would seem like things "should" work with `fetch_add` and `fetch_sub`. One possibility is that the values in `cf_stats_value_` are being cleared to 0, but I added an `fprintf` to that portion of the code and did not see it printed before my assertions failed. Regardless, I think this statistic is different enough from the CF-specific and the other DB-wide stats that the best solution is to define it as a separate `std::atomic<uint64_t>`. I also do not want to spend more hours debugging why the crash test assertions break, when the solution in the current version of the PR gets the assertions to pass consistently.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13325

Test Plan:
- I updated one unit test to confirm that `num_running_compaction_sorted_runs` starts and ends at 0. This checks that all the additions and subtractions cancel out. I also made sure the statistic got incremented at least once.
- When I manually added `fprintf` statements, I confirmed that my statistics-updating code was exercised numerous times inside `db_compaction_test`. I printed out the results before and after the increments/decrements, and the numbers looked good.
- We will monitor the generated statistics after this PR is merged.
- There are assertion checks after each increment and before each decrement. If there are bugs, the crash tests will almost certainly find them, since they quickly found issues with my initial implementation for this PR, which tried using the `cf_stats_value_` array (modified to use `std::atomic`).

Reviewed By: anand1976, hx235

Differential Revision: D68527895

Pulled By: archang19

fbshipit-source-id: 135cf210e0ff1550ea28ae4384d429ae620b1784
2025-02-06 13:25:51 -08:00

// Copyright (c) Meta Platforms, Inc. and affiliates.
//
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#pragma once
#include "db/range_del_aggregator.h"
#include "rocksdb/slice.h"
#include "rocksdb/types.h"
#include "table/merging_iterator.h"
namespace ROCKSDB_NAMESPACE {
/*
 * This is a simplified version of MergingIterator and is specifically used for
 * compaction. It merges the input `children` iterators into a sorted stream of
 * keys. Range tombstone start keys are also emitted to prevent oversize
 * compactions. For example, consider an L1 file with content [a, b), y, z,
 * where [a, b) is a range tombstone and y and z are point keys. This could
 * cause an oversize compaction as it can overlap with a wide range of key space
 * in L2.
 *
 * CompactionMergingIterator emits range tombstone start keys from each LSM
 * level's range tombstone iterator, and for each range tombstone
 * [start,end)@seqno, the key will be start@seqno with op_type
 * kTypeRangeDeletion unless truncated at file boundary (see detail in
 * TruncatedRangeDelIterator::start_key()).
 *
 * Caller should use CompactionMergingIterator::IsDeleteRangeSentinelKey() to
 * check if the current key is a range tombstone key.
 * TODO(cbi): IsDeleteRangeSentinelKey() is used for two kinds of keys at
 * different layers: file boundary and range tombstone keys. Separate them into
 * two APIs for clarity.
 */
class CompactionMergingIterator;
class InternalStats;
InternalIterator* NewCompactionMergingIterator(
    const InternalKeyComparator* comparator, InternalIterator** children, int n,
    std::vector<std::pair<std::unique_ptr<TruncatedRangeDelIterator>,
                          std::unique_ptr<TruncatedRangeDelIterator>**>>&
        range_tombstone_iters,
    Arena* arena = nullptr, InternalStats* stats = nullptr);
} // namespace ROCKSDB_NAMESPACE
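
// A hypothetical usage sketch (not part of the header above; `child_iters`,
// `tombstone_iters`, `icmp`, and `internal_stats` are illustrative names):
// a compaction gathers one point-key iterator per input sorted run plus the
// paired range tombstone iterators, then merges them into one stream:
//
//   std::vector<InternalIterator*> child_iters = ...;  // one per sorted run
//   std::vector<std::pair<std::unique_ptr<TruncatedRangeDelIterator>,
//                         std::unique_ptr<TruncatedRangeDelIterator>**>>
//       tombstone_iters = ...;
//   InternalIterator* iter = NewCompactionMergingIterator(
//       &icmp, child_iters.data(), static_cast<int>(child_iters.size()),
//       tombstone_iters, /*arena=*/nullptr, /*stats=*/internal_stats);
//   for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
//     if (iter->IsDeleteRangeSentinelKey()) {
//       // range tombstone start key (or file boundary key); see the class
//       // comment above
//     }
//   }
//   delete iter;  // with a non-null arena the iterator would instead be
//                 // arena-allocated and must not be deleted directly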