Summary: PointLockManager manages point locks per key. The old implementation partitions the per-key locks into 16 stripes, with each stripe handling the point locks for a subset of keys. Each stripe has only one condition variable, shared by every transaction waiting for its turn to acquire a lock on any key in that stripe. In production, we noticed that when multiple transactions try to write to the same key, all of them wait on the same condition variable. When the previous lock holder releases the key, all of the waiters are woken up, but only one of them can proceed; the rest go back to sleep. This wastes a lot of CPU cycles, and the problem becomes even worse when other keys on the same stripe are being locked and unlocked.

To solve this, we implemented a new PerKeyPointLockManager that keeps a transaction waiter queue per key. When a transaction cannot acquire a lock immediately, it joins the key's waiter queue and waits on a dedicated condition variable. When the previous lock holder releases the lock, it wakes up only the next set of transactions in the queue that are eligible to acquire it. The queue respects FIFO order, except that it prioritizes lock upgrade/downgrade operations.

However, the waiter queue increases the cost of deadlock detection, because transactions waiting in the queue must also be considered during detection. To address this, a new deadlock_timeout_us (microseconds) configuration is introduced in the transaction options. When a transaction waits on a lock, it joins the wait queue and waits for up to deadlock_timeout_us without performing deadlock detection. If it still has not acquired the lock when deadlock_timeout_us expires, it then performs deadlock detection and continues waiting until lock_timeout is reached. This optimization relies on the heuristic that the majority of transactions can acquire the lock without ever performing deadlock detection. deadlock_timeout_us needs to be tuned per workload: if the likelihood of deadlock is very low, it can be set a bit higher than the average transaction execution time, so that most transactions acquire the lock without performing deadlock detection; if the likelihood of deadlock is high, it can be set lower, so that deadlocks are detected faster.

The new PerKeyPointLockManager is disabled by default. It can be enabled via TransactionDBOptions.use_per_key_point_lock_mgr. deadlock_timeout_us is only effective when PerKeyPointLockManager is used; when it is set to 0, a transaction performs deadlock detection immediately before waiting.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/13731

Test Plan: Unit tests. A stress test that validates deadlock detection and the exclusive/shared lock guarantees. A new point_lock_bench binary was created to help with performance testing.

Reviewed By: pdillinger

Differential Revision: D77353607

Pulled By: xingbowang

fbshipit-source-id: 21cf93354f9a367a78c8666596ed14013ac7240b
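To make the configuration concrete, here is a minimal sketch of opening a TransactionDB with the new lock manager enabled and setting the per-transaction deadlock timeout. The option names follow the summary above; the database path and the timeout value are illustrative:

#include <cassert>

#include "rocksdb/utilities/transaction.h"
#include "rocksdb/utilities/transaction_db.h"

using namespace ROCKSDB_NAMESPACE;

int main() {
  Options options;
  options.create_if_missing = true;

  // Opt in to the per-key lock manager; it is disabled by default.
  TransactionDBOptions txn_db_options;
  txn_db_options.use_per_key_point_lock_mgr = true;

  TransactionDB* db = nullptr;
  Status s = TransactionDB::Open(options, txn_db_options, "/tmp/txn_db", &db);
  assert(s.ok());

  // Wait up to 100us in the per-key queue before running deadlock
  // detection; 0 means detect immediately before waiting.
  TransactionOptions txn_options;
  txn_options.deadlock_timeout_us = 100;

  Transaction* txn = db->BeginTransaction(WriteOptions(), txn_options);
  s = txn->Put("key", "value");  // takes the point lock on "key"
  assert(s.ok());
  s = txn->Commit();
  assert(s.ok());

  delete txn;
  delete db;
  return 0;
}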
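And to illustrate the lazy deadlock detection heuristic itself, a sketch of the two-phase wait described above. This is not the actual implementation; WaitForLock and DetectDeadlock are hypothetical helpers standing in for the real queue wait and wait-for-graph cycle check:

#include "rocksdb/status.h"

using namespace ROCKSDB_NAMESPACE;

// Hypothetical helpers: wait in the key's waiter queue for up to
// timeout_us, and run the deadlock cycle check, respectively.
bool WaitForLock(int64_t timeout_us);
bool DetectDeadlock();

Status TryAcquireWithLazyDetection(int64_t deadlock_timeout_us,
                                   int64_t lock_timeout_us) {
  // Phase 1: wait without paying for deadlock detection, betting that
  // the lock frees up quickly. With deadlock_timeout_us == 0 this phase
  // is skipped and detection runs immediately before waiting.
  if (deadlock_timeout_us > 0 && WaitForLock(deadlock_timeout_us)) {
    return Status::OK();
  }
  // Phase 2: the cheap wait ran out; check for a deadlock, then keep
  // waiting until the overall lock_timeout budget is exhausted.
  if (DetectDeadlock()) {
    return Status::Busy(Status::SubCode::kDeadlock);
  }
  if (WaitForLock(lock_timeout_us - deadlock_timeout_us)) {
    return Status::OK();
  }
  return Status::TimedOut(Status::SubCode::kLockTimeout);
}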
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include "utilities/transactions/transaction_db_mutex_impl.h"

#include <chrono>
#include <condition_variable>
#include <mutex>
#include <sstream>
#include <thread>

#include "rocksdb/utilities/transaction_db_mutex.h"

namespace ROCKSDB_NAMESPACE {

class TransactionDBMutexImpl : public TransactionDBMutex {
 public:
  TransactionDBMutexImpl() = default;
  ~TransactionDBMutexImpl() override = default;

  Status Lock() override;

  Status TryLockFor(int64_t timeout_time) override;

  void UnLock() override { mutex_.unlock(); }

  friend class TransactionDBCondVarImpl;

 private:
  std::mutex mutex_;
};

class TransactionDBCondVarImpl : public TransactionDBCondVar {
 public:
  TransactionDBCondVarImpl() = default;
  ~TransactionDBCondVarImpl() override = default;

  Status Wait(std::shared_ptr<TransactionDBMutex> mutex) override;

  Status WaitFor(std::shared_ptr<TransactionDBMutex> mutex,
                 int64_t timeout_time) override;

  void Notify() override { cv_.notify_one(); }

  void NotifyAll() override { cv_.notify_all(); }

 private:
  std::condition_variable cv_;
};

std::shared_ptr<TransactionDBMutex>
TransactionDBMutexFactoryImpl::AllocateMutex() {
  return std::shared_ptr<TransactionDBMutex>(new TransactionDBMutexImpl());
}

std::shared_ptr<TransactionDBCondVar>
TransactionDBMutexFactoryImpl::AllocateCondVar() {
  return std::shared_ptr<TransactionDBCondVar>(new TransactionDBCondVarImpl());
}

Status TransactionDBMutexImpl::Lock() {
  mutex_.lock();
  return Status::OK();
}

Status TransactionDBMutexImpl::TryLockFor(int64_t timeout_time) {
  bool locked = true;

  if (timeout_time == 0) {
    locked = mutex_.try_lock();
  } else {
    // Previously, this code used a std::timed_mutex. However, this was changed
    // due to known bugs in gcc versions < 4.9.
    // https://gcc.gnu.org/bugzilla/show_bug.cgi?id=54562
    //
    // Since this mutex isn't held for long and only a single mutex is ever
    // held at a time, it is reasonable to ignore the lock timeout_time here
    // and only check it when waiting on the condition_variable.
    mutex_.lock();
  }

  if (!locked) {
    // timeout acquiring mutex
    return Status::TimedOut(Status::SubCode::kMutexTimeout);
  }

  return Status::OK();
}

Status TransactionDBCondVarImpl::Wait(
    std::shared_ptr<TransactionDBMutex> mutex) {
  auto mutex_impl = static_cast<TransactionDBMutexImpl*>(mutex.get());

  std::unique_lock<std::mutex> lock(mutex_impl->mutex_, std::adopt_lock);
  cv_.wait(lock);

  // Make sure unique_lock doesn't unlock mutex when it destructs
  lock.release();

  return Status::OK();
}

Status TransactionDBCondVarImpl::WaitFor(
    std::shared_ptr<TransactionDBMutex> mutex, int64_t timeout_time) {
  Status s;

  auto mutex_impl = static_cast<TransactionDBMutexImpl*>(mutex.get());
  std::unique_lock<std::mutex> lock(mutex_impl->mutex_, std::adopt_lock);

  if (timeout_time < 0) {
    // If timeout is negative, do not use a timeout
    cv_.wait(lock);
  } else {
    auto duration = std::chrono::microseconds(timeout_time);
    auto cv_status = cv_.wait_for(lock, duration);

    // Check if the wait stopped due to timing out.
    if (cv_status == std::cv_status::timeout) {
      s = Status::TimedOut(Status::SubCode::kMutexTimeout);
    }
  }

  // Make sure unique_lock doesn't unlock mutex when it destructs
  lock.release();

  // CV was signaled, or we spuriously woke up (but didn't time out)
  return s;
}

} // namespace ROCKSDB_NAMESPACE
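For context on how the classes above are reached: this file provides the stock factory behind TransactionDBOptions::custom_mutex_factory, which TransactionDB falls back to when that field is left unset. Below is a minimal, single-threaded sketch of exercising the factory directly; with no other thread notifying, the timed wait simply times out:

#include <memory>

#include "rocksdb/utilities/transaction_db.h"
#include "utilities/transactions/transaction_db_mutex_impl.h"

using namespace ROCKSDB_NAMESPACE;

int main() {
  TransactionDBMutexFactoryImpl factory;
  std::shared_ptr<TransactionDBMutex> mutex = factory.AllocateMutex();
  std::shared_ptr<TransactionDBCondVar> cv = factory.AllocateCondVar();

  mutex->Lock();
  // WaitFor releases the mutex while blocking and reacquires it before
  // returning. With no other thread calling Notify(), this returns
  // Status::TimedOut after roughly 1000 microseconds.
  Status s = cv->WaitFor(mutex, 1000 /* timeout in microseconds */);
  mutex->UnLock();

  // A custom factory can also be installed explicitly:
  TransactionDBOptions txn_db_options;
  txn_db_options.custom_mutex_factory =
      std::make_shared<TransactionDBMutexFactoryImpl>();
  return s.IsTimedOut() ? 0 : 1;
}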