SkipLists and Lock-Free Concurrency: A Deep Dive into Rust's Implementation

Introduction

In our journey through the LSM (Log-Structured Merge-tree) storage engine series, we've explored various components that make these systems efficient and robust. In Part 2 of our series, we introduced the MemTable, a critical component implemented using the crossbeam-skiplist crate in Rust. This implementation leverages the power of SkipLists to achieve remarkably high-performance concurrency.

But what exactly is a SkipList? How does it enable lock-free concurrency, and why does it perform so well in multi-threaded environments? In this post, we'll peel back the layers of this fascinating data structure and its implementation in Rust, exploring the mechanics that make it a go-to choice for high-performance, concurrent systems like our LSM storage engine.

What is a SkipList?

A SkipList is a probabilistic data structure that allows for fast search within an ordered sequence of elements. It consists of a hierarchy of linked lists, where:

  1. The bottom layer is a regular sorted linked list containing all elements.
  2. Each higher layer acts as an "express lane" for the lists below, skipping over some elements.
  3. The probability of an element appearing in each level decreases exponentially with the level.

This structure allows for O(log n) search times on average, as the search can quickly skip large portions of the list using the higher levels. The space complexity of a SkipList is O(n), which is comparable to other efficient search structures.
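
To make the probabilistic part concrete, here's a minimal sketch of how a node's height is typically chosen, assuming a promotion probability of 1/2 and the rand crate; the random_height function name and the cap of 12 levels are just illustrative.

use rand::Rng;

// Flip a fair coin until it comes up "tails": a node reaches level L with
// probability (1/2)^L, which is what gives the SkipList its O(log n) search.
fn random_height(max_height: usize) -> usize {
    let mut rng = rand::thread_rng();
    let mut height = 1;
    while height < max_height && rng.gen_bool(0.5) {
        height += 1;
    }
    height
}

fn main() {
    // Most nodes stay at height 1; roughly half reach 2, a quarter reach 3, and so on.
    let heights: Vec<usize> = (0..10).map(|_| random_height(12)).collect();
    println!("{:?}", heights);
}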

The Challenge of Lock-Free Data Structures

Implementing lock-free data structures presents a significant challenge: how to manage memory safely when multiple threads can access and modify the structure concurrently. This is where atomic operations and epoch-based reclamation come into play.

Compare-And-Swap: The Foundation of Lock-Free Algorithms

At the heart of many lock-free algorithms lies a powerful CPU instruction: Compare-And-Swap (CAS). This atomic operation is crucial for implementing lock-free data structures. CAS allows a thread to check if a memory location contains an expected value and, if so, update it to a new value, all in a single, uninterruptible step.

In Rust, CAS is typically used through the compare_exchange method on atomic types. Here's a simplified example:

use std::sync::atomic::{AtomicUsize, Ordering};

let atomic = AtomicUsize::new(5);

// Succeeds only if the value is still 5; otherwise returns the value actually found.
match atomic.compare_exchange(5, 10, Ordering::SeqCst, Ordering::SeqCst) {
    Ok(_) => println!("Value updated successfully"),
    Err(actual) => println!("Update failed, current value is {}", actual),
}

You might notice the Ordering parameter in the compare_exchange method. This specifies the memory ordering constraints for the operation, which is crucial for correct behavior in concurrent scenarios. Choosing the right memory ordering is a complex topic that requires careful consideration. For those interested in delving deeper into this subject, Chapter 3: Memory Ordering in the book "Rust Atomics and Locks" by Mara Bos provides an excellent, in-depth exploration.
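
To make the two Ordering parameters concrete, here's a minimal sketch of the classic CAS retry loop on an AtomicUsize. The increment helper is hypothetical (std's fetch_add does the same job in a single call), and the particular orderings shown are just one reasonable choice.

use std::sync::atomic::{AtomicUsize, Ordering};

// Keep retrying until our compare_exchange_weak succeeds; on failure we are
// handed the value another thread wrote, and we base the next attempt on it.
fn increment(counter: &AtomicUsize) {
    let mut current = counter.load(Ordering::Relaxed);
    loop {
        match counter.compare_exchange_weak(
            current,
            current + 1,
            Ordering::SeqCst,  // ordering applied if the exchange succeeds
            Ordering::Relaxed, // ordering applied if it fails
        ) {
            Ok(_) => break,
            Err(actual) => current = actual, // another thread won the race; retry
        }
    }
}

fn main() {
    let counter = AtomicUsize::new(0);
    increment(&counter);
    assert_eq!(counter.load(Ordering::Relaxed), 1);
}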

Epoch-Based Reclamation: The Crossbeam-epoch Crate

Before diving into the SkipList implementation, it's crucial to understand the epoch-based reclamation technique used by the crossbeam-epoch crate.

Introduction to Epochs

In concurrent programming, an epoch is a period of time during which certain operations or states are considered valid. Epoch-based reclamation is a technique used for safe memory management in lock-free data structures. It allows threads to safely determine when it's okay to reclaim memory that's no longer in use.

The key idea is to divide time into epochs and ensure that memory is only reclaimed when no thread could possibly be accessing it. This approach provides a balance between performance and safety in concurrent systems.

How Crossbeam-epoch Works

The crossbeam-epoch crate implements epoch-based reclamation through a combination of a global epoch counter, per-thread (local) epochs, pinning operations, and deferred destruction. A thread "pins" itself to the current global epoch before touching shared data, and memory retired by one thread is not actually freed until the global epoch has safely advanced past the epoch in which it was retired, which can only happen once every thread that might still hold a reference has unpinned. This allows for efficient concurrent access without expensive synchronization on every operation.
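
As a rough illustration, here's a small sketch of the crate's core API in isolation: pin the epoch, swap a value behind an Atomic pointer, and defer freeing the old one. The replace function and the String payload are purely illustrative.

use std::sync::atomic::Ordering;

use crossbeam_epoch::{self as epoch, Atomic, Owned};

// Replace the value behind `slot` and defer destruction of the old value
// until no pinned thread can still be reading it.
fn replace(slot: &Atomic<String>, new_value: String) {
    // Pinning registers this thread in the current epoch.
    let guard = epoch::pin();

    // Atomically swap in the new value and get the previous pointer back.
    let old = slot.swap(Owned::new(new_value), Ordering::AcqRel, &guard);

    if !old.is_null() {
        // Don't free immediately: another pinned thread may still hold a
        // reference. defer_destroy schedules the drop for a later, safe time.
        unsafe { guard.defer_destroy(old) };
    }
}

fn main() {
    let slot = Atomic::new(String::from("old value"));
    replace(&slot, String::from("new value"));

    // Read the value back (under a pin) to confirm the swap took effect.
    let guard = epoch::pin();
    let current = slot.load(Ordering::Acquire, &guard);
    assert_eq!(unsafe { current.deref() }, "new value");
    // (The final String is intentionally not reclaimed here to keep the
    // sketch short; a real data structure frees its contents in Drop.)
}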

Lock-Free SkipList Implementation Using Crossbeam-epoch

Now let's examine how a lock-free SkipList can be implemented with these concepts, following the approach taken by the crossbeam-skiplist crate. The snippets below are simplified for clarity rather than taken verbatim from the crate's source.

SkipList Structure

use std::cell::RefCell;
use std::sync::atomic::AtomicUsize;

use crossbeam_epoch::Atomic;
use rand::rngs::SmallRng;
use thread_local::ThreadLocal; // ThreadLocal here is assumed to come from the thread_local crate

// A simplified view of the SkipList's layout.
pub struct SkipList<K, V> {
    head: Atomic<Node<K, V>>,            // sentinel node at the front of the list
    tail: Atomic<Node<K, V>>,            // sentinel node at the end of the list
    height: AtomicUsize,                 // number of levels currently in use
    rng: ThreadLocal<RefCell<SmallRng>>, // per-thread RNG for choosing node heights
}

struct Node<K, V> {
    key: K,
    value: Atomic<V>,              // the value, replaceable atomically
    next: Vec<Atomic<Node<K, V>>>, // one forward pointer per level (the node's "tower")
    prev: Atomic<Node<K, V>>,      // back pointer at the bottom level
}

This simplified structure shows that the SkipList maintains head and tail sentinel pointers, the current height, and a thread-local random number generator used to choose node heights. Each node contains its key, an atomically replaceable value, a vector of next pointers (one per level, forming the node's tower), and a prev pointer.

Insertion

Here's a simplified version of the insert method:

use std::sync::atomic::Ordering;

use crossbeam_epoch::{self as epoch, Atomic, Owned};

impl<K, V> SkipList<K, V>
where
    K: Ord + Clone,
    V: Clone,
{
    pub fn insert(&self, key: K, value: V) -> Option<V> {
        // Pin the current epoch so nodes we touch cannot be freed under us.
        let guard = &epoch::pin();

        'retry: loop {
            // `head` is a sentinel node, so dereferencing it is always valid.
            let mut pred = unsafe { self.head.load(Ordering::Acquire, guard).deref() };
            let mut level = self.height.load(Ordering::Relaxed);
            let mut tower = Vec::new();

            // Search for the insertion point: walk right on each level while
            // keys are smaller, then drop down a level.
            while level > 0 {
                level -= 1;
                let mut curr = pred.next[level].load(Ordering::Acquire, guard);

                while let Some(current) = unsafe { curr.as_ref() } {
                    if current.key < key {
                        pred = current;
                        curr = current.next[level].load(Ordering::Acquire, guard);
                    } else {
                        break;
                    }
                }

                // One slot per level for the new node's forward pointers.
                // (The real implementation picks a random tower height and
                // records the successor found at each level.)
                tower.push(Atomic::null());
            }

            // Create the new node.
            let node = Node {
                key: key.clone(),
                value: Atomic::new(value.clone()),
                next: tower,
                prev: Atomic::null(),
            };
            let new = Owned::new(node).into_shared(guard);

            // Link the new node to its successor, then try to splice it in
            // at the bottom level with a single compare-and-exchange.
            let next = pred.next[0].load(Ordering::Acquire, guard);
            unsafe { new.deref() }.next[0].store(next, Ordering::Relaxed);

            if pred.next[0]
                .compare_exchange(next, new, Ordering::Release, Ordering::Relaxed, guard)
                .is_ok()
            {
                // Update the higher-level pointers and potentially raise the height
                // ... (omitted for brevity)
                return None;
            } else {
                // Another thread modified this part of the list: schedule the
                // unused node for deferred destruction and retry from the top.
                unsafe { guard.defer_destroy(new) };
                continue 'retry;
            }
        }
    }
}

This insertion method demonstrates several key aspects of lock-free operations:

  1. Epoch-based Protection: The epoch::pin() call at the beginning creates a guard that ensures no memory is freed while this operation is in progress.
  2. Optimistic Insertion: The method first searches for the insertion point without any locks, preparing for the insertion.
  3. Compare-and-Exchange: The actual insertion is performed using a compare-and-exchange operation, which is the core of the lock-free algorithm.
  4. Retry Mechanism: If the insertion fails (due to concurrent modifications), the entire operation is retried from the beginning.
  5. Memory Management: If the insertion fails, the newly created node is scheduled for deferred destruction using guard.defer_destroy().
  6. No Blocking: At no point does this method block or wait for a lock, allowing for high concurrency.

This approach allows multiple threads to attempt insertions simultaneously. If there's a conflict, the compare-and-exchange will fail for all but one thread, and those threads will retry their operations.
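
To see what this buys us from the user's side, here's a small usage sketch with crossbeam-skiplist's public SkipMap type (the crate's ready-made lock-free map): several threads insert into the same map through a shared reference, with no locks and no &mut access. The key and value types are arbitrary.

use crossbeam_skiplist::SkipMap;
use std::thread;

fn main() {
    let map = SkipMap::new();

    thread::scope(|s| {
        for t in 0..4 {
            let map = &map;
            s.spawn(move || {
                for i in 0..1_000 {
                    // insert takes &self, so concurrent calls from many
                    // threads are allowed without any external locking.
                    map.insert((t, i), t * 1_000 + i);
                }
            });
        }
    });

    // All 4 * 1,000 distinct keys made it in.
    assert_eq!(map.len(), 4_000);
}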

Retrieval

Here's a similarly simplified version of the get method:

use std::borrow::Borrow;
use std::cmp;
use std::sync::atomic::Ordering;

use crossbeam_epoch as epoch;

impl<K, V> SkipList<K, V>
where
    K: Ord + Clone,
    V: Clone,
{
    pub fn get<Q>(&self, key: &Q) -> Option<V>
    where
        K: Borrow<Q>,
        Q: Ord + ?Sized,
    {
        // Pin the epoch so nodes we read cannot be reclaimed during traversal.
        let guard = &epoch::pin();
        let mut pred = unsafe { self.head.load(Ordering::Acquire, guard).deref() };
        let mut level = self.height.load(Ordering::Relaxed);

        // Start at the highest level and walk right while keys are smaller,
        // dropping down a level whenever we overshoot or reach the end.
        while level > 0 {
            level -= 1;
            loop {
                let succ = pred.next[level].load(Ordering::Acquire, guard);
                let curr = match unsafe { succ.as_ref() } {
                    Some(node) => node,
                    None => break, // end of this level: drop down
                };

                match curr.key.borrow().cmp(key) {
                    cmp::Ordering::Less => pred = curr,
                    cmp::Ordering::Equal => {
                        // Found the key: load the value and clone it out.
                        let value = curr.value.load(Ordering::Acquire, guard);
                        return unsafe { value.as_ref() }.cloned();
                    }
                    cmp::Ordering::Greater => break, // overshot: drop down
                }
            }
        }
        None
    }
}

This method demonstrates several key aspects of safe concurrent traversal:

  1. Epoch-based Protection: The epoch::pin() call at the beginning creates a guard that ensures no memory observed during this operation is freed while it is in progress.
  2. Atomic Loads: All node accesses use atomic loads with Ordering::Acquire, ensuring that when we observe a node, we also observe the writes that were published before it became reachable.
  3. Immutable Traversal: The method only reads data, never modifying the list structure. This means it can safely run concurrently with other operations without the need for locks.
  4. Handling Concurrent Modifications: If a node is deleted during traversal, the next pointers simply skip over it. If a new node is inserted concurrently, this traversal might miss it, which is acceptable in a lock-free context: the result still reflects a consistent, recent state of the list.
  5. Safe Memory Access: The use of Shared pointers and the guard ensures that even if a node is logically deleted, its memory remains valid for the duration of this operation.
  6. Optimistic Traversal: The method starts from the highest level and works its way down, potentially skipping large portions of the list, which is key to the SkipList's efficiency.

This approach allows for efficient, safe traversal in a highly concurrent environment. Multiple threads can perform get operations simultaneously without blocking each other, and insertions or deletions happening concurrently will not cause the traversal to fail or return inconsistent results.
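
Here's a matching usage sketch with SkipMap, showing readers running alongside a writer: the get calls never block, and each Entry they return stays valid for as long as it is held. The specific keys and values are just for illustration.

use crossbeam_skiplist::SkipMap;
use std::thread;

fn main() {
    let map: SkipMap<u64, String> = SkipMap::new();
    map.insert(1, "one".to_string());

    thread::scope(|s| {
        // Writer: keeps adding new keys while the readers traverse.
        s.spawn(|| {
            for i in 2..1_000u64 {
                map.insert(i, i.to_string());
            }
        });

        // Readers: look up a key concurrently, without locks.
        for _ in 0..4 {
            s.spawn(|| {
                if let Some(entry) = map.get(&1) {
                    assert_eq!(entry.value(), "one");
                }
            });
        }
    });
}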

The Power of Lock-Free SkipLists

The lock-free SkipList implementation we've explored offers several key advantages. By leveraging atomic operations and compare-and-swap (CAS) instructions, it eliminates the need for traditional locks or mutexes. This approach ensures that at least one thread can always make progress, even if others are suspended, providing a strong progress guarantee. The use of epoch-based memory management allows for safe memory reclamation without blocking, further enhancing performance. These characteristics result in a highly concurrent data structure that scales well with the number of cores and remains responsive even under high contention. The absence of locks means reduced overhead and improved throughput, making it an excellent choice for high-performance systems like our LSM storage engine's MemTable.

Conclusion

The combination of SkipList's inherent parallelism-friendly structure and Crossbeam's epoch-based reclamation results in a highly efficient, lock-free concurrent data structure. This implementation showcases how modern programming languages and libraries can leverage advanced concepts like Compare-And-Swap operations and epoch-based reclamation to create high-performance, safe concurrent systems.

As we continue to move towards increasingly parallel computing environments, data structures like this lock-free SkipList will play a crucial role in building efficient and scalable systems. In the context of our LSM storage engine, the use of crossbeam-skiplist for the MemTable implementation allows us to handle concurrent reads and writes with remarkable efficiency, contributing significantly to the overall performance of our storage system.

Understanding these underlying mechanisms not only helps in appreciating the elegance of the solution but also equips us with the knowledge to make informed decisions when designing and implementing high-performance concurrent systems. The lock-free SkipList serves as a prime example of how careful design and modern hardware capabilities can be harnessed to create data structures that thrive in concurrent environments.

Haili Zhang