# Skip lists

There were a number of additions to the Java collections library in Java SE 6:

- A new **d**ouble-**e**nded queue interface called Deque (pronounced “deck”) and its concurrent counterpart BlockingDeque, with implementations like ArrayDeque and LinkedBlockingDeque.
- New and improved interfaces for sorted sets and maps called NavigableSet and NavigableMap, with several prior implementations like TreeSet and TreeMap retrofitted to support them.
- New combined interfaces for concurrent **and** sorted collections: ConcurrentNavigableSet and ConcurrentNavigableMap, plus two new concurrent sorted implementations called ConcurrentSkipListSet and ConcurrentSkipListMap.

The latter two piqued my curiosity a while back when I started preparing for my recent collections talk at NFJS. What the heck is a “skip list” anyways?

I did some digging and became fairly fascinated with this data structure. It was actually invented relatively recently (1990) and by none other than Bill Pugh, well known in the Java community for his work on the Java Memory Model, FindBugs, etc.

It’s easiest to build up the concept of a skip list starting from a simple linked list:

Linked lists are great for insertion and removal but not so hot when you need to check containment. In that case, you have to walk through the list until you find a match (or don’t), which is O(n). If the list is sorted, you still have to walk through the list, but you can sometimes stop sooner in the failure case if you reach a value bigger than the one being searched for.
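To make that concrete, here’s a minimal sketch of a sorted singly linked list (the names are mine, not the JDK’s): containment is still O(n), but in the failure case we can stop as soon as we pass a value bigger than the target.

```java
// A minimal sketch of a sorted singly linked list of ints.
// Searching is O(n), but a sorted order lets us bail out early on a miss.
class SortedLinkedList {
    static final class Node {
        final int value;
        Node next;
        Node(int value) { this.value = value; }
    }

    Node head; // smallest element first

    // Insert keeping the list sorted: O(n) to find the spot, O(1) to link.
    void insert(int value) {
        Node node = new Node(value);
        if (head == null || value < head.value) {
            node.next = head;
            head = node;
            return;
        }
        Node cur = head;
        while (cur.next != null && cur.next.value < value) {
            cur = cur.next;
        }
        node.next = cur.next;
        cur.next = node;
    }

    // O(n) containment check; stops early once we pass the target.
    boolean contains(int target) {
        for (Node cur = head; cur != null && cur.value <= target; cur = cur.next) {
            if (cur.value == target) return true;
        }
        return false;
    }
}
```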

A wonderful analogy to use here (which I stole completely from Prof. Leiserson’s skip list lecture (mp3)) is that of a subway line. Picture the sorted linked list as a subway line that you are riding to find your stop. Our goal is to make it faster to ride the line, while still being able to service all of the stops.

We look here for inspiration to real-life subway lines and how they provide a faster ride: they add an extra express line that *doesn’t stop at all the stops*:

A subset of the stops are duplicated up onto an express line. Our algorithm then becomes: ride the express line until we’ve gone too far, go back a stop, and drop down to the main line until we reach our stop. We need to decide how many stops to put on the express line, of course, and the intuition is to balance the time we spend riding the express line against the time on the main line. It turns out that putting half of the stops on the express line is the right choice (strictly speaking, with only two lines, about √n evenly spaced express stops would minimize travel time, but halving is what pays off once we keep adding lines, as we’re about to).

This is good and seems like it makes things a bit faster, but not much faster. How can we make it better? Why stop at express? We can add a turbo line!

It turns out, again, that adding a turbo line works best when we move half of the express line’s stops up to it. Our algorithm expands: start on the turbo line, go as far as possible, change to the express line, go as far as possible, and finally change to the main line and get off at our stop.

Where do we stop? Again, there is math involved, none of which I will reproduce, but it turns out the ideal number of lines is log(n). This should start to set off some bells in your head if you’ve spent any time messing with trees. The height of a balanced binary tree is log(n). If you squint a bit, you can see that this set of lists starts small, then has twice as many elements at each level, kind of like….a tree! But not a tree. You have to squint, like I said.

How does this affect our search time? In an ideal skip list structure, search time is O(log n), certainly better than the O(n) we started with. Perhaps more interestingly, the search time is O(log n) with extremely high probability. Even in the case of a non-existent element, we can discover that in O(log n) time.
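The ride-each-line-as-far-as-possible search can be sketched in a few lines of Java. This is a hand-rolled illustration, not how ConcurrentSkipListMap is written: each node carries an array of forward pointers, one per line, with index 0 as the main line and a sentinel head node that appears on every line.

```java
// A sketch of skip list search. Each node has one forward pointer per
// "line" (level); next[0] is the main line, which contains every element.
class SkipSearch {
    static final class Node {
        final int value;
        final Node[] next;
        Node(int value, int levels) {
            this.value = value;
            this.next = new Node[levels];
        }
    }

    // Start on the highest line, ride as far as possible without passing
    // the target, then drop down a line and repeat. After level 0, the
    // next node on the main line is the only possible match.
    static boolean contains(Node head, int target) {
        Node cur = head; // head is a sentinel present on every line
        for (int level = head.next.length - 1; level >= 0; level--) {
            while (cur.next[level] != null && cur.next[level].value < target) {
                cur = cur.next[level]; // express ride on this line
            }
        }
        Node candidate = cur.next[0];
        return candidate != null && candidate.value == target;
    }
}
```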

Of course, this only works in an “ideal” skip list. How do we handle inserts and removals to create that skip list? Close your eyes and brainstorm for a second….

If you gave it a shot, you probably came up with some ideas about looking at the size of each list and/or spacing in each line and trying to make a decision about when to move new items up through the lines.

It turns out that there is an even easier answer. Our ideal structure requires about 1/2 of the elements on line n to also appear on line n+1. So our algorithm for insertion says: add the new item to the main line (it has to be there) and then flip a coin. If the coin comes up heads, we duplicate it up to the next line and flip again; if that line doesn’t exist yet, we create it. If we get tails, we stop. In this way, we build the ideal skip list structure (in expectation) purely by making local random decisions. In other words, the global structure is emergent. Very cool.
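The coin-flipping step is small enough to show directly. Here’s a sketch of how an implementation might pick the number of lines a new element lives on (the method and parameter names are mine); each heads promotes the element one line higher, so about half the elements reach line 2, a quarter reach line 3, and so on.

```java
import java.util.Random;

class CoinFlipLevels {
    // Flip a fair coin: each heads promotes the new element one line higher.
    // maxLevel caps growth so a lucky streak can't build an absurd tower.
    static int randomLevel(Random rng, int maxLevel) {
        int level = 1; // every element lives on the main line
        while (level < maxLevel && rng.nextBoolean()) {
            level++; // heads: duplicate this stop onto the next line up
        }
        return level;
    }
}
```

Note that the expected number of pointers per element is only 2, so the express lines cost a constant factor in space, not a logarithmic one.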

Removal entails just removing the item from all of the lines it appears on. Insertion and removal both run in O(log n) expected time.

Another very nice property of this data structure is that if we’re careful, we can perform insert and removal operations without locking the global data structure. At every point, we are just modifying a linked list, which can be done by carefully manipulating the pointer structure and using CAS (compare-and-swap) operations like those available on the Atomic classes in java.util.concurrent.atomic. Algorithms like this are called lock-free and make **no** use of Java synchronized or wait/notify.
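To see the CAS retry loop in miniature, here’s a sketch of a lock-free push onto a linked list using AtomicReference.compareAndSet. It is vastly simpler than what ConcurrentSkipListMap does, but it shows the same basic shape: read the current state, prepare the change, CAS it in, and retry if another thread won the race.

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal lock-free linked structure (a Treiber-style stack), far
// simpler than ConcurrentSkipListMap but built on the same CAS idea.
class LockFreeStack<T> {
    private static final class Node<T> {
        final T value;
        final Node<T> next;
        Node(T value, Node<T> next) { this.value = value; this.next = next; }
    }

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    void push(T value) {
        while (true) {
            Node<T> current = head.get();
            Node<T> node = new Node<>(value, current);
            // compareAndSet succeeds only if head is still `current`;
            // otherwise another thread moved it, and we loop to retry.
            if (head.compareAndSet(current, node)) {
                return;
            }
        }
    }

    T peek() {
        Node<T> h = head.get();
        return h == null ? null : h.value;
    }
}
```

No synchronized, no wait/notify: a failed CAS never blocks, it just means someone else made progress first.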

CAS operations map to very fast hardware instructions and, most importantly, they do not block. They are also generally very tricky to use correctly. If you’re interested in the details, I would invite you to look at the source code for ConcurrentSkipListMap, which is probably the best piece of complex Java code I’ve ever read. It is a truly beautiful piece of work. You could teach a class based on this source file alone on how to write complex code yet make it approachable and maintainable. Major major kudos to the JSR 166 group for giving us a gift like this in the JDK. Seriously, I would urge you to spend some time looking at this.

Note that while this data structure and implementation rock for concurrent sorted maps/sets, something like ConcurrentHashMap is way better if you don’t need things sorted. But that’s perhaps the topic for another blog…

Further resources:

yup, fascinating data structure….wtg Bill Pugh

This blog post helped me today. Thanks for writing it.

But still, the list needs to be sorted to find the right element. Therefore one could use a sorted Set/Map and the binary search algorithm, which will find the element in O(log n) as well. Where’s the advantage in using SkipMap?

@Spygroove: I’m not sure what alternative you’re suggesting. The extra skip lists are the thing that allows you to do the “binary search algorithm” part. If you just have a linked list w/o the skip lists, you can’t “skip” to the right part and you degrade to O(n). If you use a positional data structure like an array instead, then you have killed your insert performance (back to O(n)).

If you want to compare it to something, it is useful to compare it to tree-based sorted structures (like red-black trees). Skip lists have similar performance characteristics, but a) it can be proven that they have tighter bounds (generally better performance), and b) skip list updates are localized, yielding far greater concurrency than most tree-based structures, whose rebalancing can touch large portions of the tree.