## True Story Follows

So I’ve been playing around with skew binomial queues in Go, and the results so far have been impractical for performance reasons. Dequeueing in particular is quite slow, even though the time complexity is logarithmic. To give a watered-down explanation, the dequeue operation involves removing a root node and then merging all of its children, and the merges are what create the logarithmic complexity. This suggests a few ideas:

• In practice, if I only want to dequeue one item, then ideally the merge operation would happen in the background after I’ve grabbed the item I want from the data structure.
• If I have multiple threads consuming from the queue, I could theoretically consume from the root, then start consuming from its children as well, and continue deferring the merge operation.
• Eventually, dequeueing from a root’s children would become increasingly inefficient, since every dequeue operation adds more child nodes to the list of nodes we need to scan.
• Therefore, in a world of concurrent operations, there is some point at which an equilibrium could exist, where some optimal number of free list nodes existed as merge operations occurred in the background. Put another way, when individual merges become cheaper than scanning through a linked list to dequeue the highest-priority item, we have reached equilibrium.

Anyway, the point is, the above idea presumes that we have a data structure that’s extremely cheap for multiple workers to append to and dequeue from. Locks can be expensive, so rather than lock for every mutation of the list, it would be better to use compare-and-swap operations to atomically modify nodes. Inserting is actually fairly intuitive, but deleting quickly becomes problematic. I thought about this for several hours, but then it turns out there was a Wikipedia article about it the whole time. Here I was thinking I was being innovative with computer science, but if you ever think you’re smart, there’s always someone smarter than you. That’s why you should lift weights. Because then, usually, if you meet the people that are smarter than you, there’s a very high probability that you can bench press more than them, and now you’re on equal footing.

## Intro to Compare and Swap: Spinlocks

The core idea behind lock-free data structures is to take advantage of atomic operations offered by the processor. There are a handful of such operations, but the one I care about is compare and swap: we set a memory address to a particular value only if its current value equals the expected value we provide in the operation. So peep this spinlock:

```go
package main

import (
	"sync/atomic"
)

type SpinLock struct {
	state int32
}

func (s *SpinLock) TryLock() bool {
	return atomic.CompareAndSwapInt32(&s.state, 0, 1)
}

func (s *SpinLock) IsLocked() bool {
	// Use an atomic load so this read is safe alongside concurrent TryLock calls.
	return atomic.LoadInt32(&s.state) == 1
}

func (s *SpinLock) Unlock() {
	atomic.StoreInt32(&s.state, 0)
}
```

In C++ you can use atomic flags, which compare and swap booleans; Go’s atomic package offers less out of the box, but you can still accomplish the same things. With this, you can effectively lock a resource without making a system call and dragging the operating system’s scheduler into it. To use the above struct, you just loop as long as TryLock() fails. The assumption here is that the resource will only be locked for a very short duration.

Suppose we have a simple linked list:

```go
package main

import (
	"unsafe"
)

type ThreadSafeList struct {
	head *listNode
}

type listNode struct {
	markableNext *markablePointer
	object       unsafe.Pointer
}

type markablePointer struct {
	marked bool
	next   *listNode
}
```

Inserting a new element is fairly straightforward. Suppose I want to insert a new node between nodes “Before Node” and “After Node”. I can create a new node that points to “After Node”. Then, I can compare and swap the next pointer of “Before Node” to my new node. If the operation succeeds, we can stop there, go ahead and meet the bros down at the keg, and exchange high fives. If the operation fails, we can run our hands through our hair and express great frustration, and then simply redo the operation; the node that we created initially will be garbage collected and eviscerated from this world as we know it, only to have existed for a brief moment in time like dust in the wind.

Deleting gets a little bit bananas, and that’s why we have the “markablePointer”. Suppose we have “Before Node”, “Delete Me Node”, and “After Node”. To delete the middle node, I could update the “Before Node” next pointer to reference “After Node”. But what if, at that exact moment in time, in that exact same process, on some other random thread, “Annoying Node” was appended after “Delete Me Node”? Now our delete operation has deleted two separate nodes.

What we can do instead is to soft delete the node in question initially, and then subsequently hard delete the node. We can do this by marking the next pointer in the “Delete Me Node”. Now any concurrent threads will know they are not allowed to append anything to “Delete Me Node”, but they can still use it to traverse the list. And now, it’s safe to link from “Before Node” to “After Node”. If any of these operations fail during a compare and swap, we just start over.

## Example Insert Code

```go
package main

import (
	"sync/atomic"
	"unsafe"
)

func (t *ThreadSafeList) InsertObject(object unsafe.Pointer, lessThanFn func(unsafe.Pointer, unsafe.Pointer) bool) {
	currentHead := t.head
	// New head if the list is empty or the new object sorts first.
	if currentHead == nil || lessThanFn(object, currentHead.object) {
		newNode := listNode{
			object: object,
			markableNext: &markablePointer{
				next: currentHead,
			},
		}
		operationSucceeded := atomic.CompareAndSwapPointer(
			(*unsafe.Pointer)(unsafe.Pointer(&t.head)),
			unsafe.Pointer(currentHead),
			unsafe.Pointer(&newNode),
		)
		if !operationSucceeded {
			// Someone else won the race; start over.
			t.InsertObject(object, lessThanFn)
		}
		return
	}

	cursor := currentHead
	for {
		if cursor.markableNext.next == nil || lessThanFn(object, cursor.markableNext.next.object) {
			currentNext := cursor.markableNext
			if currentNext.marked {
				// cursor has been soft deleted out from under us; start over.
				t.InsertObject(object, lessThanFn)
				return
			}
			newNode := listNode{
				object: object,
				markableNext: &markablePointer{
					next: currentNext.next,
				},
			}
			newNext := markablePointer{
				next: &newNode,
			}
			operationSucceeded := atomic.CompareAndSwapPointer(
				(*unsafe.Pointer)(unsafe.Pointer(&cursor.markableNext)),
				unsafe.Pointer(currentNext),
				unsafe.Pointer(&newNext),
			)
			if !operationSucceeded {
				t.InsertObject(object, lessThanFn)
			}
			return
		}
		cursor = cursor.markableNext.next
	}
}
```

## Example Delete Code

```go
package main

import (
	"sync/atomic"
	"unsafe"
)

func (t *ThreadSafeList) DeleteObject(object unsafe.Pointer) {
	var previous *listNode
	cursor := t.head
	for {
		if cursor == nil {
			break
		}
		if cursor.object == object {
			nextNode := cursor.markableNext.next
			// Step 1, soft delete: mark the node so nobody appends to it.
			newNext := markablePointer{
				marked: true,
				next:   nextNode,
			}
			operationSucceeded := atomic.CompareAndSwapPointer(
				(*unsafe.Pointer)(unsafe.Pointer(&cursor.markableNext)),
				unsafe.Pointer(cursor.markableNext),
				unsafe.Pointer(&newNext),
			)
			if !operationSucceeded {
				t.DeleteObject(object)
				return
			}
			// Step 2, hard delete: link past the marked node. Use a fresh
			// markablePointer; reusing newNext would unmark the node in place.
			unmarkedNext := markablePointer{
				next: nextNode,
			}
			if previous != nil {
				operationSucceeded = atomic.CompareAndSwapPointer(
					(*unsafe.Pointer)(unsafe.Pointer(&previous.markableNext)),
					unsafe.Pointer(previous.markableNext),
					unsafe.Pointer(&unmarkedNext),
				)
			} else {
				// No previous node means we are deleting the head itself.
				operationSucceeded = atomic.CompareAndSwapPointer(
					(*unsafe.Pointer)(unsafe.Pointer(&t.head)),
					unsafe.Pointer(cursor),
					unsafe.Pointer(nextNode),
				)
			}
			if !operationSucceeded {
				t.DeleteObject(object)
			}
			return
		}

		previous = cursor
		cursor = cursor.markableNext.next
	}
}
```

## Conclusion

Thread safe non-blocking linked lists are great except for when technology goes horribly wrong.