
Non-blocking algorithm (non-blocking algorithms, and algorithmic implementations of non-blocking synchronization)

2014-08-26 09:17


Non-blocking algorithm

In computer science, a non-blocking algorithm ensures that threads competing
for a shared resource do not have their execution indefinitely
postponed by mutual exclusion. A non-blocking algorithm is lock-free if there is guaranteed system-wide progress regardless
of scheduling, and wait-free if there is also guaranteed per-thread progress. (Note: system-wide progress can provisionally be understood as the execution paths of the processes in the system as a whole, and per-thread progress as the execution path of each individual thread; viewed in terms of levels, system-wide progress sits one level above per-thread progress. Wait-freedom is inherently stricter, and harder to achieve, than lock-freedom.)

Literature up to the turn of the 21st century used "non-blocking" synonymously with lock-free. However, since 2003,[1] the
term has been weakened to only prevent progress-blocking interactions with a preemptive
scheduler. In modern usage, therefore, an algorithm is non-blocking if the suspension of one or more threads will not stop the potential progress of the remaining threads. They are designed to avoid requiring a critical
section. Often, these algorithms allow multiple processes to make progress on a problem without ever blocking each other. For some operations, these algorithms provide an alternative to locking
mechanisms.

Motivation

Main article: Disadvantages of locks

The traditional approach to multi-threaded programming is to use locks to synchronize
access to shared resources. Synchronization primitives such as mutexes, semaphores,
and critical sections are all mechanisms by which a programmer can ensure that certain sections
of code do not execute concurrently, if doing so would corrupt shared memory structures. If one thread attempts to acquire a lock that is already held by another thread, the thread will block until the lock is free.
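
To make the blocking behaviour concrete, here is a minimal Go sketch (the Counter type and its names are illustrative, not from any particular library): any goroutine that calls Inc while another goroutine holds the mutex is suspended until the lock is released.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter is a shared counter protected by a mutex.
type Counter struct {
	mu sync.Mutex
	n  int
}

// Inc blocks if another goroutine currently holds the lock.
func (c *Counter) Inc() {
	c.mu.Lock()
	c.n++
	c.mu.Unlock()
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				c.Inc() // may block while a sibling goroutine is inside Inc
			}
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // always 4000, but only because each Inc serializes
}
```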

Blocking a thread is undesirable for many reasons. An obvious reason is that while the thread is blocked, it cannot accomplish anything. If the blocked thread was performing a high-priority or real-time task,
it would be highly undesirable to halt its progress. Other problems are less obvious. Certain interactions between locks can lead to error conditions such as deadlock, livelock,
and priority inversion. Using locks also involves a trade-off between coarse-grained
locking, which can significantly reduce opportunities for parallelism, and fine-grained
locking, which requires more careful design, increases locking overhead and is more prone to bugs.

Non-blocking algorithms are also safe for use in interrupt handlers: even though the preempted thread
cannot be resumed, progress is still possible without it. In contrast, global data structures protected by mutual exclusion cannot safely be accessed in a handler, as the preempted thread may be the one holding the lock.


Implementation

With few exceptions, non-blocking algorithms use atomic read-modify-write primitives
that the hardware must provide, the most notable of which is compare and swap (CAS). Critical
sections are almost always implemented using standard interfaces over these primitives. Until recently, all non-blocking algorithms had to be written "natively" with the underlying primitives to achieve acceptable performance. However, the emerging field
of software transactional memory promises standard abstractions
for writing efficient non-blocking code.[2][3]
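
As a concrete illustration, Go exposes CAS directly through the standard sync/atomic package. The minimal sketch below shows only the primitive's semantics: the value is replaced only if it still matches the expected old value, and the boolean result reports whether the swap happened.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var v int32 = 10

	// Succeeds: v is 10, so it is atomically replaced with 20.
	ok := atomic.CompareAndSwapInt32(&v, 10, 20)
	fmt.Println(ok, v) // true 20

	// Fails: v is now 20, not 10, so nothing is written.
	ok = atomic.CompareAndSwapInt32(&v, 10, 30)
	fmt.Println(ok, v) // false 20
}
```

Non-blocking algorithms typically run such a CAS in a loop: read the current value, compute the desired new value, and retry if some other thread changed the value in between.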

Much research has also been done in providing basic data structures such as stacks, queues, sets,
and hash tables. These allow programs to easily exchange data between threads asynchronously.
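
For instance, a classic lock-free stack (a Treiber-style stack) can be sketched in Go on top of sync/atomic's typed pointers. This is an illustrative sketch under the assumption of a garbage-collected runtime, not a production implementation; Go's collector conveniently sidesteps the ABA problem that this design suffers from in manually managed languages.

```go
package stack

import "sync/atomic"

// node is a singly linked list cell; the whole stack is its head pointer.
type node[T any] struct {
	value T
	next  *node[T]
}

// Stack is a Treiber-style lock-free stack.
type Stack[T any] struct {
	head atomic.Pointer[node[T]]
}

// Push retries its CAS until it succeeds; every failed CAS means some
// other goroutine changed head, i.e. the system as a whole progressed.
func (s *Stack[T]) Push(v T) {
	n := &node[T]{value: v}
	for {
		old := s.head.Load()
		n.next = old
		if s.head.CompareAndSwap(old, n) {
			return
		}
	}
}

// Pop returns the zero value and false when the stack is empty.
func (s *Stack[T]) Pop() (T, bool) {
	for {
		old := s.head.Load()
		if old == nil {
			var zero T
			return zero, false
		}
		if s.head.CompareAndSwap(old, old.next) {
			return old.value, true
		}
	}
}
```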

Additionally, some non-blocking data structures are weak enough to be implemented without special atomic primitives. These exceptions include:
a single-reader single-writer ring buffer FIFO (see the sketch after this list);
read-copy-update with a single writer and any number of readers (the readers are wait-free; the writer is usually lock-free, until it needs to reclaim memory);
read-copy-update with multiple writers and any number of readers (the readers are wait-free; multiple writers generally serialize with a lock and are not obstruction-free).
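
A minimal Go sketch of the first exception, the single-reader single-writer ring buffer: because exactly one goroutine ever writes tail and exactly one ever writes head, plain atomic loads and stores are sufficient and no CAS is needed. The element type and sizing policy here are illustrative.

```go
package ringbuf

import "sync/atomic"

// RingBuffer is a single-producer single-consumer FIFO.
type RingBuffer struct {
	buf  []int
	head atomic.Uint64 // next slot to read; advanced only by the consumer
	tail atomic.Uint64 // next slot to write; advanced only by the producer
}

func New(size int) *RingBuffer {
	return &RingBuffer{buf: make([]int, size)}
}

// Put is called only by the single producer; it returns false when full.
func (r *RingBuffer) Put(v int) bool {
	t := r.tail.Load()
	if t-r.head.Load() == uint64(len(r.buf)) {
		return false // full
	}
	r.buf[t%uint64(len(r.buf))] = v
	r.tail.Store(t + 1) // publish the element to the consumer
	return true
}

// Get is called only by the single consumer; it returns false when empty.
func (r *RingBuffer) Get() (int, bool) {
	h := r.head.Load()
	if h == r.tail.Load() {
		return 0, false // empty
	}
	v := r.buf[h%uint64(len(r.buf))]
	r.head.Store(h + 1) // hand the slot back to the producer
	return v, true
}
```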

Several libraries internally use lock-free techniques,[4][5] but
it is difficult to write lock-free code that is correct.[6][7][8][9]


Wait-freedom

Wait-freedom is the strongest non-blocking guarantee of progress, combining guaranteed system-wide throughput with starvation-freedom.
An algorithm is wait-free if every operation has a bound on the number of steps the algorithm will take before the operation completes. This property is critical for real-time systems and is desirable whenever the performance cost is acceptable.
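
A simple concrete case is a shared counter built on atomic fetch-and-add. On hardware with a native fetch-and-add instruction (for example LOCK XADD on x86), Go's atomic.Uint64.Add completes each increment in a bounded number of steps regardless of contention, so the counter is wait-free there; on LL/SC architectures the same call may retry internally and is then only lock-free.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

func main() {
	var n atomic.Uint64
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 1000; j++ {
				// A single atomic read-modify-write; no retry loop,
				// so no goroutine can be starved by the others.
				n.Add(1)
			}
		}()
	}
	wg.Wait()
	fmt.Println(n.Load()) // 8000
}
```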

It was shown in the 1980s[10] that
all algorithms can be implemented wait-free, and many transformations from serial code, called universal constructions, have been demonstrated. However, the resulting performance does not in general match even naïve blocking designs. Several papers
have since improved the performance of universal constructions, but still, their performance is far below blocking designs.

Several papers have investigated the difficulty of creating wait-free algorithms. For example, it has been shown[11] that
the widely available atomic conditional primitives, CAS and LL/SC,
cannot provide starvation-free implementations of many common data structures without memory costs growing linearly in the number of threads.

In practice, however, these lower bounds do not present a real barrier: spending a cache line or exclusive reservation granule (up to 2 KB on ARM) of store per thread in shared memory is not considered too costly for practical systems. (The amount of store logically required is typically a word, but CAS operations on the same cache line collide, as do LL/SC operations within the same exclusive reservation granule, so the amount of store physically required is greater.)

Until 2011, wait-free algorithms were rare, both in research and in practice. However, in 2011 Kogan and Petrank[12] presented
a wait-free queue building on the CAS primitive, generally available on common hardware.
Their construction expands the lock-free queue of Michael and Scott,[13] which
is an efficient queue often used in practice. A follow-up paper by Kogan and Petrank[14] provided
a methodology for making wait-free algorithms fast and used this methodology to make the wait-free queue practically as fast as its lock-free counterpart.


Lock-freedom

Lock-freedom allows individual threads to starve but guarantees system-wide throughput. An algorithm is lock-free if, when the program's threads are run for sufficiently long, at least one of them makes progress (for some sensible definition
of progress). All wait-free algorithms are lock-free.
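
The standard CAS retry loop illustrates why the guarantee is system-wide rather than per-thread. In the Go sketch below, a failed CompareAndSwap implies that some other goroutine changed the value in the meantime, i.e. it completed its own operation, so the system as a whole always advances; this particular caller, however, could in principle lose every race and starve.

```go
package counter

import "sync/atomic"

// IncLockFree increments *v with a CAS retry loop. On every iteration,
// at least one goroutine somewhere in the system makes progress
// (lock-free), but no bound exists on how often a given caller may
// retry, so the loop is not wait-free.
func IncLockFree(v *atomic.Uint64) {
	for {
		old := v.Load()
		if v.CompareAndSwap(old, old+1) {
			return
		}
	}
}
```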

In general, a lock-free algorithm can run in four phases: completing one's own operation, assisting an obstructing operation, aborting an obstructing operation, and waiting. Completing one's own operation is complicated by the possibility of concurrent assistance
and aborts, but it is invariably the fastest path to completion.

The decision about when to assist, abort or wait when an obstruction is met is the responsibility of a contention manager. This may be very simple (assist higher priority operations, abort lower priority ones), or may be more optimized to achieve better
throughput, or lower the latency of prioritized operations.

Correct concurrent assistance is typically the most complex part of a lock-free algorithm, and often very costly to execute: not only does the assisting thread slow down, but thanks to the mechanics of shared memory, the thread being assisted will be slowed,
too, if it is still running.


Obstruction-freedom

Obstruction-freedom is possibly the weakest natural non-blocking progress guarantee. An algorithm is obstruction-free if at any point, a single thread executed in isolation (i.e., with all obstructing threads suspended) for a bounded number of steps will complete
its operation. All lock-free algorithms are obstruction-free.

Obstruction-freedom demands only that any partially completed operation can be aborted and the changes made rolled back. Dropping concurrent assistance can often result in much simpler algorithms that are easier to validate. Preventing the system from continually live-locking is
the task of a contention manager.

Obstruction-freedom is also called optimistic concurrency control.

Some obstruction-free algorithms use a pair of "consistency markers" in the data structure. Processes reading the data structure first read one consistency marker, then read the relevant data into an internal buffer, then read the other marker, and then compare
the markers. The data is consistent if the two markers are identical. Markers may be non-identical when the read is interrupted by another process updating the data structure. In such a case, the process discards the data in the internal buffer and tries again.
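
A hedged Go sketch of that marker protocol (the type and field names are illustrative; note that the plain reads of the payload are technically data races under the Go memory model, and a production version would read them through atomics; here, the possibility of a torn read is precisely what the markers detect):

```go
package markers

import "sync/atomic"

// Guarded holds a payload framed by a pair of consistency markers.
// A single writer bumps m1 before mutating the payload and copies it
// into m2 afterwards, so m1 == m2 exactly when no write is in flight.
type Guarded struct {
	m1, m2 atomic.Uint64
	x, y   int // example payload
}

// Write is assumed to be called by at most one writer at a time.
func (g *Guarded) Write(x, y int) {
	g.m1.Add(1) // first marker: a write is now in progress
	g.x, g.y = x, y
	g.m2.Store(g.m1.Load()) // second marker: the write has finished
}

// Read is obstruction-free: run in isolation it finishes in a bounded
// number of steps, but a steady stream of writers can make it retry.
func (g *Guarded) Read() (x, y int) {
	for {
		before := g.m2.Load()
		x, y = g.x, g.y // may observe a torn value...
		after := g.m1.Load()
		if before == after {
			return x, y // ...but matching markers prove it did not
		}
		// A writer intervened; discard the buffered values and retry.
	}
}
```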