Update README.md

Andrew J. Gillis 2020-07-21 13:26:02 -07:00 committed by GitHub
parent 50fa758af8
commit 07291166fe


@@ -5,7 +5,7 @@
[![codecov](https://codecov.io/gh/gammazero/deque/branch/master/graph/badge.svg)](https://codecov.io/gh/gammazero/deque)
[![License](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE)
-Extremely fast ring-buffer deque ([double-ended queue](https://en.wikipedia.org/wiki/Double-ended_queue)) implementation.
+Fast ring-buffer deque ([double-ended queue](https://en.wikipedia.org/wiki/Double-ended_queue)) implementation.
[![GoDoc](https://godoc.org/github.com/gammazero/deque?status.svg)](https://godoc.org/github.com/gammazero/deque)
@@ -25,7 +25,7 @@ Deque generalizes a queue and a stack, to efficiently add and remove items at ei
This deque implementation is optimized for CPU and GC performance. The circular buffer automatically resizes by powers of two, growing when additional capacity is needed and shrinking when only a quarter of the capacity is used, and uses bitwise arithmetic for all calculations. Since growth is by powers of two, adding elements will only cause O(log n) allocations.
-The ring-buffer implementation significantly improves memory and time performance with fewer GC pauses, compared to implementations based on slices and linked lists. By wrapping around the buffer, previously used space is reused, making allocation unnecessary until all buffer capacity is used.
+The ring-buffer implementation improves memory and time performance with fewer GC pauses, compared to implementations based on slices and linked lists. By wrapping around the buffer, previously used space is reused, making allocation unnecessary until all buffer capacity is used. This is particularly efficient when data going into the deque is relatively balanced against data coming out. However, if size changes are very large and the deque is only filled and then emptied, the ring structure offers little benefit for memory reuse. For that usage pattern a different implementation may be preferable.
For maximum speed, this deque implementation leaves concurrency safety up to the application to provide, however the application chooses, if needed at all.
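
The resizing paragraph above notes that the buffer capacity is always a power of two, so index wrapping can be done with a bitwise AND instead of a modulo. The following standalone sketch illustrates that idea only; the `ring` type and `at` helper are illustrative and are not the library's actual source.

```go
package main

import "fmt"

// ring is a hypothetical minimal ring buffer whose capacity is always a
// power of two, so a logical index can be wrapped with a bitwise AND
// rather than a modulo operation.
type ring struct {
	buf   []int
	head  int // physical index of the first element
	count int
}

// at maps a logical position to a physical buffer index.
// len(r.buf) is a power of two, so len(r.buf)-1 is an all-ones mask.
func (r *ring) at(i int) int {
	return (r.head + i) & (len(r.buf) - 1)
}

func main() {
	r := &ring{buf: make([]int, 8), head: 6, count: 4}
	for i := 0; i < r.count; i++ {
		// Physical indices wrap around the end of the buffer: 6, 7, 0, 1.
		fmt.Println(r.at(i))
	}
}
```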
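
The added paragraph describes the balanced fill/drain pattern under which the ring keeps reusing its backing array. A short usage sketch of that pattern, assuming the package's `interface{}`-based API from around the time of this commit (`PushBack`, `PopFront`, `Len`):

```go
package main

import (
	"fmt"

	"github.com/gammazero/deque"
)

func main() {
	var q deque.Deque
	sum := 0

	// Pushes and pops stay roughly balanced, so the ring buffer keeps
	// reusing the same backing array instead of growing and allocating.
	for i := 0; i < 1000; i++ {
		q.PushBack(i)
		if q.Len() > 4 {
			sum += q.PopFront().(int) // items are stored as interface{}
		}
	}
	fmt.Println("drained sum:", sum, "remaining:", q.Len())
}
```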
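
The last paragraph leaves concurrency safety to the application. One way a caller might provide it is to wrap the deque with a mutex, as sketched below; the `SafeDeque` type and its methods are assumptions for illustration, not part of the package.

```go
package safedeque

import (
	"sync"

	"github.com/gammazero/deque"
)

// SafeDeque is a hypothetical wrapper that serializes access to an
// underlying deque.Deque with a mutex.
type SafeDeque struct {
	mu sync.Mutex
	q  deque.Deque
}

// PushBack appends an item to the back of the deque under the lock.
func (s *SafeDeque) PushBack(v interface{}) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.q.PushBack(v)
}

// PopFront removes and returns the front item under the lock.
func (s *SafeDeque) PopFront() interface{} {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.q.PopFront()
}
```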