![diode][diode-logo]

[![GoDoc][go-doc-badge]][go-doc] [![travis][travis-badge]][travis]

Diodes are ring buffers manipulated via atomics.

Diodes are optimized for high-throughput scenarios where losing data is
acceptable. Unlike a channel, a diode overwrites data on writes instead of
blocking. A diode does its best not to "push back" on the producer.
In other words, invoking `Set()` on a diode never blocks.

### Installation

```bash
go get code.cloudfoundry.org/go-diodes
```

### Example: Basic Use

```go
d := diodes.NewOneToOne(1024, diodes.AlertFunc(func(missed int) {
	log.Printf("Dropped %d messages", missed)
}))

// writer
go func() {
	for i := 0; i < 2048; i++ {
		// Warning: do not pass &i directly. Taking the address of
		// the loop variable would not capture each value.
		j := i
		d.Set(diodes.GenericDataType(&j))
	}
}()

// reader
poller := diodes.NewPoller(d)
for {
	i := poller.Next()
	fmt.Println(*(*int)(i))
}
```

### Example: Creating a Concrete Shell

Diodes accept and return `diodes.GenericDataType`. It is recommended not to
use these generic pointers directly. Instead, wrap the diode in a concrete
shell that accepts the types your program works with and does the type
casting for you.
Here is an example of how to create a concrete shell for `[]byte`:

```go
type OneToOne struct {
	d *diodes.Poller
}

func NewOneToOne(size int, alerter diodes.Alerter) *OneToOne {
	return &OneToOne{
		d: diodes.NewPoller(diodes.NewOneToOne(size, alerter)),
	}
}

func (d *OneToOne) Set(data []byte) {
	d.d.Set(diodes.GenericDataType(&data))
}

func (d *OneToOne) TryNext() ([]byte, bool) {
	data, ok := d.d.TryNext()
	if !ok {
		return nil, ok
	}

	return *(*[]byte)(data), true
}

func (d *OneToOne) Next() []byte {
	data := d.d.Next()
	return *(*[]byte)(data)
}
```

Creating a concrete shell gives you the following advantages:

- The compiler will tell you if you use a diode to read or write data of the
  wrong type.
- The type-casting syntax in Go is uncommon and should be hidden.
- It prevents the generic pointer type from escaping into client code.

### Dropping Data

The diode takes an `Alerter` as an argument so it can alert user code when
the reader notices it has missed data. Note that the goroutine consuming
from the diode is the one that signals the alert.

When the diode notices it has fallen behind, it moves the read index up to
the write index, and therefore drops more than a single message.

There are two things to consider when choosing a diode:

1. Storage layer
2. Access layer

### Storage Layer

##### OneToOne

The OneToOne diode is meant to be used by one producing (invoking `Set()`)
goroutine and a (different) consuming (invoking `TryNext()`) goroutine. It
is not thread safe for multiple readers or writers.

##### ManyToOne

The ManyToOne diode is optimized for many producing (invoking `Set()`)
goroutines and a single consuming (invoking `TryNext()`) goroutine. It is
not thread safe for multiple readers.
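The drop-on-overwrite bookkeeping described above can be illustrated with a
minimal, single-goroutine sketch. This is *not* the library's implementation
(the real diodes use atomics and unsafe pointers); the names `sketchDiode`,
`newSketchDiode`, and the slot layout are invented here purely to show how a
lapped reader detects the gap, jumps ahead, and reports the missed count:

```go
package main

import "fmt"

// slot pairs a value with the sequence number it was written under.
type slot struct {
	seq   uint64
	value int
}

// sketchDiode is a hypothetical, non-concurrent illustration of a diode's
// drop accounting. It is not the go-diodes implementation.
type sketchDiode struct {
	buffer []slot
	write  uint64 // next sequence number to write
	read   uint64 // next sequence number to read
	alert  func(missed int)
}

func newSketchDiode(size int, alert func(missed int)) *sketchDiode {
	return &sketchDiode{buffer: make([]slot, size), alert: alert}
}

// Set never blocks: it overwrites whatever occupies the target slot.
func (d *sketchDiode) Set(v int) {
	d.buffer[d.write%uint64(len(d.buffer))] = slot{seq: d.write, value: v}
	d.write++
}

// TryNext returns the next value if one is available. If the writer has
// lapped the reader, the slot holds a newer sequence number than expected:
// the read index jumps forward and the alert reports the size of the gap.
func (d *sketchDiode) TryNext() (int, bool) {
	if d.read == d.write {
		return 0, false // caught up; nothing new
	}
	s := d.buffer[d.read%uint64(len(d.buffer))]
	if s.seq > d.read {
		d.alert(int(s.seq - d.read)) // these messages were overwritten
		d.read = s.seq
	}
	d.read++
	return s.value, true
}

func main() {
	var missed int
	d := newSketchDiode(4, func(m int) { missed += m })
	for i := 0; i < 10; i++ { // the writer laps the size-4 buffer
		d.Set(i)
	}
	v, ok := d.TryNext()
	fmt.Println(v, ok, missed) // the reader skips the overwritten values
}
```

The key property to notice is that `Set` does no coordination at all: all
loss detection happens on the reader's side by comparing sequence numbers,
which is why the alert fires on the consuming goroutine.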
It is recommended to use a larger diode buffer size when the number of
producers is high. This helps the diode avoid having to mitigate write
collisions (it will call its alert function if this occurs).

### Access Layer

##### Poller

The Poller polls via `time.Sleep(...)` when `Next()` is invoked. While
polling might seem sub-optimal, it allows the producer to be completely
decoupled from the consumer. If you require minimal push back on the
producer, then the Poller is the better choice. However, if you require
several diodes (e.g., one per connected client), then having several
goroutines polling (sleeping) may be hard on the scheduler.

##### Waiter

The Waiter uses a condition variable to manage when the reader is alerted
of new data. While this method is great for the scheduler, it adds overhead
for the producer. Therefore, it is better suited for situations where you
have several diodes and can afford slightly slower producers.

### Benchmarks

There are benchmarks that compare the various storage and access layers to
channels. To run them:

```
go test -bench=. -run=NoTest
```

### Known Issues

If a diode were written to `18446744073709551615+1` times, it would overflow
a `uint64`. This causes problems if the size of the diode is not a power of
two (`2^x`). If you write into a diode at the rate of one message every
nanosecond, without restarting your process, it would take 584.54 years to
encounter this issue.

[diode-logo]: https://raw.githubusercontent.com/cloudfoundry/go-diodes/gh-pages/diode-logo.png
[go-doc-badge]: https://godoc.org/code.cloudfoundry.org/go-diodes?status.svg
[go-doc]: https://godoc.org/code.cloudfoundry.org/go-diodes
[travis-badge]: https://travis-ci.org/cloudfoundry/go-diodes.svg?branch=master
[travis]: https://travis-ci.org/cloudfoundry/go-diodes?branch=master