# BadgerDB [![GoDoc](https://godoc.org/github.com/dgraph-io/badger?status.svg)](https://godoc.org/github.com/dgraph-io/badger) [![Go Report Card](https://goreportcard.com/badge/github.com/dgraph-io/badger)](https://goreportcard.com/report/github.com/dgraph-io/badger) [![Build Status](https://teamcity.dgraph.io/guestAuth/app/rest/builds/buildType:(id:Badger_UnitTests)/statusIcon.svg)](https://teamcity.dgraph.io/viewLog.html?buildTypeId=Badger_UnitTests&buildId=lastFinished&guest=1) ![Appveyor](https://ci.appveyor.com/api/projects/status/github/dgraph-io/badger?branch=master&svg=true) [![Coverage Status](https://coveralls.io/repos/github/dgraph-io/badger/badge.svg?branch=master)](https://coveralls.io/github/dgraph-io/badger?branch=master)

![Badger mascot](images/diggy-shadow.png)

BadgerDB is an embeddable, persistent, simple and fast key-value (KV) database
written in pure Go. It's meant to be a performant alternative to non-Go-based
key-value stores like [RocksDB](https://github.com/facebook/rocksdb).

## Project Status
Badger v1.0 was released in Nov 2017. Check the [Changelog] for the full details.

[Changelog]: https://github.com/dgraph-io/badger/blob/master/CHANGELOG.md

We introduced transactions in [v0.9.0], which involved a major API change. If you have a Badger
datastore created before that release, please use [v0.8.1], but we strongly urge you to upgrade.
Upgrading from either v0.8 or v0.9 requires you to [take backups](#database-backup) and restore
using the new version.

[v0.8.1]: //github.com/dgraph-io/badger/tree/v0.8.1
[v0.9.0]: //github.com/dgraph-io/badger/tree/v0.9.0

## Table of Contents
  * [Getting Started](#getting-started)
    + [Installing](#installing)
    + [Opening a database](#opening-a-database)
    + [Transactions](#transactions)
      - [Read-only transactions](#read-only-transactions)
      - [Read-write transactions](#read-write-transactions)
      - [Managing transactions manually](#managing-transactions-manually)
    + [Using key/value pairs](#using-keyvalue-pairs)
    + [Monotonically increasing integers](#monotonically-increasing-integers)
    + [Merge Operations](#merge-operations)
    + [Setting Time To Live (TTL) and User Metadata on Keys](#setting-time-to-live-ttl-and-user-metadata-on-keys)
    + [Iterating over keys](#iterating-over-keys)
      - [Prefix scans](#prefix-scans)
      - [Key-only iteration](#key-only-iteration)
    + [Garbage Collection](#garbage-collection)
    + [Database backup](#database-backup)
    + [Memory usage](#memory-usage)
    + [Statistics](#statistics)
  * [Resources](#resources)
    + [Blog Posts](#blog-posts)
  * [Design](#design)
    + [Comparisons](#comparisons)
    + [Benchmarks](#benchmarks)
  * [Other Projects Using Badger](#other-projects-using-badger)
  * [Frequently Asked Questions](#frequently-asked-questions)
  * [Contact](#contact)

## Getting Started

### Installing
To start using Badger, install Go 1.8 or above and run `go get`:

```sh
$ go get github.com/dgraph-io/badger/...
```

This will retrieve the library and install the `badger` command line
utility into your `$GOBIN` path.

### Opening a database
The top-level object in Badger is a `DB`. It represents multiple files on disk
in specific directories, which contain the data for a single database.

To open your database, use the `badger.Open()` function, with the appropriate
options. The `Dir` and `ValueDir` options are mandatory and must be
specified by the client. They can be set to the same value to simplify things.

```go
package main

import (
  "log"

  "github.com/dgraph-io/badger"
)

func main() {
  // Open the Badger database located in the /tmp/badger directory.
  // It will be created if it doesn't exist.
  opts := badger.DefaultOptions
  opts.Dir = "/tmp/badger"
  opts.ValueDir = "/tmp/badger"
  db, err := badger.Open(opts)
  if err != nil {
    log.Fatal(err)
  }
  defer db.Close()
  // Your code here…
}
```

Please note that Badger obtains a lock on the directories so multiple processes
cannot open the same database at the same time.

### Transactions

#### Read-only transactions
To start a read-only transaction, you can use the `DB.View()` method:

```go
err := db.View(func(txn *badger.Txn) error {
  // Your code here…
  return nil
})
```

You cannot perform any writes or deletes within this transaction. Badger
ensures that you get a consistent view of the database within this closure. Any
writes that happen elsewhere after the transaction has started will not be
seen by calls made within the closure.

#### Read-write transactions
To start a read-write transaction, you can use the `DB.Update()` method:

```go
err := db.Update(func(txn *badger.Txn) error {
  // Your code here…
  return nil
})
```

All database operations are allowed inside a read-write transaction.

Always check the returned error value. If you return an error
within your closure, it will be passed through.

An `ErrConflict` error will be reported in case of a conflict. Depending on the state
of your application, you have the option to retry the operation if you receive
this error.

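For illustration, a minimal retry loop might look like the sketch below (the
unconditional retry policy is an assumption; tune it for your application):

```go
// Retry the update until it either succeeds or fails with an
// error other than badger.ErrConflict.
var err error
for {
  err = db.Update(func(txn *badger.Txn) error {
    // Your code here…
    return nil
  })
  if err != badger.ErrConflict {
    break // nil on success, or a non-retryable error
  }
}
```
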
An `ErrTxnTooBig` will be reported in case the number of pending writes/deletes in
the transaction exceeds a certain limit. In that case, it is best to commit the
transaction and start a new transaction immediately. Here is an example (we are
not checking for errors in some places for simplicity):

```go
updates := make(map[string]string)
txn := db.NewTransaction(true)
for k, v := range updates {
  if err := txn.Set([]byte(k), []byte(v)); err == badger.ErrTxnTooBig {
    _ = txn.Commit(nil)
    txn = db.NewTransaction(true)
    _ = txn.Set([]byte(k), []byte(v))
  }
}
_ = txn.Commit(nil)
```

#### Managing transactions manually
The `DB.View()` and `DB.Update()` methods are wrappers around the
`DB.NewTransaction()` and `Txn.Commit()` methods (or `Txn.Discard()` in case of
read-only transactions). These helper methods will start the transaction,
execute a function, and then safely discard your transaction if an error is
returned. This is the recommended way to use Badger transactions.

However, sometimes you may want to manually create and commit your
transactions. You can use the `DB.NewTransaction()` function directly, which
takes in a boolean argument to specify whether a read-write transaction is
required. For read-write transactions, it is necessary to call `Txn.Commit()`
to ensure the transaction is committed. For read-only transactions, calling
`Txn.Discard()` is sufficient. `Txn.Commit()` also calls `Txn.Discard()`
internally to clean up the transaction, so just calling `Txn.Commit()` is
sufficient for read-write transactions. However, if your code doesn't call
`Txn.Commit()` for some reason (e.g., it returns prematurely with an error),
then please make sure you call `Txn.Discard()` in a `defer` block. Refer to the
code below.

```go
// Start a writable transaction.
txn := db.NewTransaction(true)
defer txn.Discard()

// Use the transaction...
err := txn.Set([]byte("answer"), []byte("42"))
if err != nil {
  return err
}

// Commit the transaction and check for error.
if err := txn.Commit(nil); err != nil {
  return err
}
```

The first argument to `DB.NewTransaction()` is a boolean stating if the transaction
should be writable.

Badger allows an optional callback to the `Txn.Commit()` method. Normally, the
callback can be set to `nil`, and the method will return after all the writes
have succeeded. However, if this callback is provided, the `Txn.Commit()`
method returns as soon as it has checked for any conflicts. The actual writing
to the disk happens asynchronously, and the callback is invoked once the
writing has finished, or an error has occurred. This can improve the throughput
of the application in some cases. But it also means that a transaction is not
durable until the callback has been invoked with a `nil` error value.

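As a sketch, an asynchronous commit might look like this (the logging in the
callback is illustrative, and error handling is elided for brevity):

```go
txn := db.NewTransaction(true)
defer txn.Discard()

_ = txn.Set([]byte("answer"), []byte("42"))

// Commit returns once conflict detection is done; the callback
// fires after the writes have actually been persisted.
err := txn.Commit(func(err error) {
  if err != nil {
    log.Printf("async commit failed: %v", err)
  }
})
```
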
### Using key/value pairs
To save a key/value pair, use the `Txn.Set()` method:

```go
err := db.Update(func(txn *badger.Txn) error {
  err := txn.Set([]byte("answer"), []byte("42"))
  return err
})
```

This will set the value of the `"answer"` key to `"42"`. To retrieve this
value, we can use the `Txn.Get()` method:

```go
err := db.View(func(txn *badger.Txn) error {
  item, err := txn.Get([]byte("answer"))
  if err != nil {
    return err
  }
  val, err := item.Value()
  if err != nil {
    return err
  }
  fmt.Printf("The answer is: %s\n", val)
  return nil
})
```

`Txn.Get()` returns `ErrKeyNotFound` if the key is not found.

Please note that values returned from `Get()` are only valid while the
transaction is open. If you need to use a value outside of the transaction
then you must use `copy()` to copy it to another byte slice.

Use the `Txn.Delete()` method to delete a key.

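For instance, here is a minimal sketch that copies a value out of the
transaction and then deletes the key (the `"answer"` key follows the earlier
examples):

```go
var valCopy []byte
err := db.View(func(txn *badger.Txn) error {
  item, err := txn.Get([]byte("answer"))
  if err != nil {
    return err
  }
  val, err := item.Value()
  if err != nil {
    return err
  }
  // Copy the value so it remains valid after the transaction ends.
  valCopy = make([]byte, len(val))
  copy(valCopy, val)
  return nil
})

// Delete the key in a separate read-write transaction.
err = db.Update(func(txn *badger.Txn) error {
  return txn.Delete([]byte("answer"))
})
```
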
### Monotonically increasing integers

To get unique monotonically increasing integers with strong durability, you can
use the `DB.GetSequence` method. This method returns a `Sequence` object, which
is thread-safe and can be used concurrently from multiple goroutines.

Badger leases a range of integers to hand out from memory, with the
bandwidth provided to `DB.GetSequence`. The frequency at which disk writes are
done is determined by this lease bandwidth and the frequency of `Next`
invocations. Setting a bandwidth too low would do more disk writes; setting it
too high would result in wasted integers if Badger is closed or crashes.
To avoid wasted integers, call `Release` before closing Badger.

```go
seq, err := db.GetSequence(key, 1000) // lease integers in batches of 1000
if err != nil {
  return err
}
defer seq.Release() // return the unused part of the lease
for {
  num, err := seq.Next()
  // Use num here; handle err and break out of the loop when done.
}
```

### Merge Operations
Badger provides support for unordered merge operations. You can define a func
of type `MergeFunc` which takes in an existing value, and a value to be
_merged_ with it. It returns a new value which is the result of the _merge_
operation. All values are specified as byte slices. For example, here is a merge
function (`add`) which adds a `uint64` value to an existing `uint64` value.

```go
func uint64ToBytes(i uint64) []byte {
  var buf [8]byte
  binary.BigEndian.PutUint64(buf[:], i)
  return buf[:]
}

func bytesToUint64(b []byte) uint64 {
  return binary.BigEndian.Uint64(b)
}

// Merge function to add two uint64 numbers
func add(existing, new []byte) []byte {
  return uint64ToBytes(bytesToUint64(existing) + bytesToUint64(new))
}
```

This function can then be passed to the `DB.GetMergeOperator()` method, along
with a key, and a duration value. The duration specifies how often the merge
function is run on values that have been added using the `MergeOperator.Add()`
method.

The `MergeOperator.Get()` method can be used to retrieve the cumulative value of the key
associated with the merge operation.

```go
key := []byte("merge")
m := db.GetMergeOperator(key, add, 200*time.Millisecond)
defer m.Stop()

m.Add(uint64ToBytes(1))
m.Add(uint64ToBytes(2))
m.Add(uint64ToBytes(3))

res, err := m.Get() // res should have value 6 encoded
fmt.Println(bytesToUint64(res))
```

### Setting Time To Live (TTL) and User Metadata on Keys
Badger allows setting an optional Time to Live (TTL) value on keys. Once the TTL has
elapsed, the key will no longer be retrievable and will be eligible for garbage
collection. A TTL can be set as a `time.Duration` value using the `Txn.SetWithTTL()`
API method.

An optional user metadata value can be set on each key. A user metadata value
is represented by a single byte. It can be used to set certain bits along
with the key to aid in interpreting or decoding the key-value pair. User
metadata can be set using the `Txn.SetWithMeta()` API method.

`Txn.SetEntry()` can be used to set the key, value, user metadata and TTL,
all at once.

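A minimal sketch, assuming the natural signatures `SetWithTTL(key, val []byte,
ttl time.Duration)` and `SetWithMeta(key, val []byte, meta byte)`:

```go
err := db.Update(func(txn *badger.Txn) error {
  // This key becomes unreadable one hour after it is set.
  if err := txn.SetWithTTL([]byte("session"), []byte("data"), time.Hour); err != nil {
    return err
  }
  // Tag this key with a one-byte user metadata value.
  return txn.SetWithMeta([]byte("flagged"), []byte("data"), byte(1))
})
```
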
### Iterating over keys
To iterate over keys, we can use an `Iterator`, which can be obtained using the
`Txn.NewIterator()` method. Iteration happens in byte-wise lexicographical sorting
order.

```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.PrefetchSize = 10
  it := txn.NewIterator(opts)
  defer it.Close()
  for it.Rewind(); it.Valid(); it.Next() {
    item := it.Item()
    k := item.Key()
    v, err := item.Value()
    if err != nil {
      return err
    }
    fmt.Printf("key=%s, value=%s\n", k, v)
  }
  return nil
})
```

The iterator allows you to move to a specific point in the list of keys and move
forward or backward through the keys one at a time.

By default, Badger prefetches the values of the next 100 items. You can adjust
that with the `IteratorOptions.PrefetchSize` field. However, setting it to
a value higher than `GOMAXPROCS` (which we recommend setting to 128 or higher)
shouldn't give any additional benefits. You can also turn off the fetching of
values altogether. See the section below on key-only iteration.

#### Prefix scans
To iterate over a key prefix, you can combine `Seek()` and `ValidForPrefix()`:

```go
db.View(func(txn *badger.Txn) error {
  it := txn.NewIterator(badger.DefaultIteratorOptions)
  defer it.Close()
  prefix := []byte("1234")
  for it.Seek(prefix); it.ValidForPrefix(prefix); it.Next() {
    item := it.Item()
    k := item.Key()
    v, err := item.Value()
    if err != nil {
      return err
    }
    fmt.Printf("key=%s, value=%s\n", k, v)
  }
  return nil
})
```

#### Key-only iteration
Badger supports a unique mode of iteration called _key-only_ iteration. It is
several orders of magnitude faster than regular iteration, because it involves
access to the LSM-tree only, which is usually resident entirely in RAM. To
enable key-only iteration, you need to set the `IteratorOptions.PrefetchValues`
field to `false`. This can also be used to do sparse reads for selected keys
during an iteration, by calling `item.Value()` only when required.

```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.PrefetchValues = false
  it := txn.NewIterator(opts)
  defer it.Close()
  for it.Rewind(); it.Valid(); it.Next() {
    item := it.Item()
    k := item.Key()
    fmt.Printf("key=%s\n", k)
  }
  return nil
})
```

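For a sparse read, keep `PrefetchValues` off and call `item.Value()` only for
the keys you actually need. A sketch (the `needsValue` filter is hypothetical):

```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.PrefetchValues = false
  it := txn.NewIterator(opts)
  defer it.Close()
  for it.Rewind(); it.Valid(); it.Next() {
    item := it.Item()
    if !needsValue(item.Key()) { // hypothetical selection criterion
      continue
    }
    v, err := item.Value() // fetched on demand, only for selected keys
    if err != nil {
      return err
    }
    fmt.Printf("key=%s, value=%s\n", item.Key(), v)
  }
  return nil
})
```
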
### Garbage Collection
Badger values need to be garbage collected for two reasons:

* Badger keeps values separately from the LSM tree. This means that the compaction operations
that clean up the LSM tree do not touch the values at all. Values need to be cleaned up
separately.

* Concurrent read/write transactions could leave behind multiple values for a single key, because they
are stored with different versions. These could accumulate, and take up unneeded space beyond the
time these older versions are needed.

Badger relies on the client to perform garbage collection at a time of their choosing. It provides
the following methods, which can be invoked at an appropriate time:

* `DB.PurgeOlderVersions()`: No longer needed since v1.5.0. Badger's LSM tree automatically discards older/invalid versions of keys.
* `DB.RunValueLogGC()`: This method is designed to do garbage collection while
  Badger is online. Along with randomly picking a file, it uses statistics generated by the
  LSM-tree compactions to pick files that are likely to lead to maximum space
  reclamation.

It is recommended that this method be called during periods of low activity in
your system, or periodically. One call only results in the removal of at most
one log file. As an optimization, you could also immediately re-run it whenever
it returns a nil error (indicating a successful value log GC).

```go
ticker := time.NewTicker(5 * time.Minute)
defer ticker.Stop()
for range ticker.C {
again:
  err := db.RunValueLogGC(0.7)
  if err == nil {
    goto again
  }
}
```

### Database backup
There are two public API methods, `DB.Backup()` and `DB.Load()`, which can be
used to do online backups and restores. Badger v0.9 provides a CLI tool
`badger`, which can do offline backup/restore. Make sure you have `$GOPATH/bin`
in your PATH to use this tool.

The command below will create a version-agnostic backup of the database, to a
file `badger.bak` in the current working directory:

```sh
badger backup --dir <path/to/badgerdb>
```

To restore `badger.bak` in the current working directory to a new database:

```sh
badger restore --dir <path/to/badgerdb>
```

See `badger --help` for more details.

If you have a Badger database that was created using v0.8 (or below), you can
use the `badger_backup` tool provided in v0.8.1, and then restore it using the
command above to upgrade your database to work with the latest version.

```sh
badger_backup --dir <path/to/badgerdb> --backup-file badger.bak
```

464### Memory usage
465Badger's memory usage can be managed by tweaking several options available in
466the `Options` struct that is passed in when opening the database using
467`DB.Open`.
468
469- `Options.ValueLogLoadingMode` can be set to `options.FileIO` (instead of the
470  default `options.MemoryMap`) to avoid memory-mapping log files. This can be
471  useful in environments with low RAM.
472- Number of memtables (`Options.NumMemtables`)
473  - If you modify `Options.NumMemtables`, also adjust `Options.NumLevelZeroTables` and
474   `Options.NumLevelZeroTablesStall` accordingly.
475- Number of concurrent compactions (`Options.NumCompactors`)
476- Mode in which LSM tree is loaded (`Options.TableLoadingMode`)
477- Size of table (`Options.MaxTableSize`)
478- Size of value log file (`Options.ValueLogFileSize`)
479
480If you want to decrease the memory usage of Badger instance, tweak these
481options (ideally one at a time) until you achieve the desired
482memory usage.
483
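A low-memory configuration might start from something like this sketch (the
specific numbers are illustrative assumptions, not recommendations; the mode
constants come from the `github.com/dgraph-io/badger/options` package):

```go
opts := badger.DefaultOptions
opts.Dir = "/tmp/badger"
opts.ValueDir = "/tmp/badger"

// Avoid memory-mapping the value log and LSM tree files.
opts.ValueLogLoadingMode = options.FileIO
opts.TableLoadingMode = options.FileIO

// Fewer memtables; adjust the level-zero knobs to match.
opts.NumMemtables = 2
opts.NumLevelZeroTables = 2
opts.NumLevelZeroTablesStall = 4

opts.NumCompactors = 1           // fewer concurrent compactions
opts.MaxTableSize = 16 << 20     // 16 MB tables
opts.ValueLogFileSize = 64 << 20 // 64 MB value log files

db, err := badger.Open(opts)
```
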
### Statistics
Badger records metrics using the [expvar] package, which is included in the Go
standard library. All the metrics are documented in the [y/metrics.go][metrics]
file.

The `expvar` package adds a handler to the default HTTP server (which has to be
started explicitly), and serves up the metrics at the `/debug/vars` endpoint.
These metrics can then be collected by a system like [Prometheus] to get
better visibility into what Badger is doing.

[expvar]: https://golang.org/pkg/expvar/
[metrics]: https://github.com/dgraph-io/badger/blob/master/y/metrics.go
[Prometheus]: https://prometheus.io/

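As a sketch, starting the default HTTP server is enough to expose the metrics
(the port is an arbitrary choice):

```go
package main

import (
  _ "expvar" // registers the /debug/vars handler on the default mux
  "log"
  "net/http"
)

func main() {
  // Serve expvar metrics (including Badger's) at :8080/debug/vars.
  log.Fatal(http.ListenAndServe(":8080", nil))
}
```
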
## Resources

### Blog Posts
1. [Introducing Badger: A fast key-value store written natively in
Go](https://open.dgraph.io/post/badger/)
2. [Make Badger crash resilient with ALICE](https://blog.dgraph.io/post/alice/)
3. [Badger vs LMDB vs BoltDB: Benchmarking key-value databases in Go](https://blog.dgraph.io/post/badger-lmdb-boltdb/)
4. [Concurrent ACID Transactions in Badger](https://blog.dgraph.io/post/badger-txn/)

## Design
Badger was written with these design goals in mind:

- Write a key-value database in pure Go.
- Use the latest research to build the fastest KV database for data sets spanning terabytes.
- Optimize for SSDs.

Badger's design is based on a paper titled _[WiscKey: Separating Keys from
Values in SSD-conscious Storage][wisckey]_.

[wisckey]: https://www.usenix.org/system/files/conference/fast16/fast16-papers-lu.pdf

### Comparisons
| Feature                | Badger                                     | RocksDB                       | BoltDB    |
| ---------------------- | ------------------------------------------ | ----------------------------- | --------- |
| Design                 | LSM tree with value log                    | LSM tree only                 | B+ tree   |
| High read throughput   | Yes                                        | No                            | Yes       |
| High write throughput  | Yes                                        | Yes                           | No        |
| Designed for SSDs      | Yes (with latest research <sup>1</sup>)    | Not specifically <sup>2</sup> | No        |
| Embeddable             | Yes                                        | Yes                           | Yes       |
| Sorted KV access       | Yes                                        | Yes                           | Yes       |
| Pure Go (no Cgo)       | Yes                                        | No                            | Yes       |
| Transactions           | Yes, ACID, concurrent with SSI<sup>3</sup> | Yes (but non-ACID)            | Yes, ACID |
| Snapshots              | Yes                                        | Yes                           | Yes       |
| TTL support            | Yes                                        | Yes                           | No        |

<sup>1</sup> The [WiscKey paper][wisckey] (on which Badger is based) saw big
wins with separating values from keys, significantly reducing the write
amplification compared to a typical LSM tree.

<sup>2</sup> RocksDB is an SSD-optimized version of LevelDB, which was designed specifically for rotating disks.
As such, RocksDB's design isn't aimed at SSDs.

<sup>3</sup> SSI: Serializable Snapshot Isolation. For more details, see the blog post [Concurrent ACID Transactions in Badger](https://blog.dgraph.io/post/badger-txn/).

### Benchmarks
We have run comprehensive benchmarks against RocksDB, Bolt and LMDB. The
benchmarking code, and the detailed logs for the benchmarks, can be found in the
[badger-bench] repo. More explanation, including graphs, can be found in the blog
posts (linked above).

[badger-bench]: https://github.com/dgraph-io/badger-bench

## Other Projects Using Badger
Below is a list of known projects that use Badger:

* [0-stor](https://github.com/zero-os/0-stor) - Single device object store.
* [Dgraph](https://github.com/dgraph-io/dgraph) - Distributed graph database.
* [Sandglass](https://github.com/celrenheit/sandglass) - Distributed, horizontally scalable, persistent, time-sorted message queue.
* [Usenet Express](https://usenetexpress.com/) - Serving over 300TB of data with Badger.
* [go-ipfs](https://github.com/ipfs/go-ipfs) - Go client for the InterPlanetary File System (IPFS), a new hypermedia distribution protocol.
* [gorush](https://github.com/appleboy/gorush) - A push notification server written in Go.
* [emitter](https://github.com/emitter-io/emitter) - Scalable, low latency, distributed pub/sub broker with message storage; uses MQTT, gossip and Badger.

If you are using Badger in a project, please send a pull request to add it to the list.

## Frequently Asked Questions
- **My writes are getting stuck. Why?**

This can happen if you have a long-running iteration with `PrefetchValues` set
to false, but an `Item::Value` call is made internally in the loop. That causes
Badger to acquire read locks over the value log files to prevent value log GC
from removing the file from underneath. As a side effect, this also blocks a
new value log file from being created, when the value log file boundary is hit.

Please see Github issues [#293](https://github.com/dgraph-io/badger/issues/293)
and [#315](https://github.com/dgraph-io/badger/issues/315).

There are multiple workarounds during iteration:

1. Use `Item::ValueCopy` instead of `Item::Value` when retrieving a value.
2. Set `PrefetchValues` to true. Badger would then copy over the value and release the
   file lock immediately.
3. When `PrefetchValues` is false, don't call `Item::Value` and do a pure key-only
   iteration. This might be useful if you just want to delete a lot of keys.
4. Do the writes in a separate transaction after the reads.

- **My writes are really slow. Why?**

Are you creating a new transaction for every single key update, and waiting for
it to `Commit` fully before creating a new one? This will lead to very low
throughput. To get the best write performance, batch up multiple writes inside
a transaction using a single `DB.Update()` call. You could also have multiple
such `DB.Update()` calls being made concurrently from multiple goroutines.

The way to achieve the highest write throughput via Badger is to do serial
writes and use callbacks in `txn.Commit`, like so:

```go
che := make(chan error, 1)
// storeErr records the first error encountered, if any.
storeErr := func(err error) {
  if err == nil {
    return
  }
  select {
  case che <- err:
  default:
  }
}

// getErr returns the first recorded error, or nil.
getErr := func() error {
  select {
  case err := <-che:
    return err
  default:
    return nil
  }
}

var wg sync.WaitGroup
for _, kv := range kvs {
  wg.Add(1)
  txn := db.NewTransaction(true)
  handle(txn.Set(kv.Key, kv.Value)) // handle is your error-handling helper (not shown)
  handle(txn.Commit(func(err error) {
    storeErr(err)
    wg.Done()
  }))
}
wg.Wait()
return getErr()
```

In this code, we passed a callback function to `txn.Commit`, which can pick up
and return the first error encountered, if any. Callbacks can be made to do more
things, like retrying commits etc.

- **I don't see any disk writes. Why?**

If you're using Badger with `SyncWrites=false`, then your writes might not be written to the value log
and won't get synced to disk immediately. Writes to the LSM tree are done in memory first, before they
get compacted to disk. The compaction would only happen once `MaxTableSize` has been reached. So, if
you're doing a few writes and then checking, you might not see anything on disk. Once you `Close`
the database, you'll see these writes on disk.

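If you need synchronous durability instead, you can open the database with
`SyncWrites` enabled; a minimal sketch:

```go
opts := badger.DefaultOptions
opts.Dir = "/tmp/badger"
opts.ValueDir = "/tmp/badger"
opts.SyncWrites = true // sync writes to disk as part of each commit
db, err := badger.Open(opts)
```
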
- **Reverse iteration doesn't give me the right results.**

Just like forward iteration goes to the first key which is equal to or greater than the SEEK key, reverse iteration goes to the first key which is equal to or smaller than the SEEK key. Therefore, the SEEK key itself would not be part of the results. You can typically add a `0xff` byte as a suffix to the SEEK key to include it in the results. See the following issues: [#436](https://github.com/dgraph-io/badger/issues/436) and [#347](https://github.com/dgraph-io/badger/issues/347).

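For illustration, a reverse prefix scan using the `IteratorOptions.Reverse`
flag might look like this sketch, with the `0xff` suffix workaround applied:

```go
err := db.View(func(txn *badger.Txn) error {
  opts := badger.DefaultIteratorOptions
  opts.Reverse = true // iterate from larger keys to smaller keys
  it := txn.NewIterator(opts)
  defer it.Close()

  prefix := []byte("1234")
  // Seek just past the last possible key with this prefix, so the
  // boundary key itself is included in the results.
  seekKey := append(prefix, 0xff)
  for it.Seek(seekKey); it.ValidForPrefix(prefix); it.Next() {
    fmt.Printf("key=%s\n", it.Item().Key())
  }
  return nil
})
```
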
- **Which instances should I use for Badger?**

We recommend using instances which provide local SSD storage, without any limit
on the maximum IOPS. In AWS, these are storage-optimized instances like i3. They
provide local SSDs which easily clock 100K IOPS over 4KB blocks.

- **I'm getting a closed channel error. Why?**

```
panic: close of closed channel
panic: send on closed channel
```

If you're seeing panics like the above, it is because you're operating on a closed DB. This can happen if you call `Close()` before sending a write, or call it multiple times. You should ensure that you only call `Close()` once, and that all your read/write operations finish before closing.

- **Are there any Go specific settings that I should use?**

We *highly* recommend setting a high number for GOMAXPROCS, which allows Go to
observe the full IOPS throughput provided by modern SSDs. In Dgraph, we have set
it to 128. For more details, [see this
thread](https://groups.google.com/d/topic/golang-nuts/jPb_h3TvlKE/discussion).

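You can set it via the `GOMAXPROCS` environment variable, or at startup in
code; a minimal sketch:

```go
package main

import "runtime"

func main() {
  // Allow up to 128 OS threads to execute Go code simultaneously,
  // so more concurrent IO can be issued against the SSD.
  runtime.GOMAXPROCS(128)

  // ... open Badger and run your workload here.
}
```
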
- **Are there any Linux specific settings that I should use?**

We recommend setting the maximum number of file descriptors to a high number, depending upon the expected size of your data.

## Contact
- Please use [discuss.dgraph.io](https://discuss.dgraph.io) for questions, feature requests and discussions.
- Please use the [Github issue tracker](https://github.com/dgraph-io/badger/issues) for filing bugs or feature requests.
- Join [![Slack Status](http://slack.dgraph.io/badge.svg)](http://slack.dgraph.io).
- Follow us on Twitter [@dgraphlabs](https://twitter.com/dgraphlabs).