History log of /dragonfly/sys/dev/disk/nvme/nvme_ns.h
Revision tags: v6.2.1, v6.2.0, v6.3.0, v6.0.1, v6.0.0, v6.0.0rc1, v6.1.0, v5.8.3, v5.8.2, v5.8.1, v5.8.0, v5.9.0, v5.8.0rc1, v5.6.3, v5.6.2, v5.6.1, v5.6.0, v5.6.0rc1, v5.7.0, v5.4.3, v5.4.2, v5.4.1, v5.4.0, v5.5.0, v5.4.0rc1, v5.2.2, v5.2.1, v5.2.0, v5.3.0, v5.2.0rc, v5.0.2, v5.0.1, v5.0.0, v5.0.0rc2, v5.1.0, v5.0.0rc1, v4.8.1, v4.8.0, v4.6.2, v4.9.0, v4.8.0rc, v4.6.1, v4.6.0, v4.6.0rc2, v4.6.0rc, v4.7.0
# 97a077a0 05-Jun-2016 Matthew Dillon <dillon@apollo.backplane.com>

kernel - Initial native DragonFly NVME driver commit

* Initial from-scratch NVME implementation using the NVM Express 1.2a
chipset specification pdf. Nothing ported from anywhere else.

Basic implementation.

* Not yet connected to the build; interrupts are not yet functional
(it currently just polls at 100Hz for testing; see the sketch below).
Some additional error handling is needed, and we will need ioctl support
and a userland utility to handle various administrative tasks such as
formatting.
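
As an aside, a minimal sketch of what such a 100Hz poll loop could look
like is below, using the stock BSD callout API. The nvme_softc,
nvme_poll_completions, and nvme_poll_* names are hypothetical
placeholders for illustration, not the driver's actual symbols.

    /*
     * Hedged sketch: 100Hz polling via a callout.  The nvme_* names
     * are illustrative placeholders, not the real driver symbols.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/kernel.h>         /* hz */
    #include <sys/callout.h>

    struct nvme_softc;              /* driver soft state (opaque here) */
    void nvme_poll_completions(struct nvme_softc *sc);  /* reap done cmds */

    static struct callout nvme_poll_timer;

    static void
    nvme_poll_tick(void *arg)
    {
            struct nvme_softc *sc = arg;

            nvme_poll_completions(sc);      /* drain completion queues */
            /* re-arm hz/100 ticks out, i.e. 100 polls per second */
            callout_reset(&nvme_poll_timer, hz / 100, nvme_poll_tick, sc);
    }

    static void
    nvme_poll_start(struct nvme_softc *sc)
    {
            callout_init(&nvme_poll_timer);
            callout_reset(&nvme_poll_timer, hz / 100, nvme_poll_tick, sc);
    }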

* Near full header spec keyed in including the bits we don't use (yet).

* Full SMP BIO interface and four different queue topologies implemented,
depending on how many queues the chipset lets us create. The best is
ncpus * 4 queues, i.e. (low, high priority) x (read, write) per cpu
(see the indexing sketch below). The second best is just
(low, high priority) x (read, write) shared between all cpus.

Extremely low BIO overhead. Full strategy support and beginnings of
optimizations for low-latency I/Os (currently a hack).
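
To make the queue topology concrete, here is a sketch of how a
(cpu, direction, priority) tuple could be mapped onto the ncpus * 4
I/O queues in the best case. The function name and slot layout are
assumptions for illustration only, not the actual driver code.

    /*
     * Hypothetical index mapping for the ncpus * 4 topology:
     * (low, high priority) x (read, write) per cpu.
     */
    static __inline int
    nvme_qindex(int cpuid, int is_write, int is_high_pri)
    {
            /* slot 0..3: rd/lo, rd/hi, wr/lo, wr/hi */
            int slot = (is_write ? 2 : 0) | (is_high_pri ? 1 : 0);

            /* +1 because NVMe reserves queue id 0 for the admin queue */
            return (cpuid * 4 + slot + 1);
    }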

* Initial testing with multiple concurrent sequential dd runs on a small
Samsung NVMe mini-PCIe card:

    Block size   Throughput
    16KB         1.2 GBytes/sec
    32KB         2.0 GBytes/sec
    64KB         2.5 GBytes/sec
