Revision tags: vendor/llvm-project/llvmorg-18.1.5-0-g617a15a9eac9, vendor/NetBSD/bmake/20240430, vendor/libcbor/0.11.0, vendor/llvm-project/llvmorg-18.1.4-0-ge6c3289804a6, vendor/device-tree/6.8, vendor/device-tree/6.7, vendor/llvm-project/llvmorg-18.1.3-0-gc13b7485b879 |
|
#
1931b75e |
| 23-Mar-2024 |
John Baldwin <jhb@FreeBSD.org> |
nvme: Export constants for min and max queue sizes
These are useful for NVMe over Fabrics.
Reviewed by: imp Sponsored by: Chelsio Communications Differential Revision: https://reviews.freebsd.org/D44441
|
Revision tags: vendor/device-tree/6.5, vendor/openssh/9.7p1, vendor/unbound/1.19.3, vendor/NetBSD/bmake/20240309, vendor/sqlite3/sqlite-3450100, vendor/llvm-project/llvmorg-18.1.1-0-gdba2a75e9c7e, vendor/got/diff/2023-09-15, release/13.3.0, vendor/libucl/20240206, vendor/xz/5.6.0, vendor/llvm-project/llvmorg-18.1.0-rc3-0-g6c90f8dd5463, vendor/llvm-project/llvmorg-18.1.0-rc2-53-gc7b0a6ecd442, vendor/arm-optimized-routines/v24.01, vendor/zlib/1.3.1, vendor/expat/2.6.0, vendor/unbound/1.19.1, vendor/tzcode/tzcode2024a, vendor/llvm-project/llvmorg-18.1.0-rc2-0-gc6c86965d967, vendor/tzdata/tzdata2024a, vendor/sendmail/8.18.1, vendor/acpica/20230628, vendor/acpica/20230331, vendor/llvm-project/llvmorg-18-init-18361-g22683463740e, vendor/libcxxrt/2024-01-25-fd484be8d1e94a1fcf6bc5c67e5c07b65ada19b6, vendor/llvm-project/llvmorg-18-init-18359-g93248729cfae, vendor/sqlite3/sqlite-3450000, vendor/NetBSD/bmake/20240108, vendor/llvm-project/llvmorg-18-init-16864-g3b3ee1f53424, vendor/llvm-project/llvmorg-18-init-16595-g7c00a5be5cde, vendor/llvm-project/llvmorg-18-init-16003-gfc5f51cf5af4, vendor/bc/6.7.4, vendor/ena-com/2.7.0, vendor/llvm-project/llvmorg-18-init-15692-g007ed0dccd6a, vendor/tzdata/tzdata2023d, vendor/openssh/9.6p1, vendor/llvm-project/llvmorg-18-init-15088-gd14ee76181fb, vendor/llvm-project/llvmorg-18-init-14265-ga17671084db1, vendor/llvm-project/llvmorg-17.0.6-0-g6009708b4367, vendor/xz/5.4.5, vendor/llvm-project/llvmorg-17.0.5-0-g98bfdac5ce82, vendor/unbound/1.19.0, vendor/sqlite3/sqlite-3440000, release/14.0.0 |
|
#
8d6c0743 |
| 06-Nov-2023 |
Alexander Motin <mav@FreeBSD.org> |
nvme: Introduce longer timeouts for admin queue
KIOXIA CD8 SSDs routinely take ~25 seconds to delete a non-empty namespace. In some cases, like hot-plug, it takes longer, triggering timeouts and controller resets after just 30 seconds. Linux has had a separate 60-second timeout for the admin queue for many years. This patch does the same, and it is good to be consistent.
Sponsored by: iXsystems, Inc. Reviewed by: imp MFC after: 1 week Differential Revision: https://reviews.freebsd.org/D42454
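For illustration, a minimal sketch of a per-queue timeout choice; the constant names and qpair layout here are assumptions, not the committed code:

#include <stdint.h>

/* Stand-in for the driver's qpair; only the queue id matters here. */
struct nvme_qpair {
    uint32_t id;    /* 0 is the admin queue in NVMe */
};

#define NVME_DEFAULT_TIMEOUT_PERIOD 30  /* seconds, I/O queues */
#define NVME_ADMIN_TIMEOUT_PERIOD   60  /* seconds, admin queue */

static uint32_t
nvme_qpair_timeout_period(const struct nvme_qpair *qpair)
{
    return (qpair->id == 0 ? NVME_ADMIN_TIMEOUT_PERIOD :
        NVME_DEFAULT_TIMEOUT_PERIOD);
}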
|
Revision tags: vendor/bc/6.7.2, vendor/llvm-project/llvmorg-17.0.3-0-g888437e1b600 |
|
#
9cd7b624 |
| 10-Oct-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Eliminate RECOVERY_FAILED state
While it seemed like a good idea to have this state, we can do everything we wanted with the state by checking ctrlr->is_failed since that's set before we start failing the qpairs. Add some comments about racing when we're failing the controller, though in practice I'm not sure that kind of race could even be lost.
Sponsored by: Netflix Reviewed by: chuck, gallatin, jhb Differential Revision: https://reviews.freebsd.org/D42051
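A rough sketch of the simplification (ctrlr->is_failed is from the message; the struct layout and helper are hypothetical): any path that would have tested for RECOVERY_FAILED can test the controller flag instead.

#include <stdbool.h>

struct nvme_controller {
    bool is_failed;     /* set before we start failing the qpairs */
};

enum nvme_recovery {
    RECOVERY_NONE,      /* normal operation */
    RECOVERY_WAITING    /* reset in progress */
    /* RECOVERY_FAILED is gone: ctrlr->is_failed already says so */
};

/* Hypothetical helper: any check of the old state becomes this. */
static bool
nvme_recovery_is_failed(const struct nvme_controller *ctrlr)
{
    return (ctrlr->is_failed);
}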
|
#
bc85cd30 |
| 10-Oct-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: gc nvme_ctrlr_post_failed_request and related task stuff
In 4b977e6dda92 we removed the call to nvme_ctrlr_post_failed_request because we can now directly fail requests in this context since we're in the reset task already. No need to queue it. I left it in place against future need, but it's been two years and no panics have resulted. Since the static analysis (code checking) and the dynamic analysis (surviving in the field for 2 years, including at $WORK where we know we've gone through this path when we've failed drives) both signal that it's not really needed, go ahead and GC it. If we discover at a later date a flaw in this analysis, we can add it back easily enough by reverting this and 4b977e6dda92.
Sponsored by: Netflix Reviewed by: chuck, gallatin, jhb Differential Revision: https://reviews.freebsd.org/D42048
|
Revision tags: vendor/bsddialog/1.0, vendor/llvm-project/llvmorg-17.0.2-0-gb2417f51dbbd, vendor/openssh/9.5p1, vendor/llvm-project/llvmorg-17.0.1-25-g098e653a5bed, vendor/nvi/2.2.1 |
|
#
da8324a9 |
| 24-Sep-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Fix locking protocol violation to fix suspend / resume
Currently, when we suspend, we need to tear down all the qpairs. We call nvme_admin_qpair_abort_aers with the admin qpair lock held, but the tracker it will call for the pending AER also locks it (recursively), hitting an assert. This routine is called without the qpair lock held when we destroy the device entirely in a number of places. Add an assert to this effect and drop the qpair lock before calling it. nvme_admin_qpair_abort_aers then locks the qpair lock to traverse the list, dropping it around calls to nvme_qpair_complete_tracker, and restarting the list scan after picking it back up.
Note: If interrupts are still running, there's a tiny window for these AERs: If one fires just an instant after we manually complete it, then we'll be fine: we set the state of the queue to 'waiting' and we ignore interrupts while 'waiting'. We know we'll destroy all the queue state with these pending interrupts before looking at them again and we know all the TRs will have been completed or rescheduled. So either way we're covered.
Also, tidy up the failure case as well: failing a queue is a superset of disabling it, so no need to call disable first. This solves some locking issues with recursion since we don't need to recurse. Set the qpair state of failed queues to RECOVERY_FAILED and stop scheduling the watchdog. Assert we're not failed when we're enabling a qpair, since failure currently is one-way. Make failure a little less verbose.
Next, kill the pre/post reset stuff. It's completely bogus: since we disable the qpairs, we don't need to also hold the lock through the reset; disabling will cause the ISR to return early. This keeps us from recursing on the recovery lock when resuming. We only need the recovery lock to avoid a specific race between the timer and the ISR.
Finally, kill NVME_RESET_2X. It's been a major release since we put it in and nobody has used it as far as I can tell. And it was a motivator for the pre/post uglification.
These are all interrelated, so need to be done at the same time.
Sponsored by: Netflix Reviewed by: jhb Tested by: jhb (made sure suspend / resume worked) MFC After: 3 days Differential Revision: https://reviews.freebsd.org/D41866
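The locking dance is easier to see in code. A simplified sketch with stand-in structures; the real routine also filters for AER trackers and passes status arguments to nvme_qpair_complete_tracker:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/queue.h>

struct nvme_tracker {
    TAILQ_ENTRY(nvme_tracker) tailq;
};

struct nvme_qpair {
    struct mtx lock;
    TAILQ_HEAD(, nvme_tracker) outstanding_tr;
};

/* Simplified stand-in; completing a tracker removes it from the list. */
static void nvme_qpair_complete_tracker(struct nvme_qpair *,
    struct nvme_tracker *);

static void
nvme_admin_qpair_abort_aers(struct nvme_qpair *qpair)
{
    struct nvme_tracker *tr;

    /* Callers must no longer hold the qpair lock. */
    mtx_assert(&qpair->lock, MA_NOTOWNED);

    mtx_lock(&qpair->lock);
    while ((tr = TAILQ_FIRST(&qpair->outstanding_tr)) != NULL) {
        /*
         * Completing the tracker locks the qpair itself, so drop
         * the lock around the call, then restart the scan after
         * reacquiring it, since the list may have changed.
         */
        mtx_unlock(&qpair->lock);
        nvme_qpair_complete_tracker(qpair, tr);
        mtx_lock(&qpair->lock);
    }
    mtx_unlock(&qpair->lock);
}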
|
Revision tags: vendor/openssl/3.0.11, vendor/sqlite3/sqlite-3430100, vendor/unbound/1.18.0, vendor/NetBSD/bmake/20230909, vendor/openssl/1.1.1w, vendor/llvm-project/llvmorg-17.0.0-rc4-10-g0176e8729ea4, vendor/file/5.45, vendor/llvm-project/llvmorg-17.0.0-rc3-79-ga612cb0b81d8, vendor/krb5/1.21.2 |
|
#
8052b01e |
| 25-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Add exclusion for ISR
Add a basically uncontended spinlock that we take out while the ISR is running. This has two effects: First, when we get a timeout, we can safely call the nvme_qpair_process_completions w/o racing any ISRs. Second, we can use it to ensure that we don't reset the card while the ISRs are active (right now we just sleep and hope for the best, which usually is fine, but not always).
Sponsored by: Netflix MFC After: 2 weeks Reviewed by: chuck, gallatin Differential Revision: https://reviews.freebsd.org/D41452
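In sketch form (lock and function names are illustrative), the ISR and the timeout path serialize on one mostly-uncontended spin mutex:

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

struct nvme_qpair {
    struct mtx recovery;    /* initialized MTX_SPIN; held while ISR runs */
};

/* Stand-in for the real completion processing. */
static void nvme_qpair_process_completions(struct nvme_qpair *);

static void
nvme_qpair_isr(void *arg)
{
    struct nvme_qpair *qpair = arg;

    mtx_lock_spin(&qpair->recovery);
    nvme_qpair_process_completions(qpair);
    mtx_unlock_spin(&qpair->recovery);
}

/*
 * Timeout path: taking the same spinlock means we can't race a live
 * ISR, and a reset can exclude the ISR instead of sleeping and hoping.
 */
static void
nvme_qpair_timeout_poll(struct nvme_qpair *qpair)
{
    mtx_lock_spin(&qpair->recovery);
    nvme_qpair_process_completions(qpair);
    mtx_unlock_spin(&qpair->recovery);
}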
|
#
d4959bfc |
| 25-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Greatly improve error recovery
Next phase of error recovery: Eliminate the RECOVERY_START phase, since we don't need to wait to start recovery. Eliminate the RECOVERY_RESET phase since it is transient; we now transition from RECOVERY_NORMAL into RECOVERY_WAITING.
In normal mode, read the status of the controller. If it is in a failed state, or appears to be hot-plugged, jump directly to reset, which will sort out the proper things to do. This will cause all pending I/O to complete with an abort status before the reset.
When in the NORMAL state, call the interrupt handler. This will complete all pending transactions when interrupts are broken or temporarily misbehaving. We then check all the pending completions for timeouts. If we have abort enabled, then we'll send an abort. Otherwise we'll assume the controller is wedged and needs a reset. By calling the interrupt handler here, we'll avoid an issue with the current code, where we transitioned to RECOVERY_START, which prevented any completions from happening. Now completions happen. In addition, follow-on I/O that is scheduled in the completion routines will be submitted, rather than queued, because the recovery state is correct. This also fixes a problem where I/O would time out, but never complete, leading to hung I/O.
Resetting remains the same as before; just when we choose to reset has changed.
A nice side effect of these changes is that we now do I/O when interrupts to the card are totally broken. Follow-on commits will improve the error reporting and logging when this happens. Performance will be awful, but the device will at least be minimally functional.
There is a small race when we're checking the completions if interrupts are working, but this is handled in a future commit.
Sponsored by: Netflix MFC After: 2 weeks Differential Revision: https://reviews.freebsd.org/D36922
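The resulting timeout-handler flow, sketched with hypothetical helper names standing in for the behavior described above:

#include <stdbool.h>

enum nvme_recovery { RECOVERY_NORMAL, RECOVERY_WAITING };

struct nvme_controller { bool enable_aborts; };
struct nvme_qpair {
    enum nvme_recovery recovery_state;
    struct nvme_controller *ctrlr;
};

/* Hypothetical helpers standing in for the real driver routines. */
static bool nvme_ctrlr_failed_or_gone(struct nvme_controller *);
static void nvme_ctrlr_reset(struct nvme_controller *);
static void nvme_qpair_process_completions(struct nvme_qpair *);
static bool nvme_qpair_has_timed_out_cmds(struct nvme_qpair *);
static void nvme_qpair_send_aborts(struct nvme_qpair *);

static void
nvme_qpair_timeout(struct nvme_qpair *qpair)
{
    struct nvme_controller *ctrlr = qpair->ctrlr;

    switch (qpair->recovery_state) {
    case RECOVERY_NORMAL:
        /* Dead or hot-unplugged controller: reset sorts it out. */
        if (nvme_ctrlr_failed_or_gone(ctrlr)) {
            nvme_ctrlr_reset(ctrlr);
            break;
        }
        /* Run completions even if interrupts are broken. */
        nvme_qpair_process_completions(qpair);

        /* Anything still pending past its deadline? */
        if (nvme_qpair_has_timed_out_cmds(qpair)) {
            if (ctrlr->enable_aborts)
                nvme_qpair_send_aborts(qpair);
            else
                nvme_ctrlr_reset(ctrlr);
        }
        break;
    case RECOVERY_WAITING:
        /* A reset is already in flight; nothing to do here. */
        break;
    }
}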
|
Revision tags: vendor/unifdef/2.12, vendor/unifdef/2.11, 2023.08.19-b34f66deb02e188104, vendor/zlib/1.3 |
|
#
95ee2897 |
| 16-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
sys: Remove $FreeBSD$: two-line .h pattern
Remove /^\s*\*\n \*\s+\$FreeBSD\$$\n/
|
#
33469f10 |
| 14-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: use mtx_padalign instead of mtx + alignment attribute
The nvme driver predates, it seems, mtx_padalign. Modernize.
Sponsored by: Netflix
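The change amounts to swapping a hand-padded mutex for the dedicated type; roughly (field placement illustrative):

#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>

/* Before: a plain mtx, padded by hand to avoid false sharing. */
struct qpair_old {
    struct mtx lock __aligned(CACHE_LINE_SIZE);
};

/* After: struct mtx_padalign carries cache-line alignment itself. */
struct qpair_new {
    struct mtx_padalign lock;
};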
|
Revision tags: vendor/less/v643, vendor/NetBSD/libc-vis/20230813, vendor/openssh/9.4p1, vendor/device-tree/6.4, vendor/device-tree/6.3, vendor/device-tree/6.2, vendor/device-tree/6.1 |
|
#
09c20a29 |
| 08-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Move bools to fill hole
The two bools in nvme_request create a 6 byte hole today. Move them to after retries to fill the 4 byte hole there and add a spare[2] to make nvme_request 8 bytes smaller. spare[2] isn't strictly necessary, but documents how many bytes we have left in that hole, as the number of booleans will increase shortly.
Suggested by: chuck Sponsored by: Netflix
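Hole-filling like this is easy to demonstrate. A runnable userland sketch with hypothetical field names (not nvme_request's real layout): on LP64, moving the bools next to the uint32_t shrinks the struct from 32 to 24 bytes.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct before {
    void    *payload;   /* 8 bytes */
    uint32_t retries;   /* 4 bytes, then a 4-byte hole */
    void    *cb;        /* 8 bytes */
    bool     a, b;      /* 2 bytes + 6 bytes of tail padding */
};

struct after {
    void    *payload;
    uint32_t retries;
    bool     a, b;      /* tucked into the hole after retries */
    uint8_t  spare[2];  /* documents what's left of the hole */
    void    *cb;
};

int
main(void)
{
    printf("before: %zu after: %zu\n", sizeof(struct before),
        sizeof(struct after));  /* 32 vs. 24 on LP64 */
    return (0);
}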
|
#
7be0b068 |
| 07-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Remove duplicate command printing routine
Both nvme_dump_command and nvme_qpair_print_command print nvme commands. The latter is better. Recode the one call to nvme_dump_command to use nvme_qpair_print_command and delete the former. No sense having two nearly identical routines. A future commit will convert to sbuf.
Sponsored by: Netflix Reviewed by: chuck, mav, jhb Differential Revision: https://reviews.freebsd.org/D41309
|
#
6f76d493 |
| 07-Aug-2023 |
Warner Losh <imp@FreeBSD.org> |
nvme: Remove duplicate completion printing routine
Both nvme_dump_completion and nvme_qpair_print_completion print completions. The latter is better. Recode the two instances of nvme_dump_completion to use nvme_qpair_print_completion and delete the former. No sense having two nearly identical routines. A future commit will convert this to sbuf.
Sponsored by: Netflix Reviewed by: chuck Differential Revision: https://reviews.freebsd.org/D41308
|
Revision tags: vendor/krb5/1.21.1, vendor/xz/5.4.4, vendor/openssl/3.0.10, vendor/openssl/1.1.1v, vendor/llvm-project/llvmorg-17-init-19311-gbc849e525f80, vendor/llvm-project/llvmorg-17-init-19304-gd0b54bb50e51 |
|
#
92103adb |
| 24-Jul-2023 |
John Baldwin <jhb@FreeBSD.org> |
nvme: Use a memdesc for the request buffer instead of a bespoke union.
This avoids encoding CAM-specific knowledge in nvme_qpair.c.
Reviewed by: chuck, imp, markj Sponsored by: Chelsio Communications Differential Revision: https://reviews.freebsd.org/D41119
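Roughly the shape of the change (the "before" union is a paraphrase, not the driver's exact fields): a generic struct memdesc describes the buffer, so nvme_qpair.c can hand it to busdma via bus_dmamap_load_mem() without decoding each payload flavor itself.

#include <sys/types.h>
#include <sys/memdesc.h>

struct bio;

/* Before (paraphrased): a bespoke union the qpair code had to decode. */
struct nvme_request_old {
    union {
        void       *vaddr;
        struct bio *bio;
        /* ...plus CAM-specific payload types... */
    } u;
};

/* After: one generic descriptor, loadable via bus_dmamap_load_mem(). */
struct nvme_request_new {
    struct memdesc payload;
};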
|
Revision tags: vendor/openssh/9.3p2, vendor/lua/5.4.6, vendor/NetBSD/bmake/20230622, vendor/openpam/XIMENIA, vendor/heimdal/7.8.0-2023-06-10-f62e2f278, vendor/openssl/3.0.9, vendor/llvm-project/llvmorg-16.0.6-0-g7cbf1a259152, vendor/ntp/4.2.8p17, vendor/llvm-project/llvmorg-16.0.5-0-g185b81e034ba, vendor/spleen/2.0.0, vendor/ntp/4.2.8p16, vendor/openssl/1.1.1u, vendor/sqlite3/sqlite-3420000, vendor/bc/6.6.0, vendor/llvm-project/llvmorg-16.0.4-0-gae42196bc493, vendor/NetBSD/bmake/20230510, vendor/xz/5.4.3 |
|
#
4d846d26 |
| 10-May-2023 |
Warner Losh <imp@FreeBSD.org> |
spdx: The BSD-2-Clause-FreeBSD identifier is obsolete, drop -FreeBSD
The SPDX folks have obsoleted the BSD-2-Clause-FreeBSD identifier. Catch up to that fact and revert to their recommended match of BSD-2-Clause.
Discussed with: pfg MFC After: 3 days Sponsored by: Netflix
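Concretely, the license tag in each file changes like this:

/* Before */
/* SPDX-License-Identifier: BSD-2-Clause-FreeBSD */

/* After */
/* SPDX-License-Identifier: BSD-2-Clause */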
|
Revision tags: vendor/tcpdump/4.99.4, vendor/llvm-project/llvmorg-16.0.3-0-gda3cd333bea5, vendor/ldns/1.8.3, vendor/spleen/1.9.3, vendor/libpcap/1.10.4, vendor/spleen/1.6.0, vendor/less/v632, vendor/bc/6.5.0, vendor/libfido2/1.13.0, vendor/libfido2/1.12.0, vendor/libfido2/1.11.0, vendor/libfido2/1.10.0, vendor/libfido2/1.9.0, vendor/NetBSD/bmake/20230414, vendor/llvm-project/llvmorg-16.0.2-0-g18ddebe1a1a9, vendor/libcbor/0.10.2, vendor/tzcode/tzcode2023c, vendor/tzcode/tzcode2023b, vendor/tzcode/tzcode2023a, vendor/sqlite3/sqlite-3410200, vendor/llvm-project/llvmorg-16.0.1-0-gcd89023f7979, release/13.2.0, vendor/llvm-project/llvmorg-16.0.0-45-g42d1b276f779, vendor/llvm-project/llvmorg-16.0.0-0-g08d094a0e457, vendor/tzdata/tzdata2023c, vendor/libpcap/1.10.3, vendor/opencsd/v1.4.0, vendor/arm-optimized-routines/v23.01, vendor/tzdata/tzdata2023b, vendor/tzdata/tzdata2023a, vendor/xz/5.4.2, vendor/openssh/9.3p1, vendor/openssl/3.0.8, vendor/bc/6.4.0, vendor/sqlite3/sqlite-3410000, vendor/bc/6.3.1, vendor/bearssl/20230220, vendor/zlib/1.2.13, vendor/llvm-project/llvmorg-16.0.0-rc2-10-g073506d8c15c, vendor/llvm-project/llvmorg-16-init-18548-gb0daacf58f41, vendor/NetBSD/bmake/20230208, vendor/byacc/20230201, vendor/openssl/1.1.1t, vendor/NetBSD/libedit/2023-01-06, vendor/openssh/9.2p1, vendor/tcsh/6.24.07, vendor/bc/6.2.2, vendor/bc/6.2.1, vendor/bc/6.2.0, vendor/bc/6.1.0, vendor/bc/6.0.4, vendor/NetBSD/bmake/20230126, vendor/Juniper/libxo/1.6.0, vendor/zstd/1.5.2, vendor/xz/5.4.1, vendor/sendmail/8.17.1, vendor/llvm-project/llvmorg-15.0.7-0-g8dfdcc7b7bf6, vendor/heimdal/7.8.0, vendor/sqlite3/sqlite-3400100, vendor/xz/5.4.0, vendor/tzcode/tzcode2022g, vendor/tzcode/tzcode2022f, vendor/tzcode/tzcode2022e, vendor/tzcode/tzcode2022d, vendor/xz/5.2.9, vendor/llvm-project/llvmorg-15.0.6-0-g088f33605d8a, vendor/tzdata/tzdata2022g, release/12.4.0, vendor/sqlite3/sqlite-3400000, vendor/expat/2.5.0, vendor/xz/5.2.8, vendor/device-tree/6.0, vendor/device-tree/5.19, vendor/openssl/1.1.1s, vendor/wireguard-tools/v1.0.20210914, vendor/tzdata/tzdata2022f, vendor/acpica/20221020, vendor/unbound/1.17.0, vendor/llvm-project/llvmorg-15.0.2-10-gf3c5289e7846, vendor/llvm-project/llvmorg-15.0.2-0-g4bd3f3759259, vendor/llvm-project/llvmorg-15.0.1-0-gb73d2c8c720a, vendor/tzdata/tzdata2022e, vendor/openssh/9.1p1, vendor/unbound/1.16.3, vendor/bsddialog/0.4, vendor/tzdata/tzdata2022d, vendor/file/5.43, vendor/expat/2.4.9, vendor/sqlite3/sqlite-3390300, vendor/llvm-project/llvmorg-15.0.0-9-g1c73596d3454, vendor/llvm-project/llvmorg-15.0.0-0-g4ba6a9c9f65b, vendor/less/v608, vendor/bsddialog/0.3, vendor/lua/5.4.4, vendor/lua/5.4.3, vendor/sqlite3/sqlite-3390200, vendor/bc/6.0.2, verndor/bc/6.0.2, vendor/dhcpcd/9.4.1, vendor/tzcode/tzcode2022c, vendor/tzcode/unsplit, vendor/tzdata/tzdata2022c, vendor/llvm-project/llvmorg-15.0.0-rc2-40-gfbd2950d8d0d, vendor/tzdata/tzdata2022b, vendor/arm-optimized-routines/20220210-89ca9c3, vendor/device-tree/5.18, vendor/device-tree/5.17, vendor/device-tree/5.16, vendor/device-tree/5.15, vendor/device-tree/5.14, vendor/unbound/1.16.2, vendor/llvm-project/llvmorg-15-init-17826-g1f8ae9d7e7e4, vendor/llvm-project/llvmorg-15-init-17827-gd77882e66779, vendor/NetBSD/bmake/20220726, vendor/NetBSD/bmake/20220724, vendor/llvm-project/llvmorg-15-init-17485-ga3e38b4a206b, vendor/llvm-project/llvmorg-15-init-16436-g18a6ab5b8d1f, vendor/unbound/1.16.1, vendor/sqlite3/sqlite-3390000, vendor/openssl/1.1.1q, vendor/file/5.42, vendor/llvm-project/llvmorg-15-init-15358-g53dc0f107877, vendor/openssl/1.1.1p, 
vendor/bc/5.3.3, vendor/bc/5.3.2, vendor/llvm-project/llvmorg-14.0.5-0-gc12386ae247c, vendor/bc/5.3.1, vendor/bc/5.3.0, vendor/unbound/1.16.0, vendor/llvm-project/llvmorg-14.0.4-0-g29f1039a7285, vendor/sqlite3/sqlite-3380500, release/13.1.0, upstream/13.1.0, vendor/bc/5.2.5 |
|
#
1093caa1 |
| 06-May-2022 |
John Baldwin <jhb@FreeBSD.org> |
nvme: Remove unused devclass arguments to DRIVER_MODULE.
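For illustration (argument values are representative, not a quote of the driver's exact line), the macro invocation simply loses its devclass_t parameter:

/* Before: a devclass_t was threaded through, but never used. */
DRIVER_MODULE(nvme, pci, nvme_pci_driver, nvme_devclass, NULL, NULL);

/* After: the argument is dropped. */
DRIVER_MODULE(nvme, pci, nvme_pci_driver, NULL, NULL);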
|
Revision tags: vendor/openssl/1.1.1o, vendor/llvm-project/llvmorg-14.0.2-0-g0e27d08cdeb3, vendor/llvm-project/llvmorg-14.0.3-0-g1f9140064dfb, vendor/NetBSD/bmake/20220418, vendor/bearssl/20220418, vendor/bc/5.2.4 |
|
#
3a468f20 |
| 15-Apr-2022 |
Warner Losh <imp@FreeBSD.org> |
nvme: Use saved mps when initializing drive
Make sure we set the MPS we cached (currently the drive's minimum MPS) in CC (Controller Configuration) when reinitializing the drive. It must match the page_size that we're going to use. Also retire the less specific NVME_PAGE_SHIFT since it's now unused.
Sponsored by: Netflix Reviewed by: chuck Differential Revision: https://reviews.freebsd.org/D34869
|
#
55412ef9 |
| 15-Apr-2022 |
Warner Losh <imp@FreeBSD.org> |
nvme: Rename min_page_size to page_size and save mps
The Memory Page Size sets the basic unit of operation for the drive. We currently set this to the drive's minimum page size, but we could set it to any page size the drive supports in the future. Replace min_page_size (it's now unused for that purpose) with page_size to reflect this and cache the MPS we want to use. Use NVME_MPS_SHIFT to compute page_size.
Sponsored by: Netflix Reviewed by: chuck Differential Revision: https://reviews.freebsd.org/D34868
|
Revision tags: vendor/NetBSD/libedit/2022-04-11, vendor/openssh/9.0p1, vendor/NetBSD/bmake/20220330, vendor/acpica/20220331, vendor/zlib/1.2.12 |
|
#
6af6a52e |
| 29-Mar-2022 |
Warner Losh <imp@FreeBSD.org> |
nvme: Save cap_lo and cap_hi
Save the capabilities for the drive.
Sponsored by: Netflix
|
#
a70b5660 |
| 29-Mar-2022 |
Warner Losh <imp@FreeBSD.org> |
nvme: MPS is a power of two, not a size / 8k
Setting MPS in the CC should be a power of 2 number (it specifies that the page size of the host is 2^(12+MPS)), so adjust the calculation. There is no functional change because we do not support any architectures with pages != 4k (yet). Other changes are needed for architectures with 16k or 64k pages, especially when the underlying NVMe drive doesn't support that page size (most drives support a small range, and many only support 4k), but let's at least do this calculation correctly. 12 - 12 is just as much 0 as 4096 >> 13 is :)
Sponsored by: Netflix Reviewed by: mav Differential Revision: https://reviews.freebsd.org/D34707
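A runnable check of the arithmetic: for 4k pages the old size-based math and the power-of-two math agree (both 0), but for 64k pages they diverge.

#include <stdio.h>

#define NVME_MPS_BASE_SHIFT 12  /* CC.MPS unit: 2^(12+MPS) bytes */

static void
show(int page_shift)
{
    int page_size = 1 << page_shift;
    int mps_old = page_size >> 13;                  /* size / 8k */
    int mps_new = page_shift - NVME_MPS_BASE_SHIFT; /* power of two */

    printf("%6d-byte pages: old MPS=%d new MPS=%d\n",
        page_size, mps_old, mps_new);
}

int
main(void)
{
    show(12);   /* 4k:  old=0 new=0, hence no functional change */
    show(16);   /* 64k: old=8 (implies 1M pages!) new=4 (right) */
    return (0);
}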
|
Revision tags: vendor/llvm-project/llvmorg-14.0.0-2-g3f43d803382d, vendor/heimdal/7.7.0, vendor/expat/2.4.7, vendor/llvm-project/llvmorg-14.0.0-rc4-2-gadd3ab7f4c8a, vendor/tzdata/tzdata2022a, vendor/openssl/1.1.1n, vendor/bsddialog/0.2, vendor/libcxxrt/2022-03-09-fd484be8d1e94a1fcf6bc5c67e5c07b65ada19b6, vendor/bc/5.2.3, vendor/llvm-project/llvmorg-14.0.0-rc2-12-g09546e1b5103, vendor/expat/2.4.6, vendor/openssh/8.9p1, vendor/llvm-project/llvmorg-13.0.1-0-g75e33f71c2da, vendor/llvm-project/llvmorg-14.0.0-rc1-74-g4dc3cb8e3255, vendor/unbound/1.15.0, vendor/NetBSD/bmake/20220208, vendor/bc/5.2.2, vendor/NetBSD/bmake/20220204, vendor/llvm-project/llvmorg-14-init-18315-g190be5457c90, vendor/llvm-project/llvmorg-14-init-18294-gdb01b123d012, vendor/terminus/terminus-font-4.49.1, vendor/bsddialog/0.1, vendor/llvm-project/llvmorg-14-init-17616-g024a1fab5c35, vendor/dma/2022-01-27, vendor/ena-com/2.5.0, vendor/wpa/2.10, vendor/expat/2.4.3, vendor/sqlite3/sqlite-3370200, vendor/wpa/gb26f5c0fe, vendor/sqlite3/sqlite-3370100, vendor/file/5.41, vendor/llvm-project/llvmorg-14-init-13186-g0c553cc1af2e, vendor/bsddialog/0.0.2, vendor/NetBSD/bmake/20211212, vendor/openssl/1.1.1m, vendor/unbound/1.14.0, vendor/bsddialog/0.0.1 |
|
#
7cf8d63c |
| 06-Dec-2021 |
Warner Losh <imp@FreeBSD.org> |
nvme_ahci: Mark AHCI devices as such in the controller
Add a quirk to flag AHCI attachment to the controller. This is for any of the strategies for attaching nvme devices as children of the AHCI device for Intel's RAID devices. This also has the side effect of cleaning up resource allocation from failed nvme_attach calls.
Sponsored by: Netflix Reviewed by: mav Differential Revision: https://reviews.freebsd.org/D33285
|
#
053f8ed6 |
| 06-Dec-2021 |
Warner Losh <imp@FreeBSD.org> |
nvme: Move to a quirk for the Intel alignment data
Prior to NVMe 1.3, Intel produced a series of drives that had performance alignment data in the vendor specific space since no standard had been defined. Move testing the versions to a quirk so the NVMe NS code doesn't know about PCI device info.
Sponsored by: Netflix Reviewed by: mav Differential Revision: https://reviews.freebsd.org/D33284
|
Revision tags: vendor/unbound/1.14.0rc1, vendor/llvm-project/llvmorg-14-init-11187-g222442ec2d71, release/12.3.0, upstream/12.3.0, vendor/wpa/g14ab4a816, vendor/bc/5.2.1, vendor/bc/5.2.0, vendor/bsddialog/2021-11-24, vendor/llvm-project/llvmorg-14-init-10223-g401b76fdf2b3, vendor/llvm-project/llvmorg-14-init-10186-gff7f2cfa959b, vendor/mandoc/1.14.6, vendor/openssh/8.8p1, vendor/ck/2021029, vendor/tzdata/tzdata2021e, vendor/tzdata/tzdata2021d, vendor/bc/5.1.1, vendor/bc/5.1.0, vendor/tzdata/tzdata2021c, vendor/libfido2/1.8.0, vendor/libcbor/0.8.0 |
|
#
83581511 |
| 01-Oct-2021 |
Warner Losh <imp@FreeBSD.org> |
nvme: Use adaptive spinning when polling for completion or state change
We only use nvme_completion_poll in the initialization path. The commands it queues and waits for finish quickly as they involve no I/O to the drive's media. These commands take about 20-200 microseconds each. Set the wait time to 1us and then increase it by 1.5x each successive iteration (max 1ms). This reduces initialization time by 80ms in cperciva's tests.
Use this same technique when waiting for RDY state transitions. This saves another 20ms. In total we're down from ~330ms to ~2ms.
Tested by: cperciva Sponsored by: Netflix Reviewed by: mav Differential Revision: https://reviews.freebsd.org/D32259
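A runnable userland sketch of that schedule (the growth step uses integer math rounded up so it actually advances from 1us; the driver's exact expression may differ):

#include <stdio.h>

int
main(void)
{
    int delay_us = 1, total_us = 0, i;

    for (i = 1; i <= 20; i++) {
        total_us += delay_us;   /* the driver would DELAY() here */
        printf("poll %2d: wait %4dus, total %5dus\n",
            i, delay_us, total_us);
        delay_us = (delay_us * 3 + 1) / 2;  /* grow ~1.5x... */
        if (delay_us > 1000)
            delay_us = 1000;    /* ...capped at 1ms */
    }
    return (0);
}

Commands that finish in the typical 20-200 microseconds complete within the first handful of polls, so initialization no longer pays a long fixed sleep per command.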
|
Revision tags: vendor/acpica/20210930 |
|
#
587aa255 |
| 29-Sep-2021 |
Warner Losh <imp@FreeBSD.org> |
nvme: count number of ignored interrupts
Count the number of times we're asked to process completions, but that we ignore because the qpair isn't in the RECOVERY_NONE state.
Sponsored by: Netflix Reviewed by: mav, chuck Differential Revision: https://reviews.freebsd.org/D32212
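In sketch form (structure simplified; the counter would be exported with the qpair's other stats):

#include <stdbool.h>
#include <stdint.h>

enum nvme_recovery { RECOVERY_NONE, RECOVERY_WAITING };

struct nvme_qpair {
    enum nvme_recovery recovery_state;
    uint64_t num_ignored;   /* completion requests we skipped */
};

static bool
nvme_qpair_process_completions(struct nvme_qpair *qpair)
{
    if (qpair->recovery_state != RECOVERY_NONE) {
        qpair->num_ignored++;   /* count it, then bail */
        return (false);
    }
    /* ...normal completion processing would go here... */
    return (true);
}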
|
Revision tags: vendor/llvm-project/llvmorg-13.0.0-0-gd7b669b3a303, vendor/llvm-project/llvmorg-13.0.0-rc4-0-gd7b669b3a303, vendor/tzdata/tzdata2021b |
|
#
502dc84a |
| 23-Sep-2021 |
Warner Losh <imp@FreeBSD.org> |
nvme: Use shared timeout rather than timeout per transaction
Keep track of the approximate time commands are 'due' and the next deadline for a command. Twice a second, wake up to see if any commands have timed out. If so, quiesce and then enter a recovery mode half the timeout further in the future to allow the ISR to complete. Once we exit recovery mode, we go back to operations as normal.
Sponsored by: Netflix Differential Revision: https://reviews.freebsd.org/D28583
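A sketch of the shared-timer shape (names are hypothetical; the real code tracks per-command deadlines and a recovery state machine):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/callout.h>

struct nvme_qpair {
    struct callout timer;       /* one timer per qpair */
    sbintime_t     deadline;    /* earliest pending command's deadline */
};

/* Hypothetical handler for a qpair that has blown its deadline. */
static void nvme_qpair_recover(struct nvme_qpair *);

static void
nvme_qpair_timeout_tick(void *arg)
{
    struct nvme_qpair *qpair = arg;

    /* SBT_MAX means nothing is outstanding. */
    if (qpair->deadline != SBT_MAX && getsbinuptime() > qpair->deadline)
        nvme_qpair_recover(qpair);  /* quiesce, wait, reset */

    /* One shared poll twice a second, not a callout per command. */
    callout_reset_sbt(&qpair->timer, SBT_1S / 2, 0,
        nvme_qpair_timeout_tick, qpair, 0);
}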
|
Revision tags: vendor/dma/2021-07-10, vendor/NetBSD/libedit/2021-09-10, vendor/bc/5.0.2, vendor/llvm-project/llvmorg-13.0.0-rc3-8-g08642a395f23, vendor/llvm-project/llvmorg-13.0.0-rc2-43-gf56129fe78d5, vendor/openssl/1.1.1l |
|
#
e3bdf3da |
| 31-Aug-2021 |
Alexander Motin <mav@FreeBSD.org> |
nvme(4): Add MSI and single MSI-X support.
If we can't allocate more MSI-X vectors, accept using a single shared one. If we can't allocate any MSI-X, try to allocate 2 MSI vectors, but accept a single shared one. If still no luck, fall back to shared INTx.
This provides maximal flexibility in some limited scenarios. For example, vmd(4) does not support INTx and can handle only a limited number of MSI/MSI-X vectors without sharing.
MFC after: 1 week
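The ladder, sketched with the standard pci(9) allocation calls (error handling and vector-to-queue sizing omitted):

#include <sys/param.h>
#include <sys/bus.h>
#include <dev/pci/pcivar.h>

/* Returns how many vectors were allocated; 0 means use INTx. */
static int
nvme_alloc_vectors(device_t dev, int want)
{
    int count;

    /* First choice: one MSI-X vector per queue... */
    count = pci_msix_count(dev);
    if (count > want)
        count = want;
    /* ...but a single shared MSI-X vector is acceptable too. */
    if (count > 0 && pci_alloc_msix(dev, &count) == 0)
        return (count);

    /* No MSI-X: try 2 MSI vectors, settling for 1 if need be. */
    count = 2;
    if (pci_alloc_msi(dev, &count) == 0)
        return (count);

    return (0); /* last resort: shared INTx */
}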
|