- 10 Nov, 2016 2 commits
-
-
Christoph Hellwig authored
We only need the status and result fields, and passing them explicitly makes life a lot easier for the Fibre Channel transport, which doesn't have a full CQE for the fast-path case.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
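A minimal sketch of the calling convention this describes, assuming the helper name nvme_complete_async_event() and a result union like the one the companion nvme_request patch introduces; the exact in-tree prototype may differ:

```c
#include <linux/types.h>

struct nvme_ctrl;	/* opaque here; defined in drivers/nvme/host/nvme.h */

/*
 * The CQE "result" field carved out on its own, so callers that never build
 * a full struct nvme_completion (e.g. the FC transport fast path) can still
 * hand the result to the core.
 */
union nvme_result {
	__le16	u16;
	__le32	u32;
	__le64	u64;
};

/*
 * Completion helpers take the 16-bit status and the result explicitly
 * instead of a pointer to the whole CQE.
 */
void nvme_complete_async_event(struct nvme_ctrl *ctrl, __le16 status,
			       union nvme_result *res);
```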
-
Christoph Hellwig authored
This adds a shared per-request structure for all NVMe I/O. This structure is embedded as the first member in all NVMe transport drivers' request-private data and allows implementing common functionality between the drivers. The first use is to replace the current abuse of the SCSI command passthrough fields in struct request for the NVMe command passthrough, but it will grow more fields to allow implementing things like common abort handlers in the future. The passthrough commands are handled by having a pointer to the SQE (struct nvme_command) in struct nvme_request, and the union of the possible result fields, which had to be turned from an anonymous into a named union for that purpose. This avoids having to pass a reference to a full CQE around and thus makes checking the result a lot more lightweight.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
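A minimal sketch of such a shared per-request structure and its accessor, assuming the names struct nvme_request and nvme_req() and blk-mq's per-request private data helper blk_mq_rq_to_pdu(); the field set is only what the message describes:

```c
#include <linux/blk-mq.h>
#include <linux/nvme.h>		/* struct nvme_command, union nvme_result */

/*
 * Shared per-I/O state, embedded as the *first* member of every transport
 * driver's request-private data (PCIe, RDMA, FC, loop).
 */
struct nvme_request {
	struct nvme_command	*cmd;	/* SQE for passthrough commands */
	union nvme_result	result;	/* completion result, no full CQE needed */
};

/*
 * Because nvme_request sits at the start of the PDU, common code can reach
 * it from a struct request without knowing a transport's private layout.
 */
static inline struct nvme_request *nvme_req(struct request *req)
{
	return blk_mq_rq_to_pdu(req);
}
```

A transport driver would then lay out its own private data as, say, struct nvme_rdma_request { struct nvme_request req; ... }; (hypothetical example), and common code can check nvme_req(rq)->result when completing a passthrough command.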
-
- 15 Sep, 2016 1 commit
-
-
Christoph Hellwig authored
All drivers use the default, so provide an inline version of it. If we ever need another queue mapping we can add an optional method back, although supporting it will also require major changes to the queue setup code. This provides better code generation, and better debuggability as well.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
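A minimal sketch of what the inlined default mapping looks like, assuming the request_queue fields mq_map[] (per-CPU hardware queue index) and queue_hw_ctx[] that blk-mq carried at the time; treat the field names as assumptions:

```c
#include <linux/blkdev.h>
#include <linux/blk-mq.h>

/*
 * Default CPU -> hardware queue mapping: index the precomputed per-CPU
 * mq_map[] table and return the matching hardware context.  Making this a
 * shared inline removes the per-driver ->map_queue() indirect call.
 */
static inline struct blk_mq_hw_ctx *blk_mq_map_queue(struct request_queue *q,
						     int cpu)
{
	return q->queue_hw_ctx[q->mq_map[cpu]];
}
```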
-
- 18 Aug, 2016 1 commit
-
-
Jay Freyensee authored
Signed-off-by: Jay Freyensee <james_p_freyensee@linux.intel.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
-
- 03 Aug, 2016 1 commit
-
-
Sagi Grimberg authored
nvme_uninit_ctrl already does that for us. Note that we reordered nvme_loop_shutdown_ctrl with nvme_uninit_ctrl, but it's safe because we want controller uninit to happen before we shut down the transport resources.

Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
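A minimal sketch of the teardown ordering described above, assuming a delete work item on a struct nvme_loop_ctrl and the usual nvme_uninit_ctrl()/nvme_put_ctrl() helpers; names other than nvme_uninit_ctrl and nvme_loop_shutdown_ctrl are illustrative:

```c
#include <linux/kernel.h>
#include <linux/workqueue.h>

/*
 * Controller delete work: uninitialize the core controller state first,
 * then shut down the transport resources, then drop the final reference.
 */
static void nvme_loop_del_ctrl_work(struct work_struct *work)
{
	struct nvme_loop_ctrl *ctrl = container_of(work,
			struct nvme_loop_ctrl, delete_work);

	nvme_uninit_ctrl(&ctrl->ctrl);	/* controller uninit first ... */
	nvme_loop_shutdown_ctrl(ctrl);	/* ... then transport shutdown */
	nvme_put_ctrl(&ctrl->ctrl);
}
```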
-
- 05 Jul, 2016 1 commit
-
-
Christoph Hellwig authored
This patch adds nvme-loop, which allows accessing local devices exported as NVMe over Fabrics namespaces. This module can be useful for easy evaluation, testing and also feature experimentation. To create an nvme-loop device you need to configure the NVMe target to export a loop port (see the nvmetcli documentation for that) and then connect to it using 'nvme connect-all -t loop', which requires the very latest nvme-cli version with Fabrics support.

Signed-off-by: Jay Freyensee <james.p.freyensee@intel.com>
Signed-off-by: Ming Lin <ming.l@ssi.samsung.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
-