Merge branch 'for-4.5/core' of git://git.kernel.dk/linux-block

Pull core block updates from Jens Axboe:
 "We don't have a lot of core changes this time around, it's mostly in
  drivers, which will come in a subsequent pull.

  The core changes include:

   - blk-mq
        - Prep patch from Christoph, changing blk_mq_alloc_request() to
          take flags instead of just using gfp_t for sleep/nosleep.
        - Doc patch from me, clarifying the difference between legacy
          and blk-mq for timer usage.
        - Fixes from Raghavendra for memory-less numa nodes, and a reuse
          of CPU masks.

   - Cleanup from Geliang Tang, using offset_in_page() instead of open
     coding it.

   - From Ilya, rename the request_queue slab so it reflects what it
     holds, and a fix for proper use of bdgrab/put.

   - A real fix for the split across stripe boundaries from Keith.  We
     yanked a broken version of this from 4.4-rc final; this one works.

   - From Mike Krinkin, emit a trace message when we split.

   - From Wei Tang, two small cleanups, not explicitly clearing memory
     that is already cleared"

* 'for-4.5/core' of git://git.kernel.dk/linux-block:
  block: use bd{grab,put}() instead of open-coding
  block: split bios to max possible length
  block: add call to split trace point
  blk-mq: Avoid memoryless numa node encoded in hctx numa_node
  blk-mq: Reuse hardware context cpumask for tags
  blk-mq: add a flags parameter to blk_mq_alloc_request
  Revert "blk-flush: Queue through IO scheduler when flush not required"
  block: clarify blk_add_timer() use case for blk-mq
  bio: use offset_in_page macro
  block: do not initialise statics to 0 or NULL
  block: do not initialise globals to 0 or NULL
  block: rename request_queue slab cache
Linus Torvalds
2016-01-19 15:03:34 -08:00
17 changed files with 79 additions and 64 deletions


@@ -1041,7 +1041,7 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
         struct request *req;
         int ret;
 
-        req = blk_mq_alloc_request(q, write, GFP_KERNEL, false);
+        req = blk_mq_alloc_request(q, write, 0);
         if (IS_ERR(req))
                 return PTR_ERR(req);
@@ -1094,7 +1094,8 @@ static int nvme_submit_async_admin_req(struct nvme_dev *dev)
         struct nvme_cmd_info *cmd_info;
         struct request *req;
 
-        req = blk_mq_alloc_request(dev->admin_q, WRITE, GFP_ATOMIC, true);
+        req = blk_mq_alloc_request(dev->admin_q, WRITE,
+                        BLK_MQ_REQ_NOWAIT | BLK_MQ_REQ_RESERVED);
         if (IS_ERR(req))
                 return PTR_ERR(req);
@@ -1119,7 +1120,7 @@ static int nvme_submit_admin_async_cmd(struct nvme_dev *dev,
         struct request *req;
         struct nvme_cmd_info *cmd_rq;
 
-        req = blk_mq_alloc_request(dev->admin_q, WRITE, GFP_KERNEL, false);
+        req = blk_mq_alloc_request(dev->admin_q, WRITE, 0);
         if (IS_ERR(req))
                 return PTR_ERR(req);
@@ -1320,8 +1321,8 @@ static void nvme_abort_req(struct request *req)
         if (!dev->abort_limit)
                 return;
 
-        abort_req = blk_mq_alloc_request(dev->admin_q, WRITE, GFP_ATOMIC,
-                                                        false);
+        abort_req = blk_mq_alloc_request(dev->admin_q, WRITE,
+                                                        BLK_MQ_REQ_NOWAIT);
         if (IS_ERR(abort_req))
                 return;
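
For illustration, a minimal sketch of a converted caller under the new
interface. This is hypothetical driver code (the function name is invented
for the example), not part of this merge:

  #include <linux/blk-mq.h>
  #include <linux/err.h>

  /* Hypothetical caller: allocate an admin request using the new flags. */
  static int example_alloc_admin_req(struct request_queue *q, bool atomic)
  {
          struct request *req;
          /* BLK_MQ_REQ_NOWAIT replaces GFP_ATOMIC; 0 replaces GFP_KERNEL + false */
          unsigned int flags = atomic ? BLK_MQ_REQ_NOWAIT : 0;

          req = blk_mq_alloc_request(q, WRITE, flags);
          if (IS_ERR(req))
                  return PTR_ERR(req);

          /* ... set up and dispatch the request here ... */

          blk_mq_free_request(req);
          return 0;
  }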