fuse: realloc page array

Writeback caching currently allocates requests with the maximum number of
possible pages, while the actual number of pages per request depends on a
couple of factors that cannot be determined when the request is allocated
(whether the page is already under writeback, whether the page is contiguous
with previous pages already added to the request).

This patch allows such requests to start with no page allocation (all pages
inline) and grow the page array on demand.
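
The growing is done by a new helper in fs/fuse/dev.c, called as
fuse_req_realloc_pages() in the first hunk below; the helper itself is not
part of this excerpt.  The following is a minimal, self-contained C sketch of
the grow-on-demand technique it relies on.  The names, the doubling growth
factor and the 32-page default are illustrative assumptions, not the kernel
code itself.

#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

struct page;				/* stand-in for the kernel's struct page */

#define DEFAULT_REQ_PAGES 32		/* assumed default size of a request's page array */

struct req {
	struct page **pages;		/* page array, grown on demand */
	unsigned int num_pages;		/* slots actually in use */
	unsigned int max_pages;		/* current capacity of the array */
};

/*
 * Grow the page array of a request while preserving the pages already queued
 * on it.  Returns false if the array is already at the limit or the
 * allocation fails; the caller then just sends the request as-is and starts a
 * new one, which is what the first hunk below does.
 */
bool req_realloc_pages(struct req *req, unsigned int limit)
{
	unsigned int npages = req->max_pages * 2;
	struct page **pages;

	if (npages < DEFAULT_REQ_PAGES)
		npages = DEFAULT_REQ_PAGES;	/* first grow jumps straight to the old fixed size */
	if (npages > limit)
		npages = limit;			/* never exceed the max_pages tunable */
	if (npages <= req->max_pages)
		return false;

	pages = calloc(npages, sizeof(*pages));
	if (!pages)
		return false;

	if (req->num_pages)
		memcpy(pages, req->pages, req->num_pages * sizeof(*pages));
	free(req->pages);
	req->pages = pages;
	req->max_pages = npages;
	return true;
}

In the kernel the request starts out with its pages pointer aimed at a small
array embedded in struct fuse_req, so the real helper additionally has to
avoid freeing that inline array and to carry the per-page descriptors along;
the sketch skips both details.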

If the max_pages tunable remains at its default value, then this will mean
just one allocation that is the same size as before.  If the tunable is
larger, then this adds at most 3 additional memory allocations (which is
generously compensated by the improved performance from the larger
request).
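
As a rough sanity check of that count (the 32-page default and the 256-page
upper limit are assumptions based on the max_pages tunable introduced by the
parent commit, not values stated in this patch), a doubling growth schedule
gives:

#include <stdio.h>

int main(void)
{
	unsigned int limit = 256;	/* max_pages raised to its assumed maximum */
	unsigned int size = 32;		/* first allocation: same size as the old fixed array */
	unsigned int extra = 0;

	while (size < limit) {		/* 32 -> 64 -> 128 -> 256 */
		size *= 2;
		extra++;
	}
	printf("additional allocations: %u\n", extra);	/* prints 3 */
	return 0;
}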

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Author: Miklos Szeredi
Date:   2018-10-01 10:07:06 +02:00
Parent: 5da784cce4
Commit: e52a825048
3 changed files with 56 additions and 5 deletions

--- a/fs/fuse/file.c
+++ b/fs/fuse/file.c
@@ -1827,7 +1827,13 @@ static int fuse_writepages_fill(struct page *page,
 	     data->orig_pages[req->num_pages - 1]->index + 1 != page->index)) {
 		fuse_writepages_send(data);
 		data->req = NULL;
+	} else if (req && req->num_pages == req->max_pages) {
+		if (!fuse_req_realloc_pages(fc, req, GFP_NOFS)) {
+			fuse_writepages_send(data);
+			req = data->req = NULL;
+		}
 	}
+
 	err = -ENOMEM;
 	tmp_page = alloc_page(GFP_NOFS | __GFP_HIGHMEM);
 	if (!tmp_page)
@@ -1850,7 +1856,7 @@ static int fuse_writepages_fill(struct page *page,
 		struct fuse_inode *fi = get_fuse_inode(inode);
 
 		err = -ENOMEM;
-		req = fuse_request_alloc_nofs(fc->max_pages);
+		req = fuse_request_alloc_nofs(FUSE_REQ_INLINE_PAGES);
 		if (!req) {
 			__free_page(tmp_page);
 			goto out_unlock;
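
FUSE_REQ_INLINE_PAGES in the second hunk refers to the few page slots
embedded directly in struct fuse_req: instead of sizing the writeback request
for fc->max_pages up front, the request now starts with just that inline
capacity and is grown by fuse_req_realloc_pages() when the first hunk detects
it is full.  A rough sketch of the relevant layout (field names follow
fuse_req, but the exact definition lives in fs/fuse/fuse_i.h and is not part
of this excerpt):

#define FUSE_REQ_INLINE_PAGES 1		/* assumed inline capacity; see fs/fuse/fuse_i.h */

struct page;

struct fuse_req_layout {		/* illustrative subset of struct fuse_req */
	struct page **pages;		/* points at inline_pages until the array is grown */
	unsigned int num_pages;		/* pages attached so far */
	unsigned int max_pages;		/* capacity behind the pages pointer */
	struct page *inline_pages[FUSE_REQ_INLINE_PAGES];
};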