It's better to use the int type for the loop index. For consistency, the name
of the local variable for the number of data blocks should be plural.
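A minimal sketch of the intended style (hypothetical names, not the actual
driver code):

  int i;                     /* plain int for the loop index */
  unsigned int data_blocks;  /* plural: the number of data blocks in the packet */

  for (i = 0; i < data_blocks; ++i) {
          /* process one data block */
  }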
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
As a result of the former commits, the post operation on the data block count
for cases without CIP_DBC_IS_END_EVENT can be done just with the
data_block_counter member of the amdtp_stream structure.
This commit adds code refactoring to obsolete the local variable for the
data block counter.
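With that, the counter can be updated directly on the structure member,
roughly (a simplified sketch, not the exact driver code):

  /* the dbc field is eight bits wide, so the counter wraps at 256 */
  s->data_block_counter = (s->data_block_counter + data_blocks) & 0xff;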
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
When the CIP header parser returns -EAGAIN, no extra care is needed
to probe the tracepoints event.
This commit adds code refactoring for the error path.
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
For the IT context, the tracepoints event is probed after calculating the next
data block counter. This causes a difference in the data block counter between
the probed event and the actual isochronous packet.
This commit fixes it.
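Roughly, the intended order is (simplified; trace_packet() stands in for the
real tracepoint call):

  /* probe the event while s->data_block_counter still refers to this packet */
  trace_packet(s);
  /* only then advance the counter for the next packet */
  s->data_block_counter = (s->data_block_counter + data_blocks) & 0xff;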
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
For the IR context, the ALSA IEC 61883-1/6 engine uses UINT_MAX as the initial
value of the data block counter, to detect the first isochronous packet in the
middle of packet streaming.
At present, when CIP_DBC_IS_END_EVENT is not used (i.e. for drivers other than
the ALSA fireworks driver), the initial value is used as is for the
tracepoints event. However, the engine can detect the value of the dbc field
in the payload of the first isochronous packet, and that value should be
assigned to the event.
This commit fixes the bug.
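Roughly, the intended behaviour is (a simplified sketch; dbc_for_event and
dbc are hypothetical locals):

  if (s->data_block_counter == UINT_MAX)
          dbc_for_event = dbc;  /* dbc parsed from the first packet's CIP header */
  else
          dbc_for_event = s->data_block_counter;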
Fixes: 76864868db ("ALSA: firewire-lib: cache next data_block_counter after probing tracepoints event for IR context")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
For the IR context, the ALSA IEC 61883-1/6 engine uses UINT_MAX as the initial
value of the data block counter, to detect the first isochronous packet in the
middle of packet streaming.
At present, when CIP_NO_HEADER is used (i.e. for the ALSA fireface driver),
the initial value is used for the tracepoints event. 0x00 should be used for
the event when the initial value is UINT_MAX, because isochronous packets with
the CIP_NO_HEADER option have no field for the data block count.
This commit fixes the bug.
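Roughly (a simplified sketch; dbc_for_event is a hypothetical local):

  /* packets without CIP headers carry no dbc field, so report 0x00 for the
   * very first packet instead of the UINT_MAX initial value */
  dbc_for_event = (s->data_block_counter == UINT_MAX) ?
                  0x00 : s->data_block_counter;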
Fixes: 76864868db ("ALSA: firewire-lib: cache next data_block_counter after probing tracepoints event for IR context")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Although the CIP header is handled as the context header, the length of the
isochronous packet includes two quadlets for the CIP header in its payload.
In the tracepoints event, the value of payload_quadlets should include these
two quadlets, but at present it doesn't.
This commit fixes the bug.
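That is, the value passed to the event should account for the CIP header
quadlets as well, roughly (hypothetical variable names):

  /* payload length in quadlets, plus the two quadlets of the CIP header */
  payload_quadlets = payload_length / sizeof(__be32) + 2;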
Fixes: b18f0cfaf1 ("ALSA: firewire-lib: use 8 byte packet header for IT context to separate CIP header from CIP payload")
Signed-off-by: Takashi Sakamoto <o-takashi@sakamocchi.jp>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
ASoC: Updates for v5.3
This is a very big update, mainly thanks to Morimoto-san's refactoring
work and some fairly large new drivers.
- Lots more work on moving towards a component-based framework from
Morimoto-san.
- Support for force disconnecting muxes from Jerome Brunet.
- New drivers for Cirrus Logic CS47L35, CS47L85 and CS47L90, Conexant
CX2072X, Realtek RT1011 and RT1308.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Commit e450f4d1a5 ("ceph: pass inclusive lend parameter to
filemap_write_and_wait_range()") fixed the end offset parameter used in the
calls to filemap_write_and_wait_range() and invalidate_inode_pages2_range().
Unfortunately it missed truncate_inode_pages_range(), introducing a
regression that is easily detected by xfstest generic/130.
The problem is that when doing direct IO it is possible that an extra page
is truncated from the page cache when the end offset is page aligned.
This can cause data loss if that page hasn't been sync'ed to the OSDs.
While at it, change the code to use the PAGE_ALIGN macro instead.
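The sketch below illustrates the inclusive-end convention (it approximates
the direct IO path rather than reproducing it exactly): for a write covering
[pos, pos + len), the last byte to drop from the page cache is the last byte
of the page containing pos + len - 1.

  truncate_inode_pages_range(inode->i_mapping, pos,
                             PAGE_ALIGN(pos + len) - 1);

With PAGE_ALIGN(), a page-aligned pos + len no longer pushes the end offset
into the following page, so that page stays in the cache.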
Cc: stable@vger.kernel.org
Fixes: e450f4d1a5 ("ceph: pass inclusive lend parameter to filemap_write_and_wait_range()")
Signed-off-by: Luis Henriques <lhenriques@suse.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
ceph_drop_inode() implementation is not any different from the generic
function, thus there's no point in keeping it around.
Signed-off-by: Luis Henriques <lhenriques@suse.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Having the granularity set to 1us results in inode timestamps with an
accuracy different from the fuse client (i.e. atime, ctime and mtime will
always end with '000'). This patch normalizes this behaviour and sets the
granularity to 1.
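In practice this boils down to the superblock's timestamp granularity field,
roughly (simplified; set where ceph fills in the superblock):

  sb->s_time_gran = 1;  /* nanosecond granularity, was 1000 (1us) */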
Signed-off-by: Luis Henriques <lhenriques@suse.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Reviewed-by: Sage Weil <sage@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Zheng wants to be able to spend more time working on the MDS, so I've
volunteered to take over for him as the CephFS kernel client maintainer.
Signed-off-by: Jeff Layton <jlayton@kernel.org>
Acked-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
setallochint is really only useful on object creation. Continue
hinting unconditionally if the object map cannot be used.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Speed up reads, discards and zeroouts through RBD_OBJ_FLAG_MAY_EXIST
and RBD_OBJ_FLAG_NOOP_FOR_NONEXISTENT based on the object map.
Invalid object maps are not trusted, but still updated. Note that we
never iterate, resize or invalidate object maps. If the object-map feature
is enabled but the object map fails to load, we just fail the requester
(either "rbd map" or I/O, by way of a post-acquire action).
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
The snapshot object map will be loaded in rbd_dev_image_probe(), so we need
to know the snapshot's size (as opposed to HEAD's size) sooner.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
We already have one exported wrapper around it for extent.osd_data and
rbd_object_map_update_finish() needs another one for cls.request_data.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
This will be used for loading the object map. rbd_obj_read_sync() isn't
suitable because the object map must be accessed through class methods.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
This time for rbd object map. Object maps are limited in size to
256000000 objects, two bits per object.
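For reference, that caps a fully populated object map at roughly 61 MiB:

  256000000 objects * 2 bits = 512000000 bits = 64000000 bytes (~61 MiB)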
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
rbd_wait_state_locked() is built around rbd_dev->lock_waitq and blocks
rbd worker threads while waiting for the lock, potentially impacting
other rbd devices. There is no good way to pass an error code into
image request state machines when acquisition fails, hence the use of
RBD_DEV_FLAG_BLACKLISTED for everything and various other issues.
Introduce rbd_dev->acquiring_list and move acquisition into the image
request state machine. Use rbd_img_schedule() for kicking and passing
error codes. No blocking occurs while waiting for the lock, but
rbd_dev->lock_rwsem is still held across lock, unlock and set_cookie
calls.
Always acquire the lock on "rbd map" to avoid associating the latency
of acquiring the lock with the first I/O request.
A slight regression is that lock_timeout is now respected only if lock
acquisition is triggered by "rbd map" and not by I/O. This is somewhat
compensated by the fact that we no longer block if the peer refuses to
release the lock -- I/O is failed with EROFS right away.
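A rough sketch of the kicking part (hypothetical member names, locking
omitted):

  /* image requests that need the lock park themselves on acquiring_list;
   * once acquisition completes they are kicked with its result */
  list_for_each_entry_safe(img_req, next, &rbd_dev->acquiring_list, lock_item) {
          list_del_init(&img_req->lock_item);
          rbd_img_schedule(img_req, result);  /* 0 or -errno from acquisition */
  }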
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Syncing OSD requests doesn't really work. A single image request may
be comprised of multiple object requests, each of which can go through
a series of OSD requests (original, copyups, etc). On top of that, the
OSD client may be shared with other rbd devices.
What we want is to ensure that all in-flight image requests complete.
Introduce rbd_dev->running_list and block in RBD_LOCK_STATE_RELEASING
until that happens. New OSD requests may be started during this time.
Note that __rbd_img_handle_request() acquires rbd_dev->lock_rwsem only
if need_exclusive_lock() returns true. This avoids a deadlock similar
to the one outlined in the previous commit between unlock and I/O that
doesn't require the lock, such as a read with the object-map feature disabled.
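Schematically (hypothetical names, heavily simplified):

  /* in-flight image requests sit on running_list; releasing the lock
   * waits until the list drains */
  if (!list_empty(&rbd_dev->running_list))
          wait_for_completion(&rbd_dev->releasing_wait);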
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Quiesce the exclusive lock at the top of rbd_reacquire_lock() instead
of only when ceph_cls_set_cookie() fails. This avoids a deadlock on
rbd_dev->lock_rwsem.
If rbd_dev->lock_rwsem is needed for I/O completion, set_cookie can
hang the ceph-msgr worker thread if the set_cookie reply ends up behind an I/O
reply, because, like lock and unlock requests, set_cookie is sent and
waited upon with rbd_dev->lock_rwsem held for write.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Both the write and copyup paths will get more complex with the object map.
Factor copyup code out into a separate state machine.
While at it, take advantage of the obj_req->osd_reqs list and issue the empty
and current snapc OSD requests together, one after another.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Following submission, move initial OSD request allocation into object
request state machines. Everything that has to do with OSD requests is
now handled inside the state machine, all __rbd_img_fill_request() has
left is initialization.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
With obj_req->xferred removed, obj_req->ex.oe_off and obj_req->ex.oe_len
can be updated if required for alignment. Previously the new offset and
length weren't stored anywhere beyond rbd_obj_setup_discard().
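For example, a discard can now be shrunk to whole allocation units in place
(a simplified sketch; the alignment value is a stand-in):

  u64 off = round_up(obj_req->ex.oe_off, alignment);
  u64 next_off = round_down(obj_req->ex.oe_off + obj_req->ex.oe_len, alignment);

  if (off < next_off) {
          obj_req->ex.oe_off = off;
          obj_req->ex.oe_len = next_off - off;
  }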
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Since the dawn of time it had been assumed that a single object request
spawns a single OSD request. This is already impacting copyup: instead
of sending the empty and current snapc copyups together, we wait for the
empty snapc OSD request to complete in order to reassign obj_req->osd_req
with the current snapc OSD request. Looking further, updating potentially
hundreds of snapshot object maps serially is a non-starter.
Replace obj_req->osd_req pointer with obj_req->osd_reqs list. Use
osd_req->r_private_item for linkage.
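Schematically (only r_private_item and osd_reqs are taken from the actual
change; the rest is a simplified sketch):

  /* the object request owns a list of OSD requests ... */
  INIT_LIST_HEAD(&obj_req->osd_reqs);

  /* ... and every OSD request allocated for it is linked through the
   * now-private list item */
  list_add_tail(&req->r_private_item, &obj_req->osd_reqs);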
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
This list item remained from when we had safe and unsafe replies
(commit vs ack). It has since become a private list item for use by
clients.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Make it possible to schedule image requests on a workqueue. This fixes the
parent chain recursion added in the previous commit and lays the groundwork
for exclusive lock wait/wake improvements.
The "wait for pending subrequests and report first nonzero result" code
is generalized to be used by the object request state machine.
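A rough sketch of the scheduling side (the work member, work_result and the
workqueue are assumptions):

  static void rbd_img_schedule(struct rbd_img_request *img_req, int result)
  {
          if (result && !img_req->work_result)
                  img_req->work_result = result;  /* keep the first nonzero result */
          queue_work(rbd_wq, &img_req->work);     /* run the state machine from a workqueue */
  }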
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
Start eliminating asymmetry where the initial OSD request is allocated
and submitted from outside the state machine, making error handling and
restarts harder than they could be. This commit deals with submission,
a commit that deals with allocation will follow.
Note that this commit adds parent chain recursion on the submission
side:
  rbd_img_request_submit
    rbd_obj_handle_request
      __rbd_obj_handle_request
        rbd_obj_handle_read
        rbd_obj_handle_write_guard
          rbd_obj_read_from_parent
            rbd_img_request_submit
This will be fixed in the next commit.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>
In preparation for moving OSD request allocation and submission into
object request state machines, get rid of RBD_OBJ_WRITE_{FLAT,GUARD}.
We would need to start in a new state, whether the request is guarded
or not. Unify them into RBD_OBJ_WRITE_OBJECT and pass guard info
through obj_req->flags.
While at it, make our ENOENT handling a little more precise: only hide
ENOENT when it is actually expected, that is on delete.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Dongsheng Yang <dongsheng.yang@easystack.cn>