Using NFSv4.1 on RDMA should be safe, so broaden the new checks in
rpc_create().
WARN_ON_ONCE is used, matching most other WARN call sites in clnt.c.
Fixes: 39a9beab5a ("rpc: share one xps between all backchannels")
Fixes: d50039ea5e ("nfsd4/rpc: move backchannel create logic...")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: J. Bruce Fields <bfields@fieldses.org>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Export the client addr/nonce, so userspace can check whether an image is
being blacklisted.
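A minimal sketch of what the new accessor might look like (the messenger
field layout is assumed):

struct ceph_entity_addr *ceph_client_addr(struct ceph_client *client)
{
        /* addr/nonce identifying this client instance to the cluster */
        return &client->msgr.inst.addr;
}
EXPORT_SYMBOL(ceph_client_addr);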
Signed-off-by: Mike Christie <mchristi@redhat.com>
[idryomov@gmail.com: ceph_client_addr(), endianness fix]
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Add basic support for the RBD_FEATURE_EXCLUSIVE_LOCK feature. Maintenance
operations (resize, snapshot create, etc.) are offloaded to librbd by
returning -EOPNOTSUPP - librbd should request the lock and execute the
operation.
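Schematically, the offload amounts to a guard of this shape (sketch only):

        /* Maintenance ops are not handled locally when the exclusive lock
         * feature is enabled; librbd will take the lock and do the work. */
        if (rbd_dev->header.features & RBD_FEATURE_EXCLUSIVE_LOCK)
                return -EOPNOTSUPP;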
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Tested-by: Mike Christie <mchristi@redhat.com>
Revamp the watch code to support retrying watch re-registration:
- add rbd_dev->watch_state for more robust error-callback handling
- store the watch cookie separately to avoid dereferencing watch_handle,
  which is set to NULL on unwatch
- move the re-registration code into a delayed work item and retry
  re-registration every second, unless the client is blacklisted (see the
  sketch below)
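A rough sketch of that retry; __rbd_register_watch() and
rbd_is_blacklisted() are assumed/hypothetical names here:

static void rbd_reregister_watch(struct work_struct *work)
{
        struct rbd_device *rbd_dev = container_of(to_delayed_work(work),
                                                  struct rbd_device,
                                                  watch_dwork);

        if (__rbd_register_watch(rbd_dev)) {
                if (rbd_is_blacklisted(rbd_dev))  /* hypothetical helper */
                        return;
                /* retry re-registration every second */
                queue_delayed_work(rbd_dev->task_wq, &rbd_dev->watch_dwork,
                                   msecs_to_jiffies(1000));
        }
}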
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Tested-by: Mike Christie <mchristi@redhat.com>
Reuse ceph_mon_generic_request infrastructure for sending monitor
commands. In particular, add support for 'blacklist add' to prevent
other, non-responsive clients from making further updates.
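Usage might look like this (sketch; exact signature assumed):

        /* Prevent a non-responsive lock holder from making further updates. */
        ret = ceph_monc_blacklist_add(&client->monc, &locker->info.addr);
        if (ret)
                return ret;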
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
[idryomov@gmail.com: refactor, misc fixes throughout]
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Add an interface for the Ceph OSD lock.lock_info method and associated
data structures.
Based heavily on code by Mike Christie <michaelc@cs.wisc.edu>.
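The interface might look roughly like this (parameter list assumed):

int ceph_cls_lock_info(struct ceph_osd_client *osdc,
                       struct ceph_object_id *oid,
                       struct ceph_object_locator *oloc,
                       char *lock_name, u8 *type, char **tag,
                       struct ceph_locker **lockers, u32 *num_lockers);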
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
[idryomov@gmail.com: refactor, misc fixes throughout]
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Add a convenience function to osd_client for sending Ceph OSD
'class' ops. The interface assumes that the request and
reply data each fit in a single page.
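The wrapper's shape might be roughly this (signature assumed, not
authoritative):

int ceph_osdc_call(struct ceph_osd_client *osdc,
                   struct ceph_object_id *oid,
                   struct ceph_object_locator *oloc,
                   const char *class, const char *method,
                   unsigned int flags,
                   struct page *req_page, size_t req_len,
                   struct page *resp_page, size_t *resp_len);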
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Add support for this Ceph OSD op, needed to support the RBD exclusive
lock feature.
Signed-off-by: Douglas Fuller <dfuller@redhat.com>
[idryomov@gmail.com: refactor, misc fixes throughout]
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Mike Christie <mchristi@redhat.com>
Reviewed-by: Alex Elder <elder@linaro.org>
David Howells says:
====================
rxrpc: Add better client conn management strategy
These two patches add a better client connection management strategy. They
need to be applied on top of the just-posted fixes.
(1) Duplicate the connection list and separate out procfs iteration from
garbage collection. This is necessary for the next patch because, with
that patch, client connections no longer appear on a single list and may
not appear on a list at all - and really shouldn't be exposed to the
old garbage collector.
(Note that client conns aren't left dangling, they're also in a tree
rooted in the local endpoint so that they can be found by a user
wanting to make a new client call. Service conns do not appear in
this tree.)
(2) Implement a better lifetime management and garbage collection strategy
for client connections.
In this, a client connection can be in one of five cache states
(inactive, waiting, active, culled and idle). Limits are set on the
number of client conns that may be active at any one time, and users are
made to wait if they want to start a new call when there isn't capacity
available.
To make capacity available, active and idle connections can be culled,
after a short delay (to allow for retransmission). The delay is
reduced if the capacity exceeds a tunable threshold.
If there is spare capacity, client conns are permitted to hang around
a fair bit longer (tunable) so as to allow reuse of negotiated
security contexts.
After this patch, the client conn strategy is separate from that of
service conns (which continues to use the old code for the moment).
This difference in strategy is because the client side retains control
over when it allows a connection to become active, whereas the service
side has no control over when it sees a new connection or a new call
on an old connection.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells says:
====================
rxrpc: More fixes
Here are a couple of fix patches:
(1) Fix the conn-based retransmission patch posted yesterday. This breaks
if it actually has to retransmit. However, the likelihood of this
happening seems really low, despite the server I'm testing against
being located >3000 miles away, and some of the time it's handled
in the call background processor before we manage to disconnect the
call - which is why I didn't spot it.
(2) /proc/net/rxrpc_calls can cause a crash if accessed whilst a call is
being torn down. The window of opportunity is pretty small, however,
as calls don't stay in this state for long.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
During an audit of sk_filter(), we found that the rx_busy_skb handling
in l2cap_sock_recv_cb() and l2cap_sock_recvmsg() does not work quite as
intended.
The assumption from commit e328140fda ("Bluetooth: Use event-driven
approach for handling ERTM receive buffer") is that errors returned
from sock_queue_rcv_skb() are due to receive buffer shortage. However,
nothing prevents doing a setsockopt() with SO_ATTACH_FILTER on
the socket, which could drop some of the incoming skbs when they are
handled in sock_queue_rcv_skb().
In that case sock_queue_rcv_skb() will return -EPERM, propagated from
sk_filter(), and if in L2CAP_MODE_ERTM mode, the wrong assumption was
that we failed because the receive buffer was full. From that point
onwards, because the to-be-dropped skb is held in rx_busy_skb, we cannot
make any forward progress: rx_busy_skb is never cleared from
l2cap_sock_recvmsg(), as sk_filter() keeps returning the same drop
verdict over and over. Meanwhile, in l2cap_sock_recv_cb() all new
incoming skbs are dropped because rx_busy_skb is occupied.
Instead, just use __sock_queue_rcv_skb(), where an error really does
indicate a receive buffer issue. Split out the sk_filter() call: for
non-segmented modes it can still run at queuing time, but for ERTM and
streaming mode the skb has already been through the ERTM state machine
and been acked by then, so dropping is no longer allowed. For those
modes, call sk_filter() in l2cap_data_rcv() instead, so the packet can
be dropped before the state machine sees it.
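A condensed sketch of the resulting split (simplified from the real paths):

        /* ERTM/streaming: filter in l2cap_data_rcv(), before the state
         * machine acks the frame, where dropping is still allowed. */
        if (sk_filter(sk, skb)) {
                kfree_skb(skb);
                return 0;
        }

        /* l2cap_sock_recv_cb(): queue without re-running the filter, so
         * an error here really does mean receive buffer shortage. */
        err = __sock_queue_rcv_skb(sk, skb);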
Fixes: e328140fda ("Bluetooth: Use event-driven approach for handling ERTM receive buffer")
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
Acked-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
In hci_req_sync_complete() the event skb is referenced in hdev->req_skb.
It is used (via hci_req_run_skb) from either __hci_cmd_sync_ev, which
passes the skb to the caller, or __hci_req_sync, which leaks it.
unreferenced object 0xffff880005339a00 (size 256):
comm "kworker/u3:1", pid 1011, jiffies 4294671976 (age 107.389s)
backtrace:
[<ffffffff818d89d9>] kmemleak_alloc+0x49/0xa0
[<ffffffff8116bba8>] kmem_cache_alloc+0x128/0x180
[<ffffffff8167c1df>] skb_clone+0x4f/0xa0
[<ffffffff817aa351>] hci_event_packet+0xc1/0x3290
[<ffffffff8179a57b>] hci_rx_work+0x18b/0x360
[<ffffffff810692ea>] process_one_work+0x14a/0x440
[<ffffffff81069623>] worker_thread+0x43/0x4d0
[<ffffffff8106ead4>] kthread+0xc4/0xe0
[<ffffffff818dd38f>] ret_from_fork+0x1f/0x40
[<ffffffffffffffff>] 0xffffffffffffffff
Signed-off-by: Frédéric Dalleau <frederic.dalleau@collabora.co.uk>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Improve the management and caching of client rxrpc connection objects.
From this point, client connections will be managed separately from service
connections because AF_RXRPC controls the creation and re-use of client
connections but doesn't have that luxury with service connections.
Further, there will be limits on the numbers of client connections that may
be live on a machine. No direct restriction will be placed on the number
of client calls, excepting that each client connection can support a
maximum of four concurrent calls.
Note that, for a number of reasons, we don't want to simply discard a
client connection as soon as the last call is apparently finished:
(1) Security is negotiated per-connection and the context is then shared
between all calls on that connection. The context can be negotiated
again if the connection lapses, but that involves holding up calls
whilst at least two packets are exchanged and various crypto bits are
performed - so we'd ideally like to cache it for a little while at
least.
(2) If a packet goes astray, we will need to retransmit a final ACK or
ABORT packet. To make this work, we need to keep around the
connection details for a little while.
(3) The locally held structures represent some amount of setup time, to be
weighed against their occupation of memory when idle.
To this end, the client connection cache is managed by a state machine on
each connection. There are five states:
(1) INACTIVE - The connection is not held in any list and may not have
been exposed to the world. If it has been previously exposed, it was
discarded from the idle list after expiring.
(2) WAITING - The connection is waiting for the number of client conns to
drop below the maximum capacity. Calls may be in progress upon it
from when it was active and got culled.
The connection is on the rxrpc_waiting_client_conns list which is kept
in to-be-granted order. Culled conns with waiters go to the back of
the queue just like new conns.
(3) ACTIVE - The connection has at least one call in progress upon it, it
may freely grant available channels to new calls and calls may be
waiting on it for channels to become available.
The connection is on the rxrpc_active_client_conns list which is kept
in activation order for culling purposes.
(4) CULLED - The connection got summarily culled to try and free up
capacity. Calls currently in progress on the connection are allowed
to continue, but new calls will have to wait. There can be no waiters
in this state - the conn would have to go to the WAITING state
instead.
(5) IDLE - The connection has no calls in progress upon it and must have
been exposed to the world (ie. the EXPOSED flag must be set). When it
expires, the EXPOSED flag is cleared and the connection transitions to
the INACTIVE state.
The connection is on the rxrpc_idle_client_conns list which is kept in
order of how soon they'll expire.
A connection in the ACTIVE or CULLED state must have at least one active
call upon it; if in the WAITING state it may have active calls upon it;
other states may not have active calls.
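The five cache states might be represented along these lines (a sketch;
names illustrative):

enum rxrpc_conn_cache_state {
        RXRPC_CONN_CLIENT_INACTIVE,  /* conn is not yet listed */
        RXRPC_CONN_CLIENT_WAITING,   /* conn is on wait list, awaiting capacity */
        RXRPC_CONN_CLIENT_ACTIVE,    /* conn is on active list, doing calls */
        RXRPC_CONN_CLIENT_CULLED,    /* conn got culled, calls may still finish */
        RXRPC_CONN_CLIENT_IDLE,      /* conn is on idle list, awaiting expiry */
};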
As long as a connection remains active and doesn't get culled, it may
continue to process calls - even if there are connections on the wait
queue. This simplifies things a bit and reduces the amount of checking we
need to do.
There are a couple of flags of relevance to the cache:
(1) EXPOSED - The connection ID got exposed to the world. If this flag is
set, an extra ref is added to the connection preventing it from being
reaped when it has no calls outstanding. This flag is cleared and the
ref dropped when a conn is discarded from the idle list.
(2) DONT_REUSE - The connection should be discarded as soon as possible and
should not be reused.
This commit also provides a number of new settings:
(*) /proc/net/rxrpc/max_client_conns
The maximum number of live client connections. Above this number, new
connections get added to the wait list and must wait for an active
conn to be culled. Culled connections can be reused, but they will go
to the back of the wait list and have to wait.
(*) /proc/net/rxrpc/reap_client_conns
If the number of desired connections exceeds the maximum above, the
active connection list will be culled until there are only this many
left in it.
(*) /proc/net/rxrpc/idle_conn_expiry
The normal expiry time for a client connection, provided there are
fewer than reap_client_conns of them around.
(*) /proc/net/rxrpc/idle_conn_fast_expiry
The expedited expiry time, used when there are more than
reap_client_conns of them around.
Note that I combined the Tx wait queue with the channel grant wait queue to
save space as only one of these should be in use at once.
Note also that, for the moment, the service connection cache still uses the
old connection management code.
Signed-off-by: David Howells <dhowells@redhat.com>
The main connection list is used for two independent purposes: primarily it
is used to find connections to reap and secondarily it is used to list
connections in procfs.
Split the procfs list out from the reap list. This allows us to stop using
the reap list for client connections when they acquire a separate
management strategy from service connections.
The client connections will not be on a single management list, and
sometimes won't be on a management list at all. This doesn't leave them
floating, however, as they will also be on an rb-tree rooted on the socket
so that the socket can find them to dispatch calls.
Signed-off-by: David Howells <dhowells@redhat.com>
Make /proc/net/rxrpc_calls safer by stashing a copy of the peer pointer in
the rxrpc_call struct and checking in the show routine that the peer
pointer, the socket pointer and the local pointer obtained from the socket
pointer aren't NULL before we use them.
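The defensive pattern in the show routine amounts to roughly this (sketch;
field names assumed):

        struct rxrpc_peer *peer = call->peer;

        if (peer)
                sprintf(rbuff, "%pI4:%u",
                        &peer->srx.transport.sin.sin_addr,
                        ntohs(peer->srx.transport.sin.sin_port));
        else
                strcpy(rbuff, "no_connection");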
Signed-off-by: David Howells <dhowells@redhat.com>
If a duplicate packet comes in for a call that has just completed on a
connection's channel then there will be an oops in the data_ready handler
because it tries to examine the connection struct via a call struct (which
we don't have - the pointer is unset).
Since the connection struct pointer is available to us, go direct instead.
Also, the ACK packet to be retransmitted needs three octets of padding
between the soft ack list and the ackinfo.
Fixes: 18bfeba50d ("rxrpc: Perform terminal call ACK/ABORT retransmission from conn processor")
Signed-off-by: David Howells <dhowells@redhat.com>
After commit 5b8ef3415a
("xfrm: Remove ancient sleeping when the SA is in acquire state")
gc does not need any per-netns data anymore.
As far as gc is concerned all state structs are the same, so we
can use a global work struct for it.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
An earlier patch accidentally replaced a write_lock_bh
with a spin_unlock_bh. Fix this by using spin_lock_bh
instead.
Fixes: 9d0380df62 ("xfrm: policy: convert policy_lock to spinlock")
Signed-off-by: Steffen Klassert <steffen.klassert@secunet.com>
Now that RCU lookups of IPv6 TCP sockets no longer dereference pinet6,
we do not need tcp_v6_clear_sk() anymore.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Since we no longer use SLAB_DESTROY_BY_RCU for UDP,
we do not need sk_prot_clear_portaddr_nulls() helper.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Now that RCU lookups of IPv6 UDP sockets no longer dereference the
pinet6 field, we can get rid of the udp_v6_clear_sk() helper.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
This implements SOCK_DESTROY for UDP sockets similar to what was done
for TCP with commit c1e64e298b ("net: diag: Support destroying TCP
sockets.") A process with a UDP socket targeted for destroy is awakened
and recvmsg fails with ECONNABORTED.
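The destroy hook boils down to something like this (simplified sketch):

int udp_abort(struct sock *sk, int err)
{
        lock_sock(sk);

        /* Wake the owner: recvmsg() now fails with the given error. */
        sk->sk_err = err;
        sk->sk_error_report(sk);
        udp_disconnect(sk, 0);

        release_sock(sk);

        return 0;
}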
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
inet_diag_find_one_icsk takes a reference to a socket that is not
released if sock_diag_destroy returns an error. Fix by changing
tcp_diag_destroy to manage the refcnt for all cases and remove
the sock_put calls from tcp_abort.
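After the fix, the pattern is roughly (sketch; the lookup signature is
assumed):

        sk = inet_diag_find_one_icsk(net, &tcp_hashinfo, req);
        if (IS_ERR(sk))
                return PTR_ERR(sk);

        err = sock_diag_destroy(sk, ECONNABORTED);

        /* Drop the ref taken by inet_diag_find_one_icsk() on all paths. */
        sock_put(sk);

        return err;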
Fixes: c1e64e298b ("net: diag: Support destroying TCP sockets")
Reported-by: Lorenzo Colitti <lorenzo@google.com>
Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
After commit ca065d0cf8 ("udp: no longer use SLAB_DESTROY_BY_RCU")
we do not need this special allocation mode anymore, even if it is
harmless.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
The function sctp_diag_dump_one() currently performs a 64-byte memcpy()
from a 16-byte field into another 16-byte field. Fix by using sizeof to
obtain the correct size instead of a hard-coded constant.
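Illustratively (field names hypothetical):

        /* was: memcpy(&laddr, req->id.idiag_src, 64); -- 64 bytes into a
         * 16-byte field */
        memcpy(&laddr, req->id.idiag_src, sizeof(laddr));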
Fixes: 8f840e47f1 ("sctp: add the sctp_diag.c file")
Signed-off-by: Lance Richardson <lrichard@redhat.com>
Reviewed-by: Xin Long <lucien.xin@gmail.com>
Acked-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells says:
====================
rxrpc: Miscellaneous improvements
Here are some improvements that are part of the AF_RXRPC rewrite. They
need to be applied on top of the just posted cleanups.
(1) Set the connection expiry when the connection becomes idle, i.e. when
its last currently active call completes, rather than each time put is
called.
This means that the connection isn't held open by retransmissions,
pings and duplicate packets. Future patches will limit the number of
live connections that the kernel will support, so making sure that old
connections don't overstay their welcome is necessary.
(2) Calculate packet serial skew in the UDP data_ready callback rather
than in the call processor on a work queue. Deferring it like this
causes the skew to be elevated by further packets coming in before we
get to make the calculation.
(3) Move retransmission of the terminal ACK or ABORT packet for a
connection to the connection processor, using the terminal state
cached in the rxrpc_connection struct. This means that once last_call
is set in a channel to the current call's ID, no more packets will be
routed to that rxrpc_call struct.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
David Howells says:
====================
rxrpc: Cleanups
Here are some cleanups for the AF_RXRPC rewrite:
(1) Remove some unused bits.
(2) Call releasing on socket closure is now done in the order in which
calls progress through the phases so that we don't miss a call
actively moving between lists.
(3) The rxrpc_call struct's channel number field is redundant and replaced
with accesses to the masked off cid field instead.
(4) Use a tracepoint for socket buffer accounting rather than printks.
Unfortunately, since this would require currently non-existent
arch-specific help to divine the current instruction location, the
accounting functions are moved out of line so that
__builtin_return_address() can be used.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Since the features bit field has bits for internal-only use as well, the
kernel may export an RTAX_FEATURES attribute with a zero value, which is
pointless.
Fix this by making sure the attribute is added only if the exported
value is non-zero.
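Conceptually (simplified sketch of the metrics dump path):

        u32 user_features = metrics[RTAX_FEATURES - 1] & RTAX_FEATURE_MASK;

        /* Only emit the attribute if a user-visible bit is actually set. */
        if (user_features && nla_put_u32(skb, RTAX_FEATURES, user_features))
                goto nla_put_failure;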
Signed-off-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: David S. Miller <davem@davemloft.net>
TFO_SERVER_WO_SOCKOPT2 was intended for debugging purposes during
Fast Open development. Remove this config option and also
update/clean-up the documentation of the Fast Open sysctl.
Reported-by: Piotr Jurkiewicz <piotr.jerzy.jurkiewicz@gmail.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Use PPP_ALLSTATIONS, PPP_UI and SEND_SHUTDOWN instead of the magic
numbers 0xff, 0x03 and 2.
Signed-off-by: Gao Feng <fgao@ikuai8.com>
Acked-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Laura tracked down a poll() [and friends] regression caused by commit
e6afc8ace6 ("udp: remove headers from UDP packets before queueing").
udp_poll() needs to know whether there is a valid packet in the receive
queue, even if its payload length is 0.
Change first_packet_length() to return a signed int, and use -1
as the indication of an empty queue.
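In sketch form (simplified; locking and csum handling omitted):

static int first_packet_length(struct sock *sk)
{
        struct sk_buff *skb = skb_peek(&sk->sk_receive_queue);

        /* -1 distinguishes "queue empty" from a zero-length datagram. */
        return skb ? skb->len : -1;
}

/* udp_poll() can then test explicitly for the empty case: */
if (!(sk->sk_shutdown & RCV_SHUTDOWN) && first_packet_length(sk) == -1)
        mask &= ~(POLLIN | POLLRDNORM);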
Fixes: e6afc8ace6 ("udp: remove headers from UDP packets before queueing")
Reported-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Tested-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Lock the lower socket in kcm_unattach, and release it for the call to
strp_done, since that function synchronously cancels the RX timers and
work queue.
Also add some status information to psock reporting.
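The resulting ordering is roughly (sketch):

        lock_sock(csk);
        /* detach the psock from the lower socket's callbacks under the lock */
        release_sock(csk);

        /* strp_done() cancels the RX timers and work synchronously, so the
         * lower socket lock must not be held across it. */
        strp_done(&psock->strp);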
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
When the upper layer unpauses a stream parser connection we need to
queue rx_work to make sure no events are missed.
Signed-off-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently, if you add a base chain whose name clashes with an existing
non-base chain, nf_tables doesn't complain about this. The same happens
if you update the chain type, the hook number or the priority.
With this patch, nf_tables bails out with EBUSY in case any of these
unsupported operations occurs:
# nft add table x
# nft add chain x y
# nft add chain x y { type nat hook input priority 0\; }
<cmdline>:1:1-49: Error: Could not process rule: Device or resource busy
add chain x y { type nat hook input priority 0; }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Introduce a new function to wrap the code that parses the chain hook
configuration so we can reuse this code to validate chain updates.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Perform terminal call ACK/ABORT retransmission in the connection processor
rather than in the call processor. With this change, once last_call is
set, no more incoming packets will be routed to the corresponding call or
any earlier calls on that channel (call IDs must only increase on a channel
on a connection).
Further, if a packet's callNumber is before the last_call ID, or a packet
is aimed at a successfully completed service call, then that packet is
discarded and ignored.
Signed-off-by: David Howells <dhowells@redhat.com>
Calculate the serial number skew in the data_ready handler when a packet
has been received and a connection looked up. The skew is cached in the
sk_buff's priority field.
The connection's highest received serial number is updated at this time as
well.
This can be done without locks or atomic instructions because, at this
point, the code is serialised by the socket.
This generates more accurate skew data because if the packet is offloaded
to a work queue before this is determined, more packets may come in,
bumping the highest serial number and thereby increasing the apparent skew.
This also removes some unnecessary atomic ops.
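Schematically (a sketch; the exact clamping in the real code may differ):

        /* In data_ready, while still serialised by the socket: */
        skew = (int)conn->hi_serial - (int)sp->hdr.serial;
        if (skew < 0) {
                /* newest packet so far: raise the high-water mark, no skew */
                conn->hi_serial = sp->hdr.serial;
                skew = 0;
        }
        skb->priority = skew;   /* cache the skew for later processing */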
Signed-off-by: David Howells <dhowells@redhat.com>
Set the connection expiry time when a connection becomes idle rather than
doing this in rxrpc_put_connection(). This makes the put path more
efficient (it is likely to be called occasionally whilst a connection has
outstanding calls because active workqueue items need to be given a ref).
The time is also preset in the connection allocator in case the connection
never gets used.
Signed-off-by: David Howells <dhowells@redhat.com>
Drop the channel number (channel) field from the rxrpc_call struct to
reduce the size of the call struct. The field is redundant: if the call is
attached to a connection, the channel can be obtained from there by AND'ing
with RXRPC_CHANNELMASK.
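That is, wherever the channel index is needed:

        channel = call->cid & RXRPC_CHANNELMASK;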
Signed-off-by: David Howells <dhowells@redhat.com>
When clearing a socket, we should clear the securing-in-progress list
first, then the accept queue and last the main call tree, because that's
the order in which a call progresses. Not that a call should move from the
accept queue to the main tree whilst we're shutting down a socket, but a
call could possibly move from secureq to acceptq whilst we're clearing up.
Signed-off-by: David Howells <dhowells@redhat.com>
Do a little tidying of the rxrpc_call struct:
(1) in_clientflag is no longer compared against the value that's in the
packet, so keeping it in this form isn't necessary. Use a flag in
flags instead and provide a pair of wrapper functions (sketched below).
(2) We don't read the epoch value, so that can go.
(3) Move what remains of the data that was used for hashing up in the
struct so that it sits with the channel number.
(4) Get rid of the local pointer. We can get at this via the socket
struct and we only use this in the procfs viewer.
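The wrapper pair might look like this (sketch):

static inline bool rxrpc_is_service_call(const struct rxrpc_call *call)
{
        return test_bit(RXRPC_CALL_IS_SERVICE, &call->flags);
}

static inline bool rxrpc_is_client_call(const struct rxrpc_call *call)
{
        return !rxrpc_is_service_call(call);
}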
Signed-off-by: David Howells <dhowells@redhat.com>
There is an sk_user_data mismatch between what kcm expects (its psock) and
what strparser expects (the strparser itself). Queued rx_work, for example
calling strp_check_rcv after socket buffer changes, will therefore never
complete.
sk_user_data is unused in strparser, so just remove the check.
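The removed check was of this shape (sketch):

        /* Removed: kcm stores its psock in sk_user_data, and strparser
         * never actually uses the field. */
        if (csk->sk_user_data)
                return -EALREADY;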
Signed-off-by: Dave Watson <davejwatson@fb.com>
Acked-by: Tom Herbert <tom@herbertland.com>
Signed-off-by: David S. Miller <davem@davemloft.net>