Merge tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:

 - Add redirect_neigh() BPF packet redirect helper, allowing stack
   traversal to be limited in common container configs and improving
   TCP back-pressure. Daniel reports ~10Gbps => ~15Gbps single stream
   TCP performance gain.

 - Expand netlink policy support and improve policy export to user
   space. (Ge)netlink core performs request validation according to
   declared policies. Expand the expressiveness of those policies
   (min/max length and bitmasks). Allow dumping policies for particular
   commands. This is used for feature discovery by user space (instead
   of kernel version parsing or trial and error).

 - Support IGMPv3/MLDv2 multicast listener discovery protocols in
   bridge.

 - Allow more than 255 IPv4 multicast interfaces.

 - Add support for Type of Service (ToS) reflection in SYN/SYN-ACK
   packets of TCPv6.

 - In Multipath TCP (MPTCP) support concurrent transmission of data on
   multiple subflows in a load balancing scenario. Enhance advertising
   addresses via the RM_ADDR/ADD_ADDR options.

 - Support SMC-Dv2 version of SMC, which enables multi-subnet
   deployments.

 - Allow more calls to same peer in RxRPC.

 - Support two new Controller Area Network (CAN) protocols - CAN-FD and
   ISO 15765-2:2016.

 - Add xfrm/IPsec compat layer, solving the 32bit user space on 64bit
   kernel problem.

 - Add TC actions for implementing MPLS L2 VPNs.

 - Improve nexthop code - e.g. handle various corner cases when nexthop
   objects are removed from groups better, skip unnecessary
   notifications and make it easier to offload nexthops into HW by
   converting to a blocking notifier.

 - Support adding and consuming TCP header options by BPF programs,
   opening the doors for easy experimental and deployment-specific TCP
   option use.

 - Reorganize TCP congestion control (CC) initialization to simplify
   life of TCP CC implemented in BPF.

 - Add support for shipping BPF programs with the kernel and loading
   them early on boot via the User Mode Driver mechanism, hence reusing
   all the user space infra we have.

 - Support sleepable BPF programs, initially targeting LSM and tracing.

 - Add bpf_d_path() helper for returning full path for given 'struct
   path'.

 - Make bpf_tail_call compatible with bpf-to-bpf calls.

 - Allow BPF programs to call map_update_elem on sockmaps.

 - Add BPF Type Format (BTF) support for type and enum discovery, as
   well as support for using BTF within the kernel itself (current use
   is for pretty printing structures).

 - Support listing and getting information about bpf_links via the bpf
   syscall.

 - Enhance kernel interfaces around NIC firmware update. Allow
   specifying overwrite mask to control if settings etc. are reset
   during update; report expected max time operation may take to users;
   support firmware activation without machine reboot incl. limits of
   how much impact reset may have (e.g. dropping link or not).

 - Extend ethtool configuration interface to report IEEE-standard
   counters, to limit the need for per-vendor logic in user space.

 - Adopt or extend devlink use for debug, monitoring, fw update in many
   drivers (dsa loop, ice, ionic, sja1105, qed, mlxsw, mv88e6xxx,
   dpaa2-eth).

 - In mlxsw expose critical and emergency SFP module temperature
   alarms. Refactor port buffer handling to make the defaults more
   suitable and support setting these values explicitly via the DCBNL
   interface.

 - Add XDP support for Intel's igb driver.

 - Support offloading TC flower classification and filtering rules to
   mscc_ocelot switches.

 - Add PTP support for Marvell Octeontx2 and PP2.2 hardware, as well as
   fixed interval period pulse generator and one-step timestamping in
   dpaa-eth.

 - Add support for various auth offloads in WiFi APs, e.g. SAE (WPA3)
   offload.

 - Add Lynx PHY/PCS MDIO module, and convert various drivers which have
   this HW to use it. Convert mvpp2 to split PCS.

 - Support Marvell Prestera 98DX3255 24-port switch ASICs, as well as
   7-port Mediatek MT7531 IP.

 - Add initial support for QCA6390 and IPQ6018 in ath11k WiFi driver,
   and wcn3680 support in wcn36xx.

 - Improve performance for packets which don't require much offloads on
   recent Mellanox NICs by 20% by making multiple packets share a
   descriptor entry.

 - Move chelsio inline crypto drivers (for TLS and IPsec) from the
   crypto subtree to drivers/net. Move MDIO drivers out of the phy
   directory.

 - Clean up a lot of W=1 warnings, reportedly the actively developed
   subsections of networking drivers should now build W=1 warning free.

 - Make sure drivers don't use in_interrupt() to dynamically adapt
   their code. Convert tasklets to use new tasklet_setup API (sadly
   this conversion is not yet complete).

* tag 'net-next-5.10' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2583 commits)
  Revert "bpfilter: Fix build error with CONFIG_BPFILTER_UMH"
  net, sockmap: Don't call bpf_prog_put() on NULL pointer
  bpf, selftest: Fix flaky tcp_hdr_options test when adding addr to lo
  bpf, sockmap: Add locking annotations to iterator
  netfilter: nftables: allow re-computing sctp CRC-32C in 'payload' statements
  net: fix pos incrementment in ipv6_route_seq_next
  net/smc: fix invalid return code in smcd_new_buf_create()
  net/smc: fix valid DMBE buffer sizes
  net/smc: fix use-after-free of delayed events
  bpfilter: Fix build error with CONFIG_BPFILTER_UMH
  cxgb4/ch_ipsec: Replace the module name to ch_ipsec from chcr
  net: sched: Fix suspicious RCU usage while accessing tcf_tunnel_info
  bpf: Fix register equivalence tracking.
  rxrpc: Fix loss of final ack on shutdown
  rxrpc: Fix bundle counting for exclusive connections
  netfilter: restore NF_INET_NUMHOOKS
  ibmveth: Identify ingress large send packets.
  ibmveth: Switch order of ibmveth_helper calls.
  cxgb4: handle 4-tuple PEDIT to NAT mode translation
  selftests: Add VRF route leaking tests
  ...
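For orientation, the hunks below add the generalized BPF local storage infrastructure (kernel/bpf/bpf_local_storage.c) and its first new user, inode-local storage for LSM programs (kernel/bpf/bpf_inode_storage.c). The following is a minimal sketch of how an LSM BPF program might use the new map type, assuming a libbpf-style build with a generated vmlinux.h; the lsm/file_open hook and the per-inode counter layout are illustrative assumptions, not taken from this merge:

  // SPDX-License-Identifier: GPL-2.0
  /* Sketch only: count opens per inode with the new inode storage map. */
  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  struct {
          __uint(type, BPF_MAP_TYPE_INODE_STORAGE);
          __uint(map_flags, BPF_F_NO_PREALLOC);   /* required for local storage maps */
          __type(key, int);
          __type(value, __u64);                   /* per-inode open counter (example layout) */
  } inode_opens SEC(".maps");

  SEC("lsm/file_open")
  int BPF_PROG(count_opens, struct file *file)
  {
          __u64 *cnt;

          /* Look up, or create on first use, the storage tied to this inode. */
          cnt = bpf_inode_storage_get(&inode_opens, file->f_inode, 0,
                                      BPF_LOCAL_STORAGE_GET_F_CREATE);
          if (cnt)
                  __sync_fetch_and_add(cnt, 1);

          return 0;       /* 0 = allow the open */
  }

  char LICENSE[] SEC("license") = "GPL";

As with the existing bpf_sk_storage, the storage lives with the object itself (here the inode) rather than in a map keyed by pointer, so it is torn down automatically when the inode goes away (see bpf_inode_storage_free() in the new file).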
@@ -5,6 +5,7 @@ CFLAGS_core.o += $(call cc-disable-warning, override-init)
obj-$(CONFIG_BPF_SYSCALL) += syscall.o verifier.o inode.o helpers.o tnum.o bpf_iter.o map_iter.o task_iter.o prog_iter.o
obj-$(CONFIG_BPF_SYSCALL) += hashtab.o arraymap.o percpu_freelist.o bpf_lru_list.o lpm_trie.o map_in_map.o
obj-$(CONFIG_BPF_SYSCALL) += local_storage.o queue_stack_maps.o ringbuf.o
obj-${CONFIG_BPF_LSM} += bpf_inode_storage.o
obj-$(CONFIG_BPF_SYSCALL) += disasm.o
obj-$(CONFIG_BPF_JIT) += trampoline.o
obj-$(CONFIG_BPF_SYSCALL) += btf.o
@@ -12,6 +13,7 @@ obj-$(CONFIG_BPF_JIT) += dispatcher.o
ifeq ($(CONFIG_NET),y)
obj-$(CONFIG_BPF_SYSCALL) += devmap.o
obj-$(CONFIG_BPF_SYSCALL) += cpumap.o
obj-$(CONFIG_BPF_SYSCALL) += bpf_local_storage.o
obj-$(CONFIG_BPF_SYSCALL) += offload.o
obj-$(CONFIG_BPF_SYSCALL) += net_namespace.o
endif
@@ -29,3 +31,4 @@ ifeq ($(CONFIG_BPF_JIT),y)
obj-$(CONFIG_BPF_SYSCALL) += bpf_struct_ops.o
obj-${CONFIG_BPF_LSM} += bpf_lsm.o
endif
obj-$(CONFIG_BPF_PRELOAD) += preload/
@@ -10,11 +10,13 @@
|
||||
#include <linux/filter.h>
|
||||
#include <linux/perf_event.h>
|
||||
#include <uapi/linux/btf.h>
|
||||
#include <linux/rcupdate_trace.h>
|
||||
|
||||
#include "map_in_map.h"
|
||||
|
||||
#define ARRAY_CREATE_FLAG_MASK \
|
||||
(BPF_F_NUMA_NODE | BPF_F_MMAPABLE | BPF_F_ACCESS_MASK)
|
||||
(BPF_F_NUMA_NODE | BPF_F_MMAPABLE | BPF_F_ACCESS_MASK | \
|
||||
BPF_F_PRESERVE_ELEMS | BPF_F_INNER_MAP)
|
||||
|
||||
static void bpf_array_free_percpu(struct bpf_array *array)
|
||||
{
|
||||
@@ -60,7 +62,11 @@ int array_map_alloc_check(union bpf_attr *attr)
|
||||
return -EINVAL;
|
||||
|
||||
if (attr->map_type != BPF_MAP_TYPE_ARRAY &&
|
||||
attr->map_flags & BPF_F_MMAPABLE)
|
||||
attr->map_flags & (BPF_F_MMAPABLE | BPF_F_INNER_MAP))
|
||||
return -EINVAL;
|
||||
|
||||
if (attr->map_type != BPF_MAP_TYPE_PERF_EVENT_ARRAY &&
|
||||
attr->map_flags & BPF_F_PRESERVE_ELEMS)
|
||||
return -EINVAL;
|
||||
|
||||
if (attr->value_size > KMALLOC_MAX_SIZE)
|
||||
@@ -208,7 +214,7 @@ static int array_map_direct_value_meta(const struct bpf_map *map, u64 imm,
|
||||
}
|
||||
|
||||
/* emit BPF instructions equivalent to C code of array_map_lookup_elem() */
|
||||
static u32 array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
|
||||
static int array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
|
||||
{
|
||||
struct bpf_array *array = container_of(map, struct bpf_array, map);
|
||||
struct bpf_insn *insn = insn_buf;
|
||||
@@ -217,6 +223,9 @@ static u32 array_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
|
||||
const int map_ptr = BPF_REG_1;
|
||||
const int index = BPF_REG_2;
|
||||
|
||||
if (map->map_flags & BPF_F_INNER_MAP)
|
||||
return -EOPNOTSUPP;
|
||||
|
||||
*insn++ = BPF_ALU64_IMM(BPF_ADD, map_ptr, offsetof(struct bpf_array, value));
|
||||
*insn++ = BPF_LDX_MEM(BPF_W, ret, index, 0);
|
||||
if (!map->bypass_spec_v1) {
|
||||
@@ -487,6 +496,15 @@ static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
|
||||
vma->vm_pgoff + pgoff);
|
||||
}
|
||||
|
||||
static bool array_map_meta_equal(const struct bpf_map *meta0,
|
||||
const struct bpf_map *meta1)
|
||||
{
|
||||
if (!bpf_map_meta_equal(meta0, meta1))
|
||||
return false;
|
||||
return meta0->map_flags & BPF_F_INNER_MAP ? true :
|
||||
meta0->max_entries == meta1->max_entries;
|
||||
}
|
||||
|
||||
struct bpf_iter_seq_array_map_info {
|
||||
struct bpf_map *map;
|
||||
void *percpu_value_buf;
|
||||
@@ -625,6 +643,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
|
||||
|
||||
static int array_map_btf_id;
|
||||
const struct bpf_map_ops array_map_ops = {
|
||||
.map_meta_equal = array_map_meta_equal,
|
||||
.map_alloc_check = array_map_alloc_check,
|
||||
.map_alloc = array_map_alloc,
|
||||
.map_free = array_map_free,
|
||||
@@ -647,6 +666,7 @@ const struct bpf_map_ops array_map_ops = {
|
||||
|
||||
static int percpu_array_map_btf_id;
|
||||
const struct bpf_map_ops percpu_array_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = array_map_alloc_check,
|
||||
.map_alloc = array_map_alloc,
|
||||
.map_free = array_map_free,
|
||||
@@ -888,6 +908,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
|
||||
struct bpf_prog *old,
|
||||
struct bpf_prog *new)
|
||||
{
|
||||
u8 *old_addr, *new_addr, *old_bypass_addr;
|
||||
struct prog_poke_elem *elem;
|
||||
struct bpf_array_aux *aux;
|
||||
|
||||
@@ -908,12 +929,13 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
|
||||
* there could be danger of use after free otherwise.
|
||||
* 2) Initially when we start tracking aux, the program
|
||||
* is not JITed yet and also does not have a kallsyms
|
||||
* entry. We skip these as poke->ip_stable is not
|
||||
* active yet. The JIT will do the final fixup before
|
||||
* setting it stable. The various poke->ip_stable are
|
||||
* successively activated, so tail call updates can
|
||||
* arrive from here while JIT is still finishing its
|
||||
* final fixup for non-activated poke entries.
|
||||
* entry. We skip these as poke->tailcall_target_stable
|
||||
* is not active yet. The JIT will do the final fixup
|
||||
* before setting it stable. The various
|
||||
* poke->tailcall_target_stable are successively
|
||||
* activated, so tail call updates can arrive from here
|
||||
* while JIT is still finishing its final fixup for
|
||||
* non-activated poke entries.
|
||||
* 3) On program teardown, the program's kallsym entry gets
|
||||
* removed out of RCU callback, but we can only untrack
|
||||
* from sleepable context, therefore bpf_arch_text_poke()
|
||||
@@ -930,7 +952,7 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
|
||||
* 5) Any other error happening below from bpf_arch_text_poke()
|
||||
* is an unexpected bug.
|
||||
*/
|
||||
if (!READ_ONCE(poke->ip_stable))
|
||||
if (!READ_ONCE(poke->tailcall_target_stable))
|
||||
continue;
|
||||
if (poke->reason != BPF_POKE_REASON_TAIL_CALL)
|
||||
continue;
|
||||
@@ -938,12 +960,39 @@ static void prog_array_map_poke_run(struct bpf_map *map, u32 key,
|
||||
poke->tail_call.key != key)
|
||||
continue;
|
||||
|
||||
ret = bpf_arch_text_poke(poke->ip, BPF_MOD_JUMP,
|
||||
old ? (u8 *)old->bpf_func +
|
||||
poke->adj_off : NULL,
|
||||
new ? (u8 *)new->bpf_func +
|
||||
poke->adj_off : NULL);
|
||||
BUG_ON(ret < 0 && ret != -EINVAL);
|
||||
old_bypass_addr = old ? NULL : poke->bypass_addr;
|
||||
old_addr = old ? (u8 *)old->bpf_func + poke->adj_off : NULL;
|
||||
new_addr = new ? (u8 *)new->bpf_func + poke->adj_off : NULL;
|
||||
|
||||
if (new) {
|
||||
ret = bpf_arch_text_poke(poke->tailcall_target,
|
||||
BPF_MOD_JUMP,
|
||||
old_addr, new_addr);
|
||||
BUG_ON(ret < 0 && ret != -EINVAL);
|
||||
if (!old) {
|
||||
ret = bpf_arch_text_poke(poke->tailcall_bypass,
|
||||
BPF_MOD_JUMP,
|
||||
poke->bypass_addr,
|
||||
NULL);
|
||||
BUG_ON(ret < 0 && ret != -EINVAL);
|
||||
}
|
||||
} else {
|
||||
ret = bpf_arch_text_poke(poke->tailcall_bypass,
|
||||
BPF_MOD_JUMP,
|
||||
old_bypass_addr,
|
||||
poke->bypass_addr);
|
||||
BUG_ON(ret < 0 && ret != -EINVAL);
|
||||
/* let other CPUs finish the execution of program
|
||||
* so that it will not be possible to expose them
|
||||
* to invalid nop, stack unwind, nop state
|
||||
*/
|
||||
if (!ret)
|
||||
synchronize_rcu();
|
||||
ret = bpf_arch_text_poke(poke->tailcall_target,
|
||||
BPF_MOD_JUMP,
|
||||
old_addr, NULL);
|
||||
BUG_ON(ret < 0 && ret != -EINVAL);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -1003,6 +1052,11 @@ static void prog_array_map_free(struct bpf_map *map)
|
||||
fd_array_map_free(map);
|
||||
}
|
||||
|
||||
/* prog_array->aux->{type,jited} is a runtime binding.
|
||||
* Doing static check alone in the verifier is not enough.
|
||||
* Thus, prog_array_map cannot be used as an inner_map
|
||||
* and map_meta_equal is not implemented.
|
||||
*/
|
||||
static int prog_array_map_btf_id;
|
||||
const struct bpf_map_ops prog_array_map_ops = {
|
||||
.map_alloc_check = fd_array_map_alloc_check,
|
||||
@@ -1090,6 +1144,9 @@ static void perf_event_fd_array_release(struct bpf_map *map,
|
||||
struct bpf_event_entry *ee;
|
||||
int i;
|
||||
|
||||
if (map->map_flags & BPF_F_PRESERVE_ELEMS)
|
||||
return;
|
||||
|
||||
rcu_read_lock();
|
||||
for (i = 0; i < array->map.max_entries; i++) {
|
||||
ee = READ_ONCE(array->ptrs[i]);
|
||||
@@ -1099,11 +1156,19 @@ static void perf_event_fd_array_release(struct bpf_map *map,
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
static void perf_event_fd_array_map_free(struct bpf_map *map)
|
||||
{
|
||||
if (map->map_flags & BPF_F_PRESERVE_ELEMS)
|
||||
bpf_fd_array_map_clear(map);
|
||||
fd_array_map_free(map);
|
||||
}
|
||||
|
||||
static int perf_event_array_map_btf_id;
|
||||
const struct bpf_map_ops perf_event_array_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = fd_array_map_alloc_check,
|
||||
.map_alloc = array_map_alloc,
|
||||
.map_free = fd_array_map_free,
|
||||
.map_free = perf_event_fd_array_map_free,
|
||||
.map_get_next_key = array_map_get_next_key,
|
||||
.map_lookup_elem = fd_array_map_lookup_elem,
|
||||
.map_delete_elem = fd_array_map_delete_elem,
|
||||
@@ -1137,6 +1202,7 @@ static void cgroup_fd_array_free(struct bpf_map *map)
|
||||
|
||||
static int cgroup_array_map_btf_id;
|
||||
const struct bpf_map_ops cgroup_array_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = fd_array_map_alloc_check,
|
||||
.map_alloc = array_map_alloc,
|
||||
.map_free = cgroup_fd_array_free,
|
||||
@@ -1190,7 +1256,7 @@ static void *array_of_map_lookup_elem(struct bpf_map *map, void *key)
|
||||
return READ_ONCE(*inner_map);
|
||||
}
|
||||
|
||||
static u32 array_of_map_gen_lookup(struct bpf_map *map,
|
||||
static int array_of_map_gen_lookup(struct bpf_map *map,
|
||||
struct bpf_insn *insn_buf)
|
||||
{
|
||||
struct bpf_array *array = container_of(map, struct bpf_array, map);
|
||||
|
kernel/bpf/bpf_inode_storage.c (new file, 272 lines)
@@ -0,0 +1,272 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/*
|
||||
* Copyright (c) 2019 Facebook
|
||||
* Copyright 2020 Google LLC.
|
||||
*/
|
||||
|
||||
#include <linux/rculist.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/hash.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/bpf.h>
|
||||
#include <linux/bpf_local_storage.h>
|
||||
#include <net/sock.h>
|
||||
#include <uapi/linux/sock_diag.h>
|
||||
#include <uapi/linux/btf.h>
|
||||
#include <linux/bpf_lsm.h>
|
||||
#include <linux/btf_ids.h>
|
||||
#include <linux/fdtable.h>
|
||||
|
||||
DEFINE_BPF_STORAGE_CACHE(inode_cache);
|
||||
|
||||
static struct bpf_local_storage __rcu **
|
||||
inode_storage_ptr(void *owner)
|
||||
{
|
||||
struct inode *inode = owner;
|
||||
struct bpf_storage_blob *bsb;
|
||||
|
||||
bsb = bpf_inode(inode);
|
||||
if (!bsb)
|
||||
return NULL;
|
||||
return &bsb->storage;
|
||||
}
|
||||
|
||||
static struct bpf_local_storage_data *inode_storage_lookup(struct inode *inode,
|
||||
struct bpf_map *map,
|
||||
bool cacheit_lockit)
|
||||
{
|
||||
struct bpf_local_storage *inode_storage;
|
||||
struct bpf_local_storage_map *smap;
|
||||
struct bpf_storage_blob *bsb;
|
||||
|
||||
bsb = bpf_inode(inode);
|
||||
if (!bsb)
|
||||
return NULL;
|
||||
|
||||
inode_storage = rcu_dereference(bsb->storage);
|
||||
if (!inode_storage)
|
||||
return NULL;
|
||||
|
||||
smap = (struct bpf_local_storage_map *)map;
|
||||
return bpf_local_storage_lookup(inode_storage, smap, cacheit_lockit);
|
||||
}
|
||||
|
||||
void bpf_inode_storage_free(struct inode *inode)
|
||||
{
|
||||
struct bpf_local_storage_elem *selem;
|
||||
struct bpf_local_storage *local_storage;
|
||||
bool free_inode_storage = false;
|
||||
struct bpf_storage_blob *bsb;
|
||||
struct hlist_node *n;
|
||||
|
||||
bsb = bpf_inode(inode);
|
||||
if (!bsb)
|
||||
return;
|
||||
|
||||
rcu_read_lock();
|
||||
|
||||
local_storage = rcu_dereference(bsb->storage);
|
||||
if (!local_storage) {
|
||||
rcu_read_unlock();
|
||||
return;
|
||||
}
|
||||
|
||||
/* Neither the bpf_prog nor the bpf-map's syscall
|
||||
* could be modifying the local_storage->list now.
|
||||
* Thus, no elem can be added-to or deleted-from the
|
||||
* local_storage->list by the bpf_prog or by the bpf-map's syscall.
|
||||
*
|
||||
* It is racing with bpf_local_storage_map_free() alone
|
||||
* when unlinking elem from the local_storage->list and
|
||||
* the map's bucket->list.
|
||||
*/
|
||||
raw_spin_lock_bh(&local_storage->lock);
|
||||
hlist_for_each_entry_safe(selem, n, &local_storage->list, snode) {
|
||||
/* Always unlink from map before unlinking from
|
||||
* local_storage.
|
||||
*/
|
||||
bpf_selem_unlink_map(selem);
|
||||
free_inode_storage = bpf_selem_unlink_storage_nolock(
|
||||
local_storage, selem, false);
|
||||
}
|
||||
raw_spin_unlock_bh(&local_storage->lock);
|
||||
rcu_read_unlock();
|
||||
|
||||
/* free_inode_storage should always be true as long as
|
||||
* local_storage->list was non-empty.
|
||||
*/
|
||||
if (free_inode_storage)
|
||||
kfree_rcu(local_storage, rcu);
|
||||
}
|
||||
|
||||
static void *bpf_fd_inode_storage_lookup_elem(struct bpf_map *map, void *key)
|
||||
{
|
||||
struct bpf_local_storage_data *sdata;
|
||||
struct file *f;
|
||||
int fd;
|
||||
|
||||
fd = *(int *)key;
|
||||
f = fget_raw(fd);
|
||||
if (!f)
|
||||
return NULL;
|
||||
|
||||
sdata = inode_storage_lookup(f->f_inode, map, true);
|
||||
fput(f);
|
||||
return sdata ? sdata->data : NULL;
|
||||
}
|
||||
|
||||
static int bpf_fd_inode_storage_update_elem(struct bpf_map *map, void *key,
|
||||
void *value, u64 map_flags)
|
||||
{
|
||||
struct bpf_local_storage_data *sdata;
|
||||
struct file *f;
|
||||
int fd;
|
||||
|
||||
fd = *(int *)key;
|
||||
f = fget_raw(fd);
|
||||
if (!f || !inode_storage_ptr(f->f_inode))
|
||||
return -EBADF;
|
||||
|
||||
sdata = bpf_local_storage_update(f->f_inode,
|
||||
(struct bpf_local_storage_map *)map,
|
||||
value, map_flags);
|
||||
fput(f);
|
||||
return PTR_ERR_OR_ZERO(sdata);
|
||||
}
|
||||
|
||||
static int inode_storage_delete(struct inode *inode, struct bpf_map *map)
|
||||
{
|
||||
struct bpf_local_storage_data *sdata;
|
||||
|
||||
sdata = inode_storage_lookup(inode, map, false);
|
||||
if (!sdata)
|
||||
return -ENOENT;
|
||||
|
||||
bpf_selem_unlink(SELEM(sdata));
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int bpf_fd_inode_storage_delete_elem(struct bpf_map *map, void *key)
|
||||
{
|
||||
struct file *f;
|
||||
int fd, err;
|
||||
|
||||
fd = *(int *)key;
|
||||
f = fget_raw(fd);
|
||||
if (!f)
|
||||
return -EBADF;
|
||||
|
||||
err = inode_storage_delete(f->f_inode, map);
|
||||
fput(f);
|
||||
return err;
|
||||
}
|
||||
|
||||
BPF_CALL_4(bpf_inode_storage_get, struct bpf_map *, map, struct inode *, inode,
|
||||
void *, value, u64, flags)
|
||||
{
|
||||
struct bpf_local_storage_data *sdata;
|
||||
|
||||
if (flags & ~(BPF_LOCAL_STORAGE_GET_F_CREATE))
|
||||
return (unsigned long)NULL;
|
||||
|
||||
/* explicitly check that the inode_storage_ptr is not
|
||||
* NULL as inode_storage_lookup returns NULL in this case and
|
||||
* bpf_local_storage_update expects the owner to have a
|
||||
* valid storage pointer.
|
||||
*/
|
||||
if (!inode_storage_ptr(inode))
|
||||
return (unsigned long)NULL;
|
||||
|
||||
sdata = inode_storage_lookup(inode, map, true);
|
||||
if (sdata)
|
||||
return (unsigned long)sdata->data;
|
||||
|
||||
/* This helper must only be called from where the inode is guaranteed
|
||||
* to have a refcount and cannot be freed.
|
||||
*/
|
||||
if (flags & BPF_LOCAL_STORAGE_GET_F_CREATE) {
|
||||
sdata = bpf_local_storage_update(
|
||||
inode, (struct bpf_local_storage_map *)map, value,
|
||||
BPF_NOEXIST);
|
||||
return IS_ERR(sdata) ? (unsigned long)NULL :
|
||||
(unsigned long)sdata->data;
|
||||
}
|
||||
|
||||
return (unsigned long)NULL;
|
||||
}
|
||||
|
||||
BPF_CALL_2(bpf_inode_storage_delete,
|
||||
struct bpf_map *, map, struct inode *, inode)
|
||||
{
|
||||
/* This helper must only be called from where the inode is guaranteed
|
||||
* to have a refcount and cannot be freed.
|
||||
*/
|
||||
return inode_storage_delete(inode, map);
|
||||
}
|
||||
|
||||
static int notsupp_get_next_key(struct bpf_map *map, void *key,
|
||||
void *next_key)
|
||||
{
|
||||
return -ENOTSUPP;
|
||||
}
|
||||
|
||||
static struct bpf_map *inode_storage_map_alloc(union bpf_attr *attr)
|
||||
{
|
||||
struct bpf_local_storage_map *smap;
|
||||
|
||||
smap = bpf_local_storage_map_alloc(attr);
|
||||
if (IS_ERR(smap))
|
||||
return ERR_CAST(smap);
|
||||
|
||||
smap->cache_idx = bpf_local_storage_cache_idx_get(&inode_cache);
|
||||
return &smap->map;
|
||||
}
|
||||
|
||||
static void inode_storage_map_free(struct bpf_map *map)
|
||||
{
|
||||
struct bpf_local_storage_map *smap;
|
||||
|
||||
smap = (struct bpf_local_storage_map *)map;
|
||||
bpf_local_storage_cache_idx_free(&inode_cache, smap->cache_idx);
|
||||
bpf_local_storage_map_free(smap);
|
||||
}
|
||||
|
||||
static int inode_storage_map_btf_id;
|
||||
const struct bpf_map_ops inode_storage_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = bpf_local_storage_map_alloc_check,
|
||||
.map_alloc = inode_storage_map_alloc,
|
||||
.map_free = inode_storage_map_free,
|
||||
.map_get_next_key = notsupp_get_next_key,
|
||||
.map_lookup_elem = bpf_fd_inode_storage_lookup_elem,
|
||||
.map_update_elem = bpf_fd_inode_storage_update_elem,
|
||||
.map_delete_elem = bpf_fd_inode_storage_delete_elem,
|
||||
.map_check_btf = bpf_local_storage_map_check_btf,
|
||||
.map_btf_name = "bpf_local_storage_map",
|
||||
.map_btf_id = &inode_storage_map_btf_id,
|
||||
.map_owner_storage_ptr = inode_storage_ptr,
|
||||
};
|
||||
|
||||
BTF_ID_LIST_SINGLE(bpf_inode_storage_btf_ids, struct, inode)
|
||||
|
||||
const struct bpf_func_proto bpf_inode_storage_get_proto = {
|
||||
.func = bpf_inode_storage_get,
|
||||
.gpl_only = false,
|
||||
.ret_type = RET_PTR_TO_MAP_VALUE_OR_NULL,
|
||||
.arg1_type = ARG_CONST_MAP_PTR,
|
||||
.arg2_type = ARG_PTR_TO_BTF_ID,
|
||||
.arg2_btf_id = &bpf_inode_storage_btf_ids[0],
|
||||
.arg3_type = ARG_PTR_TO_MAP_VALUE_OR_NULL,
|
||||
.arg4_type = ARG_ANYTHING,
|
||||
};
|
||||
|
||||
const struct bpf_func_proto bpf_inode_storage_delete_proto = {
|
||||
.func = bpf_inode_storage_delete,
|
||||
.gpl_only = false,
|
||||
.ret_type = RET_INTEGER,
|
||||
.arg1_type = ARG_CONST_MAP_PTR,
|
||||
.arg2_type = ARG_PTR_TO_BTF_ID,
|
||||
.arg2_btf_id = &bpf_inode_storage_btf_ids[0],
|
||||
};
|
@@ -88,8 +88,8 @@ static ssize_t bpf_seq_read(struct file *file, char __user *buf, size_t size,
|
||||
mutex_lock(&seq->lock);
|
||||
|
||||
if (!seq->buf) {
|
||||
seq->size = PAGE_SIZE;
|
||||
seq->buf = kmalloc(seq->size, GFP_KERNEL);
|
||||
seq->size = PAGE_SIZE << 3;
|
||||
seq->buf = kvmalloc(seq->size, GFP_KERNEL);
|
||||
if (!seq->buf) {
|
||||
err = -ENOMEM;
|
||||
goto done;
|
||||
@@ -390,10 +390,68 @@ out_unlock:
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void bpf_iter_link_show_fdinfo(const struct bpf_link *link,
|
||||
struct seq_file *seq)
|
||||
{
|
||||
struct bpf_iter_link *iter_link =
|
||||
container_of(link, struct bpf_iter_link, link);
|
||||
bpf_iter_show_fdinfo_t show_fdinfo;
|
||||
|
||||
seq_printf(seq,
|
||||
"target_name:\t%s\n",
|
||||
iter_link->tinfo->reg_info->target);
|
||||
|
||||
show_fdinfo = iter_link->tinfo->reg_info->show_fdinfo;
|
||||
if (show_fdinfo)
|
||||
show_fdinfo(&iter_link->aux, seq);
|
||||
}
|
||||
|
||||
static int bpf_iter_link_fill_link_info(const struct bpf_link *link,
|
||||
struct bpf_link_info *info)
|
||||
{
|
||||
struct bpf_iter_link *iter_link =
|
||||
container_of(link, struct bpf_iter_link, link);
|
||||
char __user *ubuf = u64_to_user_ptr(info->iter.target_name);
|
||||
bpf_iter_fill_link_info_t fill_link_info;
|
||||
u32 ulen = info->iter.target_name_len;
|
||||
const char *target_name;
|
||||
u32 target_len;
|
||||
|
||||
if (!ulen ^ !ubuf)
|
||||
return -EINVAL;
|
||||
|
||||
target_name = iter_link->tinfo->reg_info->target;
|
||||
target_len = strlen(target_name);
|
||||
info->iter.target_name_len = target_len + 1;
|
||||
|
||||
if (ubuf) {
|
||||
if (ulen >= target_len + 1) {
|
||||
if (copy_to_user(ubuf, target_name, target_len + 1))
|
||||
return -EFAULT;
|
||||
} else {
|
||||
char zero = '\0';
|
||||
|
||||
if (copy_to_user(ubuf, target_name, ulen - 1))
|
||||
return -EFAULT;
|
||||
if (put_user(zero, ubuf + ulen - 1))
|
||||
return -EFAULT;
|
||||
return -ENOSPC;
|
||||
}
|
||||
}
|
||||
|
||||
fill_link_info = iter_link->tinfo->reg_info->fill_link_info;
|
||||
if (fill_link_info)
|
||||
return fill_link_info(&iter_link->aux, info);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct bpf_link_ops bpf_iter_link_lops = {
|
||||
.release = bpf_iter_link_release,
|
||||
.dealloc = bpf_iter_link_dealloc,
|
||||
.update_prog = bpf_iter_link_replace,
|
||||
.show_fdinfo = bpf_iter_link_show_fdinfo,
|
||||
.fill_link_info = bpf_iter_link_fill_link_info,
|
||||
};
|
||||
|
||||
bool bpf_link_is_iter(struct bpf_link *link)
|
||||
|
kernel/bpf/bpf_local_storage.c (new file, 600 lines)
@@ -0,0 +1,600 @@
|
||||
// SPDX-License-Identifier: GPL-2.0
|
||||
/* Copyright (c) 2019 Facebook */
|
||||
#include <linux/rculist.h>
|
||||
#include <linux/list.h>
|
||||
#include <linux/hash.h>
|
||||
#include <linux/types.h>
|
||||
#include <linux/spinlock.h>
|
||||
#include <linux/bpf.h>
|
||||
#include <linux/btf_ids.h>
|
||||
#include <linux/bpf_local_storage.h>
|
||||
#include <net/sock.h>
|
||||
#include <uapi/linux/sock_diag.h>
|
||||
#include <uapi/linux/btf.h>
|
||||
|
||||
#define BPF_LOCAL_STORAGE_CREATE_FLAG_MASK (BPF_F_NO_PREALLOC | BPF_F_CLONE)
|
||||
|
||||
static struct bpf_local_storage_map_bucket *
|
||||
select_bucket(struct bpf_local_storage_map *smap,
|
||||
struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
return &smap->buckets[hash_ptr(selem, smap->bucket_log)];
|
||||
}
|
||||
|
||||
static int mem_charge(struct bpf_local_storage_map *smap, void *owner, u32 size)
|
||||
{
|
||||
struct bpf_map *map = &smap->map;
|
||||
|
||||
if (!map->ops->map_local_storage_charge)
|
||||
return 0;
|
||||
|
||||
return map->ops->map_local_storage_charge(smap, owner, size);
|
||||
}
|
||||
|
||||
static void mem_uncharge(struct bpf_local_storage_map *smap, void *owner,
|
||||
u32 size)
|
||||
{
|
||||
struct bpf_map *map = &smap->map;
|
||||
|
||||
if (map->ops->map_local_storage_uncharge)
|
||||
map->ops->map_local_storage_uncharge(smap, owner, size);
|
||||
}
|
||||
|
||||
static struct bpf_local_storage __rcu **
|
||||
owner_storage(struct bpf_local_storage_map *smap, void *owner)
|
||||
{
|
||||
struct bpf_map *map = &smap->map;
|
||||
|
||||
return map->ops->map_owner_storage_ptr(owner);
|
||||
}
|
||||
|
||||
static bool selem_linked_to_storage(const struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
return !hlist_unhashed(&selem->snode);
|
||||
}
|
||||
|
||||
static bool selem_linked_to_map(const struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
return !hlist_unhashed(&selem->map_node);
|
||||
}
|
||||
|
||||
struct bpf_local_storage_elem *
|
||||
bpf_selem_alloc(struct bpf_local_storage_map *smap, void *owner,
|
||||
void *value, bool charge_mem)
|
||||
{
|
||||
struct bpf_local_storage_elem *selem;
|
||||
|
||||
if (charge_mem && mem_charge(smap, owner, smap->elem_size))
|
||||
return NULL;
|
||||
|
||||
selem = kzalloc(smap->elem_size, GFP_ATOMIC | __GFP_NOWARN);
|
||||
if (selem) {
|
||||
if (value)
|
||||
memcpy(SDATA(selem)->data, value, smap->map.value_size);
|
||||
return selem;
|
||||
}
|
||||
|
||||
if (charge_mem)
|
||||
mem_uncharge(smap, owner, smap->elem_size);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
||||
/* local_storage->lock must be held and selem->local_storage == local_storage.
|
||||
* The caller must ensure selem->smap is still valid to be
|
||||
* dereferenced for its smap->elem_size and smap->cache_idx.
|
||||
*/
|
||||
bool bpf_selem_unlink_storage_nolock(struct bpf_local_storage *local_storage,
|
||||
struct bpf_local_storage_elem *selem,
|
||||
bool uncharge_mem)
|
||||
{
|
||||
struct bpf_local_storage_map *smap;
|
||||
bool free_local_storage;
|
||||
void *owner;
|
||||
|
||||
smap = rcu_dereference(SDATA(selem)->smap);
|
||||
owner = local_storage->owner;
|
||||
|
||||
/* All uncharging on the owner must be done first.
|
||||
* The owner may be freed once the last selem is unlinked
|
||||
* from local_storage.
|
||||
*/
|
||||
if (uncharge_mem)
|
||||
mem_uncharge(smap, owner, smap->elem_size);
|
||||
|
||||
free_local_storage = hlist_is_singular_node(&selem->snode,
|
||||
&local_storage->list);
|
||||
if (free_local_storage) {
|
||||
mem_uncharge(smap, owner, sizeof(struct bpf_local_storage));
|
||||
local_storage->owner = NULL;
|
||||
|
||||
/* After this RCU_INIT, owner may be freed and cannot be used */
|
||||
RCU_INIT_POINTER(*owner_storage(smap, owner), NULL);
|
||||
|
||||
/* local_storage is not freed now. local_storage->lock is
|
||||
* still held and raw_spin_unlock_bh(&local_storage->lock)
|
||||
* will be done by the caller.
|
||||
*
|
||||
* Although the unlock will be done under
|
||||
* rcu_read_lock(), it is more intuitive to
|
||||
* read if kfree_rcu(local_storage, rcu) is done
|
||||
* after the raw_spin_unlock_bh(&local_storage->lock).
|
||||
*
|
||||
* Hence, a "bool free_local_storage" is returned
|
||||
* to the caller which then calls the kfree_rcu()
|
||||
* after unlock.
|
||||
*/
|
||||
}
|
||||
hlist_del_init_rcu(&selem->snode);
|
||||
if (rcu_access_pointer(local_storage->cache[smap->cache_idx]) ==
|
||||
SDATA(selem))
|
||||
RCU_INIT_POINTER(local_storage->cache[smap->cache_idx], NULL);
|
||||
|
||||
kfree_rcu(selem, rcu);
|
||||
|
||||
return free_local_storage;
|
||||
}
|
||||
|
||||
static void __bpf_selem_unlink_storage(struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
struct bpf_local_storage *local_storage;
|
||||
bool free_local_storage = false;
|
||||
|
||||
if (unlikely(!selem_linked_to_storage(selem)))
|
||||
/* selem has already been unlinked from sk */
|
||||
return;
|
||||
|
||||
local_storage = rcu_dereference(selem->local_storage);
|
||||
raw_spin_lock_bh(&local_storage->lock);
|
||||
if (likely(selem_linked_to_storage(selem)))
|
||||
free_local_storage = bpf_selem_unlink_storage_nolock(
|
||||
local_storage, selem, true);
|
||||
raw_spin_unlock_bh(&local_storage->lock);
|
||||
|
||||
if (free_local_storage)
|
||||
kfree_rcu(local_storage, rcu);
|
||||
}
|
||||
|
||||
void bpf_selem_link_storage_nolock(struct bpf_local_storage *local_storage,
|
||||
struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
RCU_INIT_POINTER(selem->local_storage, local_storage);
|
||||
hlist_add_head_rcu(&selem->snode, &local_storage->list);
|
||||
}
|
||||
|
||||
void bpf_selem_unlink_map(struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
struct bpf_local_storage_map *smap;
|
||||
struct bpf_local_storage_map_bucket *b;
|
||||
|
||||
if (unlikely(!selem_linked_to_map(selem)))
|
||||
/* selem has already been unlinked from smap */
|
||||
return;
|
||||
|
||||
smap = rcu_dereference(SDATA(selem)->smap);
|
||||
b = select_bucket(smap, selem);
|
||||
raw_spin_lock_bh(&b->lock);
|
||||
if (likely(selem_linked_to_map(selem)))
|
||||
hlist_del_init_rcu(&selem->map_node);
|
||||
raw_spin_unlock_bh(&b->lock);
|
||||
}
|
||||
|
||||
void bpf_selem_link_map(struct bpf_local_storage_map *smap,
|
||||
struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
struct bpf_local_storage_map_bucket *b = select_bucket(smap, selem);
|
||||
|
||||
raw_spin_lock_bh(&b->lock);
|
||||
RCU_INIT_POINTER(SDATA(selem)->smap, smap);
|
||||
hlist_add_head_rcu(&selem->map_node, &b->list);
|
||||
raw_spin_unlock_bh(&b->lock);
|
||||
}
|
||||
|
||||
void bpf_selem_unlink(struct bpf_local_storage_elem *selem)
|
||||
{
|
||||
/* Always unlink from map before unlinking from local_storage
|
||||
* because selem will be freed after successfully unlinked from
|
||||
* the local_storage.
|
||||
*/
|
||||
bpf_selem_unlink_map(selem);
|
||||
__bpf_selem_unlink_storage(selem);
|
||||
}
|
||||
|
||||
struct bpf_local_storage_data *
|
||||
bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
|
||||
struct bpf_local_storage_map *smap,
|
||||
bool cacheit_lockit)
|
||||
{
|
||||
struct bpf_local_storage_data *sdata;
|
||||
struct bpf_local_storage_elem *selem;
|
||||
|
||||
/* Fast path (cache hit) */
|
||||
sdata = rcu_dereference(local_storage->cache[smap->cache_idx]);
|
||||
if (sdata && rcu_access_pointer(sdata->smap) == smap)
|
||||
return sdata;
|
||||
|
||||
/* Slow path (cache miss) */
|
||||
hlist_for_each_entry_rcu(selem, &local_storage->list, snode)
|
||||
if (rcu_access_pointer(SDATA(selem)->smap) == smap)
|
||||
break;
|
||||
|
||||
if (!selem)
|
||||
return NULL;
|
||||
|
||||
sdata = SDATA(selem);
|
||||
if (cacheit_lockit) {
|
||||
/* spinlock is needed to avoid racing with the
|
||||
* parallel delete. Otherwise, publishing an already
|
||||
* deleted sdata to the cache will become a use-after-free
|
||||
* problem in the next bpf_local_storage_lookup().
|
||||
*/
|
||||
raw_spin_lock_bh(&local_storage->lock);
|
||||
if (selem_linked_to_storage(selem))
|
||||
rcu_assign_pointer(local_storage->cache[smap->cache_idx],
|
||||
sdata);
|
||||
raw_spin_unlock_bh(&local_storage->lock);
|
||||
}
|
||||
|
||||
return sdata;
|
||||
}
|
||||
|
||||
static int check_flags(const struct bpf_local_storage_data *old_sdata,
|
||||
u64 map_flags)
|
||||
{
|
||||
if (old_sdata && (map_flags & ~BPF_F_LOCK) == BPF_NOEXIST)
|
||||
/* elem already exists */
|
||||
return -EEXIST;
|
||||
|
||||
if (!old_sdata && (map_flags & ~BPF_F_LOCK) == BPF_EXIST)
|
||||
/* elem doesn't exist, cannot update it */
|
||||
return -ENOENT;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int bpf_local_storage_alloc(void *owner,
|
||||
struct bpf_local_storage_map *smap,
|
||||
struct bpf_local_storage_elem *first_selem)
|
||||
{
|
||||
struct bpf_local_storage *prev_storage, *storage;
|
||||
struct bpf_local_storage **owner_storage_ptr;
|
||||
int err;
|
||||
|
||||
err = mem_charge(smap, owner, sizeof(*storage));
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
storage = kzalloc(sizeof(*storage), GFP_ATOMIC | __GFP_NOWARN);
|
||||
if (!storage) {
|
||||
err = -ENOMEM;
|
||||
goto uncharge;
|
||||
}
|
||||
|
||||
INIT_HLIST_HEAD(&storage->list);
|
||||
raw_spin_lock_init(&storage->lock);
|
||||
storage->owner = owner;
|
||||
|
||||
bpf_selem_link_storage_nolock(storage, first_selem);
|
||||
bpf_selem_link_map(smap, first_selem);
|
||||
|
||||
owner_storage_ptr =
|
||||
(struct bpf_local_storage **)owner_storage(smap, owner);
|
||||
/* Publish storage to the owner.
|
||||
* Instead of using any lock of the kernel object (i.e. owner),
|
||||
* cmpxchg will work with any kernel object regardless what
|
||||
* the running context is, bh, irq...etc.
|
||||
*
|
||||
* From now on, the owner->storage pointer (e.g. sk->sk_bpf_storage)
|
||||
* is protected by the storage->lock. Hence, when freeing
|
||||
* the owner->storage, the storage->lock must be held before
|
||||
* setting owner->storage ptr to NULL.
|
||||
*/
|
||||
prev_storage = cmpxchg(owner_storage_ptr, NULL, storage);
|
||||
if (unlikely(prev_storage)) {
|
||||
bpf_selem_unlink_map(first_selem);
|
||||
err = -EAGAIN;
|
||||
goto uncharge;
|
||||
|
||||
/* Note that even though first_selem was linked to smap's
|
||||
* bucket->list, first_selem can be freed immediately
|
||||
* (instead of kfree_rcu) because
|
||||
* bpf_local_storage_map_free() does a
|
||||
* synchronize_rcu() before walking the bucket->list.
|
||||
* Hence, no one is accessing selem from the
|
||||
* bucket->list under rcu_read_lock().
|
||||
*/
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
uncharge:
|
||||
kfree(storage);
|
||||
mem_uncharge(smap, owner, sizeof(*storage));
|
||||
return err;
|
||||
}
|
||||
|
||||
/* sk cannot be going away because it is linking new elem
|
||||
* to sk->sk_bpf_storage. (i.e. sk->sk_refcnt cannot be 0).
|
||||
* Otherwise, it will become a leak (and other memory issues
|
||||
* during map destruction).
|
||||
*/
|
||||
struct bpf_local_storage_data *
|
||||
bpf_local_storage_update(void *owner, struct bpf_local_storage_map *smap,
|
||||
void *value, u64 map_flags)
|
||||
{
|
||||
struct bpf_local_storage_data *old_sdata = NULL;
|
||||
struct bpf_local_storage_elem *selem;
|
||||
struct bpf_local_storage *local_storage;
|
||||
int err;
|
||||
|
||||
/* BPF_EXIST and BPF_NOEXIST cannot be both set */
|
||||
if (unlikely((map_flags & ~BPF_F_LOCK) > BPF_EXIST) ||
|
||||
/* BPF_F_LOCK can only be used in a value with spin_lock */
|
||||
unlikely((map_flags & BPF_F_LOCK) &&
|
||||
!map_value_has_spin_lock(&smap->map)))
|
||||
return ERR_PTR(-EINVAL);
|
||||
|
||||
local_storage = rcu_dereference(*owner_storage(smap, owner));
|
||||
if (!local_storage || hlist_empty(&local_storage->list)) {
|
||||
/* Very first elem for the owner */
|
||||
err = check_flags(NULL, map_flags);
|
||||
if (err)
|
||||
return ERR_PTR(err);
|
||||
|
||||
selem = bpf_selem_alloc(smap, owner, value, true);
|
||||
if (!selem)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
|
||||
err = bpf_local_storage_alloc(owner, smap, selem);
|
||||
if (err) {
|
||||
kfree(selem);
|
||||
mem_uncharge(smap, owner, smap->elem_size);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
return SDATA(selem);
|
||||
}
|
||||
|
||||
if ((map_flags & BPF_F_LOCK) && !(map_flags & BPF_NOEXIST)) {
|
||||
/* Hoping to find an old_sdata to do inline update
|
||||
* such that it can avoid taking the local_storage->lock
|
||||
* and changing the lists.
|
||||
*/
|
||||
old_sdata =
|
||||
bpf_local_storage_lookup(local_storage, smap, false);
|
||||
err = check_flags(old_sdata, map_flags);
|
||||
if (err)
|
||||
return ERR_PTR(err);
|
||||
if (old_sdata && selem_linked_to_storage(SELEM(old_sdata))) {
|
||||
copy_map_value_locked(&smap->map, old_sdata->data,
|
||||
value, false);
|
||||
return old_sdata;
|
||||
}
|
||||
}
|
||||
|
||||
raw_spin_lock_bh(&local_storage->lock);
|
||||
|
||||
/* Recheck local_storage->list under local_storage->lock */
|
||||
if (unlikely(hlist_empty(&local_storage->list))) {
|
||||
/* A parallel del is happening and local_storage is going
|
||||
* away. It has just been checked before, so very
|
||||
* unlikely. Return instead of retry to keep things
|
||||
* simple.
|
||||
*/
|
||||
err = -EAGAIN;
|
||||
goto unlock_err;
|
||||
}
|
||||
|
||||
old_sdata = bpf_local_storage_lookup(local_storage, smap, false);
|
||||
err = check_flags(old_sdata, map_flags);
|
||||
if (err)
|
||||
goto unlock_err;
|
||||
|
||||
if (old_sdata && (map_flags & BPF_F_LOCK)) {
|
||||
copy_map_value_locked(&smap->map, old_sdata->data, value,
|
||||
false);
|
||||
selem = SELEM(old_sdata);
|
||||
goto unlock;
|
||||
}
|
||||
|
||||
/* local_storage->lock is held. Hence, we are sure
|
||||
* we can unlink and uncharge the old_sdata successfully
|
||||
* later. Hence, instead of charging the new selem now
|
||||
* and then uncharge the old selem later (which may cause
|
||||
* a potential but unnecessary charge failure), avoid taking
|
||||
* a charge at all here (the "!old_sdata" check) and the
|
||||
* old_sdata will not be uncharged later during
|
||||
* bpf_selem_unlink_storage_nolock().
|
||||
*/
|
||||
selem = bpf_selem_alloc(smap, owner, value, !old_sdata);
|
||||
if (!selem) {
|
||||
err = -ENOMEM;
|
||||
goto unlock_err;
|
||||
}
|
||||
|
||||
/* First, link the new selem to the map */
|
||||
bpf_selem_link_map(smap, selem);
|
||||
|
||||
/* Second, link (and publish) the new selem to local_storage */
|
||||
bpf_selem_link_storage_nolock(local_storage, selem);
|
||||
|
||||
/* Third, remove old selem, SELEM(old_sdata) */
|
||||
if (old_sdata) {
|
||||
bpf_selem_unlink_map(SELEM(old_sdata));
|
||||
bpf_selem_unlink_storage_nolock(local_storage, SELEM(old_sdata),
|
||||
false);
|
||||
}
|
||||
|
||||
unlock:
|
||||
raw_spin_unlock_bh(&local_storage->lock);
|
||||
return SDATA(selem);
|
||||
|
||||
unlock_err:
|
||||
raw_spin_unlock_bh(&local_storage->lock);
|
||||
return ERR_PTR(err);
|
||||
}
|
||||
|
||||
u16 bpf_local_storage_cache_idx_get(struct bpf_local_storage_cache *cache)
|
||||
{
|
||||
u64 min_usage = U64_MAX;
|
||||
u16 i, res = 0;
|
||||
|
||||
spin_lock(&cache->idx_lock);
|
||||
|
||||
for (i = 0; i < BPF_LOCAL_STORAGE_CACHE_SIZE; i++) {
|
||||
if (cache->idx_usage_counts[i] < min_usage) {
|
||||
min_usage = cache->idx_usage_counts[i];
|
||||
res = i;
|
||||
|
||||
/* Found a free cache_idx */
|
||||
if (!min_usage)
|
||||
break;
|
||||
}
|
||||
}
|
||||
cache->idx_usage_counts[res]++;
|
||||
|
||||
spin_unlock(&cache->idx_lock);
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
void bpf_local_storage_cache_idx_free(struct bpf_local_storage_cache *cache,
|
||||
u16 idx)
|
||||
{
|
||||
spin_lock(&cache->idx_lock);
|
||||
cache->idx_usage_counts[idx]--;
|
||||
spin_unlock(&cache->idx_lock);
|
||||
}
|
||||
|
||||
void bpf_local_storage_map_free(struct bpf_local_storage_map *smap)
|
||||
{
|
||||
struct bpf_local_storage_elem *selem;
|
||||
struct bpf_local_storage_map_bucket *b;
|
||||
unsigned int i;
|
||||
|
||||
/* Note that this map might be concurrently cloned from
|
||||
* bpf_sk_storage_clone. Wait for any existing bpf_sk_storage_clone
|
||||
* RCU read section to finish before proceeding. New RCU
|
||||
* read sections should be prevented via bpf_map_inc_not_zero.
|
||||
*/
|
||||
synchronize_rcu();
|
||||
|
||||
/* bpf prog and the userspace can no longer access this map
|
||||
* now. No new selem (of this map) can be added
|
||||
* to the owner->storage or to the map bucket's list.
|
||||
*
|
||||
* The elem of this map can be cleaned up here
|
||||
* or when the storage is freed e.g.
|
||||
* by bpf_sk_storage_free() during __sk_destruct().
|
||||
*/
|
||||
for (i = 0; i < (1U << smap->bucket_log); i++) {
|
||||
b = &smap->buckets[i];
|
||||
|
||||
rcu_read_lock();
|
||||
/* No one is adding to b->list now */
|
||||
while ((selem = hlist_entry_safe(
|
||||
rcu_dereference_raw(hlist_first_rcu(&b->list)),
|
||||
struct bpf_local_storage_elem, map_node))) {
|
||||
bpf_selem_unlink(selem);
|
||||
cond_resched_rcu();
|
||||
}
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
/* While freeing the storage we may still need to access the map.
|
||||
*
|
||||
* e.g. when bpf_sk_storage_free() has unlinked selem from the map
|
||||
* which then made the above while((selem = ...)) loop
|
||||
* exit immediately.
|
||||
*
|
||||
* However, while freeing the storage one still needs to access the
|
||||
* smap->elem_size to do the uncharging in
|
||||
* bpf_selem_unlink_storage_nolock().
|
||||
*
|
||||
* Hence, wait another rcu grace period for the storage to be freed.
|
||||
*/
|
||||
synchronize_rcu();
|
||||
|
||||
kvfree(smap->buckets);
|
||||
kfree(smap);
|
||||
}
|
||||
|
||||
int bpf_local_storage_map_alloc_check(union bpf_attr *attr)
|
||||
{
|
||||
if (attr->map_flags & ~BPF_LOCAL_STORAGE_CREATE_FLAG_MASK ||
|
||||
!(attr->map_flags & BPF_F_NO_PREALLOC) ||
|
||||
attr->max_entries ||
|
||||
attr->key_size != sizeof(int) || !attr->value_size ||
|
||||
/* Enforce BTF for userspace sk dumping */
|
||||
!attr->btf_key_type_id || !attr->btf_value_type_id)
|
||||
return -EINVAL;
|
||||
|
||||
if (!bpf_capable())
|
||||
return -EPERM;
|
||||
|
||||
if (attr->value_size > BPF_LOCAL_STORAGE_MAX_VALUE_SIZE)
|
||||
return -E2BIG;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct bpf_local_storage_map *bpf_local_storage_map_alloc(union bpf_attr *attr)
|
||||
{
|
||||
struct bpf_local_storage_map *smap;
|
||||
unsigned int i;
|
||||
u32 nbuckets;
|
||||
u64 cost;
|
||||
int ret;
|
||||
|
||||
smap = kzalloc(sizeof(*smap), GFP_USER | __GFP_NOWARN);
|
||||
if (!smap)
|
||||
return ERR_PTR(-ENOMEM);
|
||||
bpf_map_init_from_attr(&smap->map, attr);
|
||||
|
||||
nbuckets = roundup_pow_of_two(num_possible_cpus());
|
||||
/* Use at least 2 buckets, select_bucket() is undefined behavior with 1 bucket */
|
||||
nbuckets = max_t(u32, 2, nbuckets);
|
||||
smap->bucket_log = ilog2(nbuckets);
|
||||
cost = sizeof(*smap->buckets) * nbuckets + sizeof(*smap);
|
||||
|
||||
ret = bpf_map_charge_init(&smap->map.memory, cost);
|
||||
if (ret < 0) {
|
||||
kfree(smap);
|
||||
return ERR_PTR(ret);
|
||||
}
|
||||
|
||||
smap->buckets = kvcalloc(sizeof(*smap->buckets), nbuckets,
|
||||
GFP_USER | __GFP_NOWARN);
|
||||
if (!smap->buckets) {
|
||||
bpf_map_charge_finish(&smap->map.memory);
|
||||
kfree(smap);
|
||||
return ERR_PTR(-ENOMEM);
|
||||
}
|
||||
|
||||
for (i = 0; i < nbuckets; i++) {
|
||||
INIT_HLIST_HEAD(&smap->buckets[i].list);
|
||||
raw_spin_lock_init(&smap->buckets[i].lock);
|
||||
}
|
||||
|
||||
smap->elem_size =
|
||||
sizeof(struct bpf_local_storage_elem) + attr->value_size;
|
||||
|
||||
return smap;
|
||||
}
|
||||
|
||||
int bpf_local_storage_map_check_btf(const struct bpf_map *map,
|
||||
const struct btf *btf,
|
||||
const struct btf_type *key_type,
|
||||
const struct btf_type *value_type)
|
||||
{
|
||||
u32 int_data;
|
||||
|
||||
if (BTF_INFO_KIND(key_type->info) != BTF_KIND_INT)
|
||||
return -EINVAL;
|
||||
|
||||
int_data = *(u32 *)(key_type + 1);
|
||||
if (BTF_INT_BITS(int_data) != 32 || BTF_INT_OFFSET(int_data))
|
||||
return -EINVAL;
|
||||
|
||||
return 0;
|
||||
}
|
@@ -11,6 +11,8 @@
|
||||
#include <linux/bpf_lsm.h>
|
||||
#include <linux/kallsyms.h>
|
||||
#include <linux/bpf_verifier.h>
|
||||
#include <net/bpf_sk_storage.h>
|
||||
#include <linux/bpf_local_storage.h>
|
||||
|
||||
/* For every LSM hook that allows attachment of BPF programs, declare a nop
|
||||
* function where a BPF program can be attached.
|
||||
@@ -45,10 +47,27 @@ int bpf_lsm_verify_prog(struct bpf_verifier_log *vlog,
|
||||
return 0;
|
||||
}
|
||||
|
||||
static const struct bpf_func_proto *
|
||||
bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
|
||||
{
|
||||
switch (func_id) {
|
||||
case BPF_FUNC_inode_storage_get:
|
||||
return &bpf_inode_storage_get_proto;
|
||||
case BPF_FUNC_inode_storage_delete:
|
||||
return &bpf_inode_storage_delete_proto;
|
||||
case BPF_FUNC_sk_storage_get:
|
||||
return &bpf_sk_storage_get_proto;
|
||||
case BPF_FUNC_sk_storage_delete:
|
||||
return &bpf_sk_storage_delete_proto;
|
||||
default:
|
||||
return tracing_prog_func_proto(func_id, prog);
|
||||
}
|
||||
}
|
||||
|
||||
const struct bpf_prog_ops lsm_prog_ops = {
|
||||
};
|
||||
|
||||
const struct bpf_verifier_ops lsm_verifier_ops = {
|
||||
.get_func_proto = tracing_prog_func_proto,
|
||||
.get_func_proto = bpf_lsm_func_proto,
|
||||
.is_valid_access = btf_ctx_access,
|
||||
};
|
||||
|
@@ -298,8 +298,7 @@ static int check_zero_holes(const struct btf_type *t, void *data)
|
||||
return -EINVAL;
|
||||
|
||||
mtype = btf_type_by_id(btf_vmlinux, member->type);
|
||||
mtype = btf_resolve_size(btf_vmlinux, mtype, &msize,
|
||||
NULL, NULL);
|
||||
mtype = btf_resolve_size(btf_vmlinux, mtype, &msize);
|
||||
if (IS_ERR(mtype))
|
||||
return PTR_ERR(mtype);
|
||||
prev_mend = moff + msize;
|
||||
@@ -396,8 +395,7 @@ static int bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
|
||||
u32 msize;
|
||||
|
||||
mtype = btf_type_by_id(btf_vmlinux, member->type);
|
||||
mtype = btf_resolve_size(btf_vmlinux, mtype, &msize,
|
||||
NULL, NULL);
|
||||
mtype = btf_resolve_size(btf_vmlinux, mtype, &msize);
|
||||
if (IS_ERR(mtype)) {
|
||||
err = PTR_ERR(mtype);
|
||||
goto reset_unlock;
|
||||
|
kernel/bpf/btf.c (1229 changed lines; diff suppressed because it is too large)
@@ -98,6 +98,8 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
|
||||
fp->jit_requested = ebpf_jit_enabled();
|
||||
|
||||
INIT_LIST_HEAD_RCU(&fp->aux->ksym.lnode);
|
||||
mutex_init(&fp->aux->used_maps_mutex);
|
||||
mutex_init(&fp->aux->dst_mutex);
|
||||
|
||||
return fp;
|
||||
}
|
||||
@@ -253,6 +255,8 @@ struct bpf_prog *bpf_prog_realloc(struct bpf_prog *fp_old, unsigned int size,
|
||||
void __bpf_prog_free(struct bpf_prog *fp)
|
||||
{
|
||||
if (fp->aux) {
|
||||
mutex_destroy(&fp->aux->used_maps_mutex);
|
||||
mutex_destroy(&fp->aux->dst_mutex);
|
||||
free_percpu(fp->aux->stats);
|
||||
kfree(fp->aux->poke_tab);
|
||||
kfree(fp->aux);
|
||||
@@ -773,7 +777,8 @@ int bpf_jit_add_poke_descriptor(struct bpf_prog *prog,
|
||||
|
||||
if (size > poke_tab_max)
|
||||
return -ENOSPC;
|
||||
if (poke->ip || poke->ip_stable || poke->adj_off)
|
||||
if (poke->tailcall_target || poke->tailcall_target_stable ||
|
||||
poke->tailcall_bypass || poke->adj_off || poke->bypass_addr)
|
||||
return -EINVAL;
|
||||
|
||||
switch (poke->reason) {
|
||||
@@ -1747,8 +1752,9 @@ bool bpf_prog_array_compatible(struct bpf_array *array,
|
||||
static int bpf_check_tail_call(const struct bpf_prog *fp)
|
||||
{
|
||||
struct bpf_prog_aux *aux = fp->aux;
|
||||
int i;
|
||||
int i, ret = 0;
|
||||
|
||||
mutex_lock(&aux->used_maps_mutex);
|
||||
for (i = 0; i < aux->used_map_cnt; i++) {
|
||||
struct bpf_map *map = aux->used_maps[i];
|
||||
struct bpf_array *array;
|
||||
@@ -1757,11 +1763,15 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
|
||||
continue;
|
||||
|
||||
array = container_of(map, struct bpf_array, map);
|
||||
if (!bpf_prog_array_compatible(array, fp))
|
||||
return -EINVAL;
|
||||
if (!bpf_prog_array_compatible(array, fp)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
out:
|
||||
mutex_unlock(&aux->used_maps_mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
static void bpf_prog_select_func(struct bpf_prog *fp)
|
||||
@@ -2130,7 +2140,8 @@ static void bpf_prog_free_deferred(struct work_struct *work)
|
||||
if (aux->prog->has_callchain_buf)
|
||||
put_callchain_buffers();
|
||||
#endif
|
||||
bpf_trampoline_put(aux->trampoline);
|
||||
if (aux->dst_trampoline)
|
||||
bpf_trampoline_put(aux->dst_trampoline);
|
||||
for (i = 0; i < aux->func_cnt; i++)
|
||||
bpf_jit_free(aux->func[i]);
|
||||
if (aux->func_cnt) {
|
||||
@@ -2146,8 +2157,8 @@ void bpf_prog_free(struct bpf_prog *fp)
|
||||
{
|
||||
struct bpf_prog_aux *aux = fp->aux;
|
||||
|
||||
if (aux->linked_prog)
|
||||
bpf_prog_put(aux->linked_prog);
|
||||
if (aux->dst_prog)
|
||||
bpf_prog_put(aux->dst_prog);
|
||||
INIT_WORK(&aux->work, bpf_prog_free_deferred);
|
||||
schedule_work(&aux->work);
|
||||
}
|
||||
@@ -2208,6 +2219,8 @@ const struct bpf_func_proto bpf_get_current_cgroup_id_proto __weak;
|
||||
const struct bpf_func_proto bpf_get_current_ancestor_cgroup_id_proto __weak;
|
||||
const struct bpf_func_proto bpf_get_local_storage_proto __weak;
|
||||
const struct bpf_func_proto bpf_get_ns_current_pid_tgid_proto __weak;
|
||||
const struct bpf_func_proto bpf_snprintf_btf_proto __weak;
|
||||
const struct bpf_func_proto bpf_seq_printf_btf_proto __weak;
|
||||
|
||||
const struct bpf_func_proto * __weak bpf_get_trace_printk_proto(void)
|
||||
{
|
||||
|
@@ -79,8 +79,6 @@ struct bpf_cpu_map {
|
||||
|
||||
static DEFINE_PER_CPU(struct list_head, cpu_map_flush_list);
|
||||
|
||||
static int bq_flush_to_queue(struct xdp_bulk_queue *bq);
|
||||
|
||||
static struct bpf_map *cpu_map_alloc(union bpf_attr *attr)
|
||||
{
|
||||
u32 value_size = attr->value_size;
|
||||
@@ -157,8 +155,7 @@ static void cpu_map_kthread_stop(struct work_struct *work)
|
||||
kthread_stop(rcpu->kthread);
|
||||
}
|
||||
|
||||
static struct sk_buff *cpu_map_build_skb(struct bpf_cpu_map_entry *rcpu,
|
||||
struct xdp_frame *xdpf,
|
||||
static struct sk_buff *cpu_map_build_skb(struct xdp_frame *xdpf,
|
||||
struct sk_buff *skb)
|
||||
{
|
||||
unsigned int hard_start_headroom;
|
||||
@@ -367,7 +364,7 @@ static int cpu_map_kthread_run(void *data)
|
||||
struct sk_buff *skb = skbs[i];
|
||||
int ret;
|
||||
|
||||
skb = cpu_map_build_skb(rcpu, xdpf, skb);
|
||||
skb = cpu_map_build_skb(xdpf, skb);
|
||||
if (!skb) {
|
||||
xdp_return_frame(xdpf);
|
||||
continue;
|
||||
@@ -658,6 +655,7 @@ static int cpu_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
|
||||
|
||||
static int cpu_map_btf_id;
|
||||
const struct bpf_map_ops cpu_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc = cpu_map_alloc,
|
||||
.map_free = cpu_map_free,
|
||||
.map_delete_elem = cpu_map_delete_elem,
|
||||
@@ -669,7 +667,7 @@ const struct bpf_map_ops cpu_map_ops = {
|
||||
.map_btf_id = &cpu_map_btf_id,
|
||||
};
|
||||
|
||||
static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
|
||||
static void bq_flush_to_queue(struct xdp_bulk_queue *bq)
|
||||
{
|
||||
struct bpf_cpu_map_entry *rcpu = bq->obj;
|
||||
unsigned int processed = 0, drops = 0;
|
||||
@@ -678,7 +676,7 @@ static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
|
||||
int i;
|
||||
|
||||
if (unlikely(!bq->count))
|
||||
return 0;
|
||||
return;
|
||||
|
||||
q = rcpu->queue;
|
||||
spin_lock(&q->producer_lock);
|
||||
@@ -701,13 +699,12 @@ static int bq_flush_to_queue(struct xdp_bulk_queue *bq)
|
||||
|
||||
/* Feedback loop via tracepoints */
|
||||
trace_xdp_cpumap_enqueue(rcpu->map_id, processed, drops, to_cpu);
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Runs under RCU-read-side, plus in softirq under NAPI protection.
|
||||
* Thus, safe percpu variable access.
|
||||
*/
|
||||
static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
|
||||
static void bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
|
||||
{
|
||||
struct list_head *flush_list = this_cpu_ptr(&cpu_map_flush_list);
|
||||
struct xdp_bulk_queue *bq = this_cpu_ptr(rcpu->bulkq);
|
||||
@@ -728,8 +725,6 @@ static int bq_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_frame *xdpf)
|
||||
|
||||
if (!bq->flush_node.prev)
|
||||
list_add(&bq->flush_node, flush_list);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
int cpu_map_enqueue(struct bpf_cpu_map_entry *rcpu, struct xdp_buff *xdp,
|
||||
|
kernel/bpf/devmap.c

@@ -341,14 +341,14 @@ bool dev_map_can_have_prog(struct bpf_map *map)
     return false;
 }
 
-static int bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
+static void bq_xmit_all(struct xdp_dev_bulk_queue *bq, u32 flags)
 {
     struct net_device *dev = bq->dev;
     int sent = 0, drops = 0, err = 0;
     int i;
 
     if (unlikely(!bq->count))
-        return 0;
+        return;
 
     for (i = 0; i < bq->count; i++) {
         struct xdp_frame *xdpf = bq->q[i];
@@ -369,7 +369,7 @@ out:
     trace_xdp_devmap_xmit(bq->dev_rx, dev, sent, drops, err);
     bq->dev_rx = NULL;
     __list_del_clearprev(&bq->flush_node);
-    return 0;
+    return;
 error:
     /* If ndo_xdp_xmit fails with an errno, no frames have been
      * xmit'ed and it's our responsibility to them free all.
@@ -421,8 +421,8 @@ struct bpf_dtab_netdev *__dev_map_lookup_elem(struct bpf_map *map, u32 key)
 /* Runs under RCU-read-side, plus in softirq under NAPI protection.
  * Thus, safe percpu variable access.
  */
-static int bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
-                      struct net_device *dev_rx)
+static void bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
+                       struct net_device *dev_rx)
 {
     struct list_head *flush_list = this_cpu_ptr(&dev_flush_list);
     struct xdp_dev_bulk_queue *bq = this_cpu_ptr(dev->xdp_bulkq);
@@ -441,8 +441,6 @@ static int bq_enqueue(struct net_device *dev, struct xdp_frame *xdpf,
 
     if (!bq->flush_node.prev)
         list_add(&bq->flush_node, flush_list);
-
-    return 0;
 }
 
 static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
@@ -462,7 +460,8 @@ static inline int __xdp_enqueue(struct net_device *dev, struct xdp_buff *xdp,
     if (unlikely(!xdpf))
         return -EOVERFLOW;
 
-    return bq_enqueue(dev, xdpf, dev_rx);
+    bq_enqueue(dev, xdpf, dev_rx);
+    return 0;
 }
 
 static struct xdp_buff *dev_map_run_prog(struct net_device *dev,
@@ -751,6 +750,7 @@ static int dev_map_hash_update_elem(struct bpf_map *map, void *key, void *value,
 
 static int dev_map_btf_id;
 const struct bpf_map_ops dev_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc = dev_map_alloc,
     .map_free = dev_map_free,
     .map_get_next_key = dev_map_get_next_key,
@@ -764,6 +764,7 @@ const struct bpf_map_ops dev_map_ops = {
 
 static int dev_map_hash_map_btf_id;
 const struct bpf_map_ops dev_map_hash_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc = dev_map_alloc,
     .map_free = dev_map_free,
     .map_get_next_key = dev_map_hash_get_next_key,
kernel/bpf/hashtab.c

@@ -9,6 +9,7 @@
 #include <linux/rculist_nulls.h>
 #include <linux/random.h>
 #include <uapi/linux/btf.h>
+#include <linux/rcupdate_trace.h>
 #include "percpu_freelist.h"
 #include "bpf_lru_list.h"
 #include "map_in_map.h"
@@ -577,8 +578,7 @@ static void *__htab_map_lookup_elem(struct bpf_map *map, void *key)
     struct htab_elem *l;
     u32 hash, key_size;
 
-    /* Must be called with rcu_read_lock. */
-    WARN_ON_ONCE(!rcu_read_lock_held());
+    WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held());
 
     key_size = map->key_size;
 
@@ -612,7 +612,7 @@ static void *htab_map_lookup_elem(struct bpf_map *map, void *key)
  * bpf_prog
  *   __htab_map_lookup_elem
  */
-static u32 htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
+static int htab_map_gen_lookup(struct bpf_map *map, struct bpf_insn *insn_buf)
 {
     struct bpf_insn *insn = insn_buf;
     const int ret = BPF_REG_0;
@@ -651,7 +651,7 @@ static void *htab_lru_map_lookup_elem_sys(struct bpf_map *map, void *key)
     return __htab_lru_map_lookup_elem(map, key, false);
 }
 
-static u32 htab_lru_map_gen_lookup(struct bpf_map *map,
+static int htab_lru_map_gen_lookup(struct bpf_map *map,
                                    struct bpf_insn *insn_buf)
 {
     struct bpf_insn *insn = insn_buf;
@@ -941,7 +941,7 @@ static int htab_map_update_elem(struct bpf_map *map, void *key, void *value,
         /* unknown flags */
         return -EINVAL;
 
-    WARN_ON_ONCE(!rcu_read_lock_held());
+    WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held());
 
     key_size = map->key_size;
 
@@ -1032,7 +1032,7 @@ static int htab_lru_map_update_elem(struct bpf_map *map, void *key, void *value,
         /* unknown flags */
         return -EINVAL;
 
-    WARN_ON_ONCE(!rcu_read_lock_held());
+    WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held());
 
     key_size = map->key_size;
 
@@ -1220,7 +1220,7 @@ static int htab_map_delete_elem(struct bpf_map *map, void *key)
     u32 hash, key_size;
     int ret = -ENOENT;
 
-    WARN_ON_ONCE(!rcu_read_lock_held());
+    WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held());
 
     key_size = map->key_size;
 
@@ -1252,7 +1252,7 @@ static int htab_lru_map_delete_elem(struct bpf_map *map, void *key)
     u32 hash, key_size;
     int ret = -ENOENT;
 
-    WARN_ON_ONCE(!rcu_read_lock_held());
+    WARN_ON_ONCE(!rcu_read_lock_held() && !rcu_read_lock_trace_held());
 
     key_size = map->key_size;
 
@@ -1803,6 +1803,7 @@ static const struct bpf_iter_seq_info iter_seq_info = {
 
 static int htab_map_btf_id;
 const struct bpf_map_ops htab_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc_check = htab_map_alloc_check,
     .map_alloc = htab_map_alloc,
     .map_free = htab_map_free,
@@ -1820,6 +1821,7 @@ const struct bpf_map_ops htab_map_ops = {
 
 static int htab_lru_map_btf_id;
 const struct bpf_map_ops htab_lru_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc_check = htab_map_alloc_check,
     .map_alloc = htab_map_alloc,
     .map_free = htab_map_free,
@@ -1940,6 +1942,7 @@ static void htab_percpu_map_seq_show_elem(struct bpf_map *map, void *key,
 
 static int htab_percpu_map_btf_id;
 const struct bpf_map_ops htab_percpu_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc_check = htab_map_alloc_check,
     .map_alloc = htab_map_alloc,
     .map_free = htab_map_free,
@@ -1956,6 +1959,7 @@ const struct bpf_map_ops htab_percpu_map_ops = {
 
 static int htab_lru_percpu_map_btf_id;
 const struct bpf_map_ops htab_lru_percpu_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc_check = htab_map_alloc_check,
     .map_alloc = htab_map_alloc,
     .map_free = htab_map_free,
@@ -2066,7 +2070,7 @@ static void *htab_of_map_lookup_elem(struct bpf_map *map, void *key)
     return READ_ONCE(*inner_map);
 }
 
-static u32 htab_of_map_gen_lookup(struct bpf_map *map,
+static int htab_of_map_gen_lookup(struct bpf_map *map,
                                   struct bpf_insn *insn_buf)
 {
     struct bpf_insn *insn = insn_buf;
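The rcu_read_lock_trace_held() variants above are what allow hash map lookups, updates and deletes from sleepable BPF programs, which run under rcu_read_lock_trace() rather than a classic RCU read-side critical section. A minimal sketch of such a program, assuming a libbpf build with a generated vmlinux.h; the map, key layout and LSM hook are illustrative choices, not taken from this merge:

    /* Sketch only: counts file opens per tgid from a sleepable LSM hook. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>

    struct {
        __uint(type, BPF_MAP_TYPE_HASH);
        __uint(max_entries, 1024);
        __type(key, u32);
        __type(value, u64);
    } open_count SEC(".maps");

    SEC("lsm.s/file_open")    /* the ".s" suffix marks the program sleepable */
    int BPF_PROG(count_open, struct file *file)
    {
        u32 tgid = bpf_get_current_pid_tgid() >> 32;
        u64 one = 1, *val;

        val = bpf_map_lookup_elem(&open_count, &tgid);
        if (val)
            __sync_fetch_and_add(val, 1);    /* atomic add on the map value */
        else
            bpf_map_update_elem(&open_count, &tgid, &one, BPF_ANY);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";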
kernel/bpf/helpers.c

@@ -601,6 +601,56 @@ const struct bpf_func_proto bpf_event_output_data_proto = {
     .arg5_type = ARG_CONST_SIZE_OR_ZERO,
 };
 
+BPF_CALL_3(bpf_copy_from_user, void *, dst, u32, size,
+           const void __user *, user_ptr)
+{
+    int ret = copy_from_user(dst, user_ptr, size);
+
+    if (unlikely(ret)) {
+        memset(dst, 0, size);
+        ret = -EFAULT;
+    }
+
+    return ret;
+}
+
+const struct bpf_func_proto bpf_copy_from_user_proto = {
+    .func       = bpf_copy_from_user,
+    .gpl_only   = false,
+    .ret_type   = RET_INTEGER,
+    .arg1_type  = ARG_PTR_TO_UNINIT_MEM,
+    .arg2_type  = ARG_CONST_SIZE_OR_ZERO,
+    .arg3_type  = ARG_ANYTHING,
+};
+
+BPF_CALL_2(bpf_per_cpu_ptr, const void *, ptr, u32, cpu)
+{
+    if (cpu >= nr_cpu_ids)
+        return (unsigned long)NULL;
+
+    return (unsigned long)per_cpu_ptr((const void __percpu *)ptr, cpu);
+}
+
+const struct bpf_func_proto bpf_per_cpu_ptr_proto = {
+    .func       = bpf_per_cpu_ptr,
+    .gpl_only   = false,
+    .ret_type   = RET_PTR_TO_MEM_OR_BTF_ID_OR_NULL,
+    .arg1_type  = ARG_PTR_TO_PERCPU_BTF_ID,
+    .arg2_type  = ARG_ANYTHING,
+};
+
+BPF_CALL_1(bpf_this_cpu_ptr, const void *, percpu_ptr)
+{
+    return (unsigned long)this_cpu_ptr((const void __percpu *)percpu_ptr);
+}
+
+const struct bpf_func_proto bpf_this_cpu_ptr_proto = {
+    .func       = bpf_this_cpu_ptr,
+    .gpl_only   = false,
+    .ret_type   = RET_PTR_TO_MEM_OR_BTF_ID,
+    .arg1_type  = ARG_PTR_TO_PERCPU_BTF_ID,
+};
+
 const struct bpf_func_proto bpf_get_current_task_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_proto __weak;
 const struct bpf_func_proto bpf_probe_read_user_str_proto __weak;
@@ -661,8 +711,16 @@ bpf_base_func_proto(enum bpf_func_id func_id)
         if (!perfmon_capable())
             return NULL;
         return bpf_get_trace_printk_proto();
+    case BPF_FUNC_snprintf_btf:
+        if (!perfmon_capable())
+            return NULL;
+        return &bpf_snprintf_btf_proto;
     case BPF_FUNC_jiffies64:
         return &bpf_jiffies64_proto;
+    case BPF_FUNC_bpf_per_cpu_ptr:
+        return &bpf_per_cpu_ptr_proto;
+    case BPF_FUNC_bpf_this_cpu_ptr:
+        return &bpf_this_cpu_ptr_proto;
     default:
         break;
     }
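bpf_copy_from_user() is the sleepable-only counterpart of bpf_probe_read_user(): it may fault and sleep, so the verifier accepts it only in sleepable programs, and it zeroes the destination on failure exactly as the helper body above does. A hedged sketch of calling it from a sleepable LSM hook; the hook and the choice of mm->arg_start as the user-space source address are illustrative assumptions:

    /* Sketch only: sleepable LSM program reading argv[0] of the exec'ing task. */
    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_tracing.h>
    #include <bpf/bpf_core_read.h>

    SEC("lsm.s/bprm_committed_creds")
    int BPF_PROG(dump_argv0, struct linux_binprm *bprm)
    {
        struct task_struct *task = (struct task_struct *)bpf_get_current_task();
        unsigned long arg_start;
        char buf[64] = {};

        /* arg_start is a user-space address stored in the kernel's mm_struct */
        arg_start = BPF_CORE_READ(task, mm, arg_start);

        /* sleepable-only helper: may fault while paging the string in */
        bpf_copy_from_user(buf, sizeof(buf), (const void *)arg_start);
        bpf_printk("exec argv[0]: %s", buf);
        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";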
kernel/bpf/inode.c

@@ -20,6 +20,7 @@
 #include <linux/filter.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
+#include "preload/bpf_preload.h"
 
 enum bpf_type {
     BPF_TYPE_UNSPEC = 0,
@@ -371,9 +372,10 @@ static struct dentry *
 bpf_lookup(struct inode *dir, struct dentry *dentry, unsigned flags)
 {
     /* Dots in names (e.g. "/sys/fs/bpf/foo.bar") are reserved for future
-     * extensions.
+     * extensions. That allows popoulate_bpffs() create special files.
      */
-    if (strchr(dentry->d_name.name, '.'))
+    if ((dir->i_mode & S_IALLUGO) &&
+        strchr(dentry->d_name.name, '.'))
         return ERR_PTR(-EPERM);
 
     return simple_lookup(dir, dentry, flags);
@@ -411,6 +413,27 @@ static const struct inode_operations bpf_dir_iops = {
     .unlink     = simple_unlink,
 };
 
+/* pin iterator link into bpffs */
+static int bpf_iter_link_pin_kernel(struct dentry *parent,
+                                    const char *name, struct bpf_link *link)
+{
+    umode_t mode = S_IFREG | S_IRUSR;
+    struct dentry *dentry;
+    int ret;
+
+    inode_lock(parent->d_inode);
+    dentry = lookup_one_len(name, parent, strlen(name));
+    if (IS_ERR(dentry)) {
+        inode_unlock(parent->d_inode);
+        return PTR_ERR(dentry);
+    }
+    ret = bpf_mkobj_ops(dentry, mode, link, &bpf_link_iops,
+                        &bpf_iter_fops);
+    dput(dentry);
+    inode_unlock(parent->d_inode);
+    return ret;
+}
+
 static int bpf_obj_do_pin(const char __user *pathname, void *raw,
                           enum bpf_type type)
 {
@@ -640,6 +663,91 @@ static int bpf_parse_param(struct fs_context *fc, struct fs_parameter *param)
     return 0;
 }
 
+struct bpf_preload_ops *bpf_preload_ops;
+EXPORT_SYMBOL_GPL(bpf_preload_ops);
+
+static bool bpf_preload_mod_get(void)
+{
+    /* If bpf_preload.ko wasn't loaded earlier then load it now.
+     * When bpf_preload is built into vmlinux the module's __init
+     * function will populate it.
+     */
+    if (!bpf_preload_ops) {
+        request_module("bpf_preload");
+        if (!bpf_preload_ops)
+            return false;
+    }
+    /* And grab the reference, so the module doesn't disappear while the
+     * kernel is interacting with the kernel module and its UMD.
+     */
+    if (!try_module_get(bpf_preload_ops->owner)) {
+        pr_err("bpf_preload module get failed.\n");
+        return false;
+    }
+    return true;
+}
+
+static void bpf_preload_mod_put(void)
+{
+    if (bpf_preload_ops)
+        /* now user can "rmmod bpf_preload" if necessary */
+        module_put(bpf_preload_ops->owner);
+}
+
+static DEFINE_MUTEX(bpf_preload_lock);
+
+static int populate_bpffs(struct dentry *parent)
+{
+    struct bpf_preload_info objs[BPF_PRELOAD_LINKS] = {};
+    struct bpf_link *links[BPF_PRELOAD_LINKS] = {};
+    int err = 0, i;
+
+    /* grab the mutex to make sure the kernel interactions with bpf_preload
+     * UMD are serialized
+     */
+    mutex_lock(&bpf_preload_lock);
+
+    /* if bpf_preload.ko wasn't built into vmlinux then load it */
+    if (!bpf_preload_mod_get())
+        goto out;
+
+    if (!bpf_preload_ops->info.tgid) {
+        /* preload() will start UMD that will load BPF iterator programs */
+        err = bpf_preload_ops->preload(objs);
+        if (err)
+            goto out_put;
+        for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+            links[i] = bpf_link_by_id(objs[i].link_id);
+            if (IS_ERR(links[i])) {
+                err = PTR_ERR(links[i]);
+                goto out_put;
+            }
+        }
+        for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
+            err = bpf_iter_link_pin_kernel(parent,
+                                           objs[i].link_name, links[i]);
+            if (err)
+                goto out_put;
+            /* do not unlink successfully pinned links even
+             * if later link fails to pin
+             */
+            links[i] = NULL;
+        }
+        /* finish() will tell UMD process to exit */
+        err = bpf_preload_ops->finish();
+        if (err)
+            goto out_put;
+    }
+out_put:
+    bpf_preload_mod_put();
+out:
+    mutex_unlock(&bpf_preload_lock);
+    for (i = 0; i < BPF_PRELOAD_LINKS && err; i++)
+        if (!IS_ERR_OR_NULL(links[i]))
+            bpf_link_put(links[i]);
+    return err;
+}
+
 static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
 {
     static const struct tree_descr bpf_rfiles[] = { { "" } };
@@ -656,8 +764,8 @@ static int bpf_fill_super(struct super_block *sb, struct fs_context *fc)
     inode = sb->s_root->d_inode;
     inode->i_op = &bpf_dir_iops;
     inode->i_mode &= ~S_IALLUGO;
+    populate_bpffs(sb->s_root);
     inode->i_mode |= S_ISVTX | opts->mode;
-
     return 0;
 }
 
@@ -707,6 +815,8 @@ static int __init bpf_init(void)
 {
     int ret;
 
+    mutex_init(&bpf_preload_lock);
+
     ret = sysfs_create_mount_point(fs_kobj, "bpf");
     if (ret)
         return ret;
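populate_bpffs() ends up pinning the two iterator links the user mode driver hands back (named maps.debug and progs.debug further down in this series) directly under the bpffs root, so any process allowed to read the mount point gets a human-readable dump of the loaded maps and programs. A small user-space sketch, assuming the default /sys/fs/bpf mount point:

    /* Sketch: read the iterator output the preload module pins in bpffs. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;
        int fd = open("/sys/fs/bpf/progs.debug", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);    /* same text "cat" would print */
        close(fd);
        return 0;
    }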
kernel/bpf/lpm_trie.c

@@ -732,6 +732,7 @@ static int trie_check_btf(const struct bpf_map *map,
 
 static int trie_map_btf_id;
 const struct bpf_map_ops trie_map_ops = {
+    .map_meta_equal = bpf_map_meta_equal,
     .map_alloc = trie_alloc,
     .map_free = trie_free,
     .map_get_next_key = trie_get_next_key,
kernel/bpf/map_in_map.c

@@ -17,23 +17,17 @@ struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd)
     if (IS_ERR(inner_map))
         return inner_map;
 
-    /* prog_array->aux->{type,jited} is a runtime binding.
-     * Doing static check alone in the verifier is not enough.
-     */
-    if (inner_map->map_type == BPF_MAP_TYPE_PROG_ARRAY ||
-        inner_map->map_type == BPF_MAP_TYPE_CGROUP_STORAGE ||
-        inner_map->map_type == BPF_MAP_TYPE_PERCPU_CGROUP_STORAGE ||
-        inner_map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
-        fdput(f);
-        return ERR_PTR(-ENOTSUPP);
-    }
-
     /* Does not support >1 level map-in-map */
     if (inner_map->inner_map_meta) {
         fdput(f);
         return ERR_PTR(-EINVAL);
     }
 
+    if (!inner_map->ops->map_meta_equal) {
+        fdput(f);
+        return ERR_PTR(-ENOTSUPP);
+    }
+
     if (map_value_has_spin_lock(inner_map)) {
         fdput(f);
         return ERR_PTR(-ENOTSUPP);
@@ -81,15 +75,14 @@ bool bpf_map_meta_equal(const struct bpf_map *meta0,
     return meta0->map_type == meta1->map_type &&
            meta0->key_size == meta1->key_size &&
            meta0->value_size == meta1->value_size &&
-           meta0->map_flags == meta1->map_flags &&
-           meta0->max_entries == meta1->max_entries;
+           meta0->map_flags == meta1->map_flags;
 }
 
 void *bpf_map_fd_get_ptr(struct bpf_map *map,
                          struct file *map_file /* not used */,
                          int ufd)
 {
-    struct bpf_map *inner_map;
+    struct bpf_map *inner_map, *inner_map_meta;
     struct fd f;
 
     f = fdget(ufd);
@@ -97,7 +90,8 @@ void *bpf_map_fd_get_ptr(struct bpf_map *map,
     if (IS_ERR(inner_map))
         return inner_map;
 
-    if (bpf_map_meta_equal(map->inner_map_meta, inner_map))
+    inner_map_meta = map->inner_map_meta;
+    if (inner_map_meta->ops->map_meta_equal(inner_map_meta, inner_map))
         bpf_map_inc(inner_map);
     else
         inner_map = ERR_PTR(-EINVAL);
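With map_meta_equal delegated to the inner map's ops, the generic bpf_map_meta_equal() above no longer compares max_entries, so inner maps of an outer hash-of-maps may differ in size as long as type, key/value size and flags still match (array-type inner maps keep the stricter check through their own callback). A hedged sketch using the libbpf low-level calls of that era (bpf_create_map()/bpf_create_map_in_map(), since superseded); names, sizes and the lack of error handling are illustrative:

    #include <bpf/bpf.h>

    int build_outer_map(void)
    {
        int small, big, outer, key = 1;

        small = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(int), sizeof(long), 16, 0);
        big   = bpf_create_map(BPF_MAP_TYPE_HASH, sizeof(int), sizeof(long), 4096, 0);

        /* the outer map's inner-map meta is taken from "small" ... */
        outer = bpf_create_map_in_map(BPF_MAP_TYPE_HASH_OF_MAPS, "outer",
                                      sizeof(int), small, 8, 0);

        /* ... yet inserting a larger hash map is now accepted, because only
         * type, key_size, value_size and flags have to match. */
        bpf_map_update_elem(outer, &key, &big, BPF_ANY);
        return outer;
    }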
kernel/bpf/map_in_map.h

@@ -11,8 +11,6 @@ struct bpf_map;
 
 struct bpf_map *bpf_map_meta_alloc(int inner_map_ufd);
 void bpf_map_meta_free(struct bpf_map *map_meta);
-bool bpf_map_meta_equal(const struct bpf_map *meta0,
-                        const struct bpf_map *meta1);
 void *bpf_map_fd_get_ptr(struct bpf_map *map, struct file *map_file,
                          int ufd);
 void bpf_map_fd_put_ptr(void *ptr);
kernel/bpf/map_iter.c

@@ -149,6 +149,19 @@ static void bpf_iter_detach_map(struct bpf_iter_aux_info *aux)
     bpf_map_put_with_uref(aux->map);
 }
 
+void bpf_iter_map_show_fdinfo(const struct bpf_iter_aux_info *aux,
+                              struct seq_file *seq)
+{
+    seq_printf(seq, "map_id:\t%u\n", aux->map->id);
+}
+
+int bpf_iter_map_fill_link_info(const struct bpf_iter_aux_info *aux,
+                                struct bpf_link_info *info)
+{
+    info->iter.map.map_id = aux->map->id;
+    return 0;
+}
+
 DEFINE_BPF_ITER_FUNC(bpf_map_elem, struct bpf_iter_meta *meta,
                      struct bpf_map *map, void *key, void *value)
 
@@ -156,6 +169,8 @@ static const struct bpf_iter_reg bpf_map_elem_reg_info = {
     .target             = "bpf_map_elem",
     .attach_target      = bpf_iter_attach_map,
     .detach_target      = bpf_iter_detach_map,
+    .show_fdinfo        = bpf_iter_map_show_fdinfo,
+    .fill_link_info     = bpf_iter_map_fill_link_info,
     .ctx_arg_info_size  = 2,
     .ctx_arg_info       = {
         { offsetof(struct bpf_iter__bpf_map_elem, key),
kernel/bpf/percpu_freelist.c

@@ -17,6 +17,8 @@ int pcpu_freelist_init(struct pcpu_freelist *s)
         raw_spin_lock_init(&head->lock);
         head->first = NULL;
     }
+    raw_spin_lock_init(&s->extralist.lock);
+    s->extralist.first = NULL;
     return 0;
 }
 
@@ -40,12 +42,50 @@ static inline void ___pcpu_freelist_push(struct pcpu_freelist_head *head,
     raw_spin_unlock(&head->lock);
 }
 
+static inline bool pcpu_freelist_try_push_extra(struct pcpu_freelist *s,
+                                                struct pcpu_freelist_node *node)
+{
+    if (!raw_spin_trylock(&s->extralist.lock))
+        return false;
+
+    pcpu_freelist_push_node(&s->extralist, node);
+    raw_spin_unlock(&s->extralist.lock);
+    return true;
+}
+
+static inline void ___pcpu_freelist_push_nmi(struct pcpu_freelist *s,
+                                             struct pcpu_freelist_node *node)
+{
+    int cpu, orig_cpu;
+
+    orig_cpu = cpu = raw_smp_processor_id();
+    while (1) {
+        struct pcpu_freelist_head *head;
+
+        head = per_cpu_ptr(s->freelist, cpu);
+        if (raw_spin_trylock(&head->lock)) {
+            pcpu_freelist_push_node(head, node);
+            raw_spin_unlock(&head->lock);
+            return;
+        }
+        cpu = cpumask_next(cpu, cpu_possible_mask);
+        if (cpu >= nr_cpu_ids)
+            cpu = 0;
+
+        /* cannot lock any per cpu lock, try extralist */
+        if (cpu == orig_cpu &&
+            pcpu_freelist_try_push_extra(s, node))
+            return;
+    }
+}
+
 void __pcpu_freelist_push(struct pcpu_freelist *s,
                           struct pcpu_freelist_node *node)
 {
-    struct pcpu_freelist_head *head = this_cpu_ptr(s->freelist);
-
-    ___pcpu_freelist_push(head, node);
+    if (in_nmi())
+        ___pcpu_freelist_push_nmi(s, node);
+    else
+        ___pcpu_freelist_push(this_cpu_ptr(s->freelist), node);
 }
 
 void pcpu_freelist_push(struct pcpu_freelist *s,
@@ -81,7 +121,7 @@ again:
     }
 }
 
-struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
+static struct pcpu_freelist_node *___pcpu_freelist_pop(struct pcpu_freelist *s)
 {
     struct pcpu_freelist_head *head;
     struct pcpu_freelist_node *node;
@@ -102,8 +142,59 @@ struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
         if (cpu >= nr_cpu_ids)
             cpu = 0;
         if (cpu == orig_cpu)
-            return NULL;
+            break;
     }
+
+    /* per cpu lists are all empty, try extralist */
+    raw_spin_lock(&s->extralist.lock);
+    node = s->extralist.first;
+    if (node)
+        s->extralist.first = node->next;
+    raw_spin_unlock(&s->extralist.lock);
+    return node;
 }
+
+static struct pcpu_freelist_node *
+___pcpu_freelist_pop_nmi(struct pcpu_freelist *s)
+{
+    struct pcpu_freelist_head *head;
+    struct pcpu_freelist_node *node;
+    int orig_cpu, cpu;
+
+    orig_cpu = cpu = raw_smp_processor_id();
+    while (1) {
+        head = per_cpu_ptr(s->freelist, cpu);
+        if (raw_spin_trylock(&head->lock)) {
+            node = head->first;
+            if (node) {
+                head->first = node->next;
+                raw_spin_unlock(&head->lock);
+                return node;
+            }
+            raw_spin_unlock(&head->lock);
+        }
+        cpu = cpumask_next(cpu, cpu_possible_mask);
+        if (cpu >= nr_cpu_ids)
+            cpu = 0;
+        if (cpu == orig_cpu)
+            break;
+    }
+
+    /* cannot pop from per cpu lists, try extralist */
+    if (!raw_spin_trylock(&s->extralist.lock))
+        return NULL;
+    node = s->extralist.first;
+    if (node)
+        s->extralist.first = node->next;
+    raw_spin_unlock(&s->extralist.lock);
+    return node;
+}
+
+struct pcpu_freelist_node *__pcpu_freelist_pop(struct pcpu_freelist *s)
+{
+    if (in_nmi())
+        return ___pcpu_freelist_pop_nmi(s);
+    return ___pcpu_freelist_pop(s);
+}
 
 struct pcpu_freelist_node *pcpu_freelist_pop(struct pcpu_freelist *s)
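The rework above follows one rule: in NMI context never spin on a lock, only trylock each per-CPU list in turn and fall back to the shared extralist. A small user-space sketch of the same pattern with pthread spinlocks, purely illustrative and not kernel code:

    #include <pthread.h>
    #include <stddef.h>

    #define NLISTS 4

    struct node { struct node *next; };
    struct list { pthread_spinlock_t lock; struct node *first; };

    static struct list lists[NLISTS], extra;    /* "extra" plays the extralist role */

    /* Never blocks: returns 0 on success, -1 if every lock was contended. */
    static int push_nowait(struct node *n, int start)
    {
        for (int i = 0; i < NLISTS; i++) {
            struct list *l = &lists[(start + i) % NLISTS];

            if (pthread_spin_trylock(&l->lock) == 0) {
                n->next = l->first;
                l->first = n;
                pthread_spin_unlock(&l->lock);
                return 0;
            }
        }
        if (pthread_spin_trylock(&extra.lock) == 0) {    /* fallback list */
            n->next = extra.first;
            extra.first = n;
            pthread_spin_unlock(&extra.lock);
            return 0;
        }
        return -1;
    }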
kernel/bpf/percpu_freelist.h

@@ -13,6 +13,7 @@ struct pcpu_freelist_head {
 
 struct pcpu_freelist {
     struct pcpu_freelist_head __percpu *freelist;
+    struct pcpu_freelist_head extralist;
 };
 
 struct pcpu_freelist_node {
kernel/bpf/preload/.gitignore (new file, 4 lines, vendored)
@@ -0,0 +1,4 @@
/FEATURE-DUMP.libbpf
/bpf_helper_defs.h
/feature
/bpf_preload_umd
kernel/bpf/preload/Kconfig (new file, 26 lines)
@@ -0,0 +1,26 @@
# SPDX-License-Identifier: GPL-2.0-only
config USERMODE_DRIVER
	bool
	default n

menuconfig BPF_PRELOAD
	bool "Preload BPF file system with kernel specific program and map iterators"
	depends on BPF
	# The dependency on !COMPILE_TEST prevents it from being enabled
	# in allmodconfig or allyesconfig configurations
	depends on !COMPILE_TEST
	select USERMODE_DRIVER
	help
	  This builds kernel module with several embedded BPF programs that are
	  pinned into BPF FS mount point as human readable files that are
	  useful in debugging and introspection of BPF programs and maps.

if BPF_PRELOAD
config BPF_PRELOAD_UMD
	tristate "bpf_preload kernel module with user mode driver"
	depends on CC_CAN_LINK
	depends on m || CC_CAN_LINK_STATIC
	default m
	help
	  This builds bpf_preload kernel module with embedded user mode driver.
endif
kernel/bpf/preload/Makefile (new file, 25 lines)
@@ -0,0 +1,25 @@
# SPDX-License-Identifier: GPL-2.0

LIBBPF_SRCS = $(srctree)/tools/lib/bpf/
LIBBPF_A = $(obj)/libbpf.a
LIBBPF_OUT = $(abspath $(obj))

$(LIBBPF_A):
	$(Q)$(MAKE) -C $(LIBBPF_SRCS) OUTPUT=$(LIBBPF_OUT)/ $(LIBBPF_OUT)/libbpf.a

userccflags += -I $(srctree)/tools/include/ -I $(srctree)/tools/include/uapi \
	-I $(srctree)/tools/lib/ -Wno-unused-result

userprogs := bpf_preload_umd

clean-files := $(userprogs) bpf_helper_defs.h FEATURE-DUMP.libbpf staticobjs/ feature/

bpf_preload_umd-objs := iterators/iterators.o
bpf_preload_umd-userldlibs := $(LIBBPF_A) -lelf -lz

$(obj)/bpf_preload_umd: $(LIBBPF_A)

$(obj)/bpf_preload_umd_blob.o: $(obj)/bpf_preload_umd

obj-$(CONFIG_BPF_PRELOAD_UMD) += bpf_preload.o
bpf_preload-objs += bpf_preload_kern.o bpf_preload_umd_blob.o
kernel/bpf/preload/bpf_preload.h (new file, 16 lines)
@@ -0,0 +1,16 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _BPF_PRELOAD_H
#define _BPF_PRELOAD_H

#include <linux/usermode_driver.h>
#include "iterators/bpf_preload_common.h"

struct bpf_preload_ops {
	struct umd_info info;
	int (*preload)(struct bpf_preload_info *);
	int (*finish)(void);
	struct module *owner;
};
extern struct bpf_preload_ops *bpf_preload_ops;
#define BPF_PRELOAD_LINKS 2
#endif
kernel/bpf/preload/bpf_preload_kern.c (new file, 91 lines)
@@ -0,0 +1,91 @@
// SPDX-License-Identifier: GPL-2.0
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/init.h>
#include <linux/module.h>
#include <linux/pid.h>
#include <linux/fs.h>
#include <linux/sched/signal.h>
#include "bpf_preload.h"

extern char bpf_preload_umd_start;
extern char bpf_preload_umd_end;

static int preload(struct bpf_preload_info *obj);
static int finish(void);

static struct bpf_preload_ops umd_ops = {
	.info.driver_name = "bpf_preload",
	.preload = preload,
	.finish = finish,
	.owner = THIS_MODULE,
};

static int preload(struct bpf_preload_info *obj)
{
	int magic = BPF_PRELOAD_START;
	loff_t pos = 0;
	int i, err;
	ssize_t n;

	err = fork_usermode_driver(&umd_ops.info);
	if (err)
		return err;

	/* send the start magic to let UMD proceed with loading BPF progs */
	n = kernel_write(umd_ops.info.pipe_to_umh,
			 &magic, sizeof(magic), &pos);
	if (n != sizeof(magic))
		return -EPIPE;

	/* receive bpf_link IDs and names from UMD */
	pos = 0;
	for (i = 0; i < BPF_PRELOAD_LINKS; i++) {
		n = kernel_read(umd_ops.info.pipe_from_umh,
				&obj[i], sizeof(*obj), &pos);
		if (n != sizeof(*obj))
			return -EPIPE;
	}
	return 0;
}

static int finish(void)
{
	int magic = BPF_PRELOAD_END;
	struct pid *tgid;
	loff_t pos = 0;
	ssize_t n;

	/* send the last magic to UMD. It will do a normal exit. */
	n = kernel_write(umd_ops.info.pipe_to_umh,
			 &magic, sizeof(magic), &pos);
	if (n != sizeof(magic))
		return -EPIPE;
	tgid = umd_ops.info.tgid;
	wait_event(tgid->wait_pidfd, thread_group_exited(tgid));
	umd_ops.info.tgid = NULL;
	return 0;
}

static int __init load_umd(void)
{
	int err;

	err = umd_load_blob(&umd_ops.info, &bpf_preload_umd_start,
			    &bpf_preload_umd_end - &bpf_preload_umd_start);
	if (err)
		return err;
	bpf_preload_ops = &umd_ops;
	return err;
}

static void __exit fini_umd(void)
{
	bpf_preload_ops = NULL;
	/* kill UMD in case it's still there due to earlier error */
	kill_pid(umd_ops.info.tgid, SIGKILL, 1);
	umd_ops.info.tgid = NULL;
	umd_unload_blob(&umd_ops.info);
}
late_initcall(load_umd);
module_exit(fini_umd);
MODULE_LICENSE("GPL");
kernel/bpf/preload/bpf_preload_umd_blob.S (new file, 7 lines)
@@ -0,0 +1,7 @@
/* SPDX-License-Identifier: GPL-2.0 */
	.section .init.rodata, "a"
	.global bpf_preload_umd_start
bpf_preload_umd_start:
	.incbin "kernel/bpf/preload/bpf_preload_umd"
	.global bpf_preload_umd_end
bpf_preload_umd_end:
kernel/bpf/preload/iterators/.gitignore (new file, 2 lines, vendored)
@@ -0,0 +1,2 @@
# SPDX-License-Identifier: GPL-2.0-only
/.output
kernel/bpf/preload/iterators/Makefile (new file, 57 lines)
@@ -0,0 +1,57 @@
# SPDX-License-Identifier: GPL-2.0
OUTPUT := .output
CLANG ?= clang
LLC ?= llc
LLVM_STRIP ?= llvm-strip
DEFAULT_BPFTOOL := $(OUTPUT)/sbin/bpftool
BPFTOOL ?= $(DEFAULT_BPFTOOL)
LIBBPF_SRC := $(abspath ../../../../tools/lib/bpf)
BPFOBJ := $(OUTPUT)/libbpf.a
BPF_INCLUDE := $(OUTPUT)
INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../../../tools/lib) \
	-I$(abspath ../../../../tools/include/uapi)
CFLAGS := -g -Wall

abs_out := $(abspath $(OUTPUT))
ifeq ($(V),1)
Q =
msg =
else
Q = @
msg = @printf '  %-8s %s%s\n' "$(1)" "$(notdir $(2))" "$(if $(3), $(3))";
MAKEFLAGS += --no-print-directory
submake_extras := feature_display=0
endif

.DELETE_ON_ERROR:

.PHONY: all clean

all: iterators.skel.h

clean:
	$(call msg,CLEAN)
	$(Q)rm -rf $(OUTPUT) iterators

iterators.skel.h: $(OUTPUT)/iterators.bpf.o | $(BPFTOOL)
	$(call msg,GEN-SKEL,$@)
	$(Q)$(BPFTOOL) gen skeleton $< > $@


$(OUTPUT)/iterators.bpf.o: iterators.bpf.c $(BPFOBJ) | $(OUTPUT)
	$(call msg,BPF,$@)
	$(Q)$(CLANG) -g -O2 -target bpf $(INCLUDES) \
		 -c $(filter %.c,$^) -o $@ && \
	$(LLVM_STRIP) -g $@

$(OUTPUT):
	$(call msg,MKDIR,$@)
	$(Q)mkdir -p $(OUTPUT)

$(BPFOBJ): $(wildcard $(LIBBPF_SRC)/*.[ch] $(LIBBPF_SRC)/Makefile) | $(OUTPUT)
	$(Q)$(MAKE) $(submake_extras) -C $(LIBBPF_SRC) \
		    OUTPUT=$(abspath $(dir $@))/ $(abspath $@)

$(DEFAULT_BPFTOOL):
	$(Q)$(MAKE) $(submake_extras) -C ../../../../tools/bpf/bpftool \
		    prefix= OUTPUT=$(abs_out)/ DESTDIR=$(abs_out) install
kernel/bpf/preload/iterators/README (new file, 4 lines)
@@ -0,0 +1,4 @@
WARNING:
If you change "iterators.bpf.c" do "make -j" in this directory to rebuild "iterators.skel.h".
Make sure to have clang 10 installed.
See Documentation/bpf/bpf_devel_QA.rst
kernel/bpf/preload/iterators/bpf_preload_common.h (new file, 13 lines)
@@ -0,0 +1,13 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _BPF_PRELOAD_COMMON_H
#define _BPF_PRELOAD_COMMON_H

#define BPF_PRELOAD_START 0x5555
#define BPF_PRELOAD_END 0xAAAA

struct bpf_preload_info {
	char link_name[16];
	int link_id;
};

#endif
kernel/bpf/preload/iterators/iterators.bpf.c (new file, 114 lines)
@@ -0,0 +1,114 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2020 Facebook */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

#pragma clang attribute push (__attribute__((preserve_access_index)), apply_to = record)
struct seq_file;
struct bpf_iter_meta {
	struct seq_file *seq;
	__u64 session_id;
	__u64 seq_num;
};

struct bpf_map {
	__u32 id;
	char name[16];
	__u32 max_entries;
};

struct bpf_iter__bpf_map {
	struct bpf_iter_meta *meta;
	struct bpf_map *map;
};

struct btf_type {
	__u32 name_off;
};

struct btf_header {
	__u32 str_len;
};

struct btf {
	const char *strings;
	struct btf_type **types;
	struct btf_header hdr;
};

struct bpf_prog_aux {
	__u32 id;
	char name[16];
	const char *attach_func_name;
	struct bpf_prog *dst_prog;
	struct bpf_func_info *func_info;
	struct btf *btf;
};

struct bpf_prog {
	struct bpf_prog_aux *aux;
};

struct bpf_iter__bpf_prog {
	struct bpf_iter_meta *meta;
	struct bpf_prog *prog;
};
#pragma clang attribute pop

static const char *get_name(struct btf *btf, long btf_id, const char *fallback)
{
	struct btf_type **types, *t;
	unsigned int name_off;
	const char *str;

	if (!btf)
		return fallback;
	str = btf->strings;
	types = btf->types;
	bpf_probe_read_kernel(&t, sizeof(t), types + btf_id);
	name_off = BPF_CORE_READ(t, name_off);
	if (name_off >= btf->hdr.str_len)
		return fallback;
	return str + name_off;
}

SEC("iter/bpf_map")
int dump_bpf_map(struct bpf_iter__bpf_map *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	__u64 seq_num = ctx->meta->seq_num;
	struct bpf_map *map = ctx->map;

	if (!map)
		return 0;

	if (seq_num == 0)
		BPF_SEQ_PRINTF(seq, "  id name             max_entries\n");

	BPF_SEQ_PRINTF(seq, "%4u %-16s%6d\n", map->id, map->name, map->max_entries);
	return 0;
}

SEC("iter/bpf_prog")
int dump_bpf_prog(struct bpf_iter__bpf_prog *ctx)
{
	struct seq_file *seq = ctx->meta->seq;
	__u64 seq_num = ctx->meta->seq_num;
	struct bpf_prog *prog = ctx->prog;
	struct bpf_prog_aux *aux;

	if (!prog)
		return 0;

	aux = prog->aux;
	if (seq_num == 0)
		BPF_SEQ_PRINTF(seq, "  id name             attached\n");

	BPF_SEQ_PRINTF(seq, "%4u %-16s %s %s\n", aux->id,
		       get_name(aux->btf, aux->func_info[0].type_id, aux->name),
		       aux->attach_func_name, aux->dst_prog->aux->name);
	return 0;
}
char LICENSE[] SEC("license") = "GPL";
kernel/bpf/preload/iterators/iterators.c (new file, 94 lines)
@@ -0,0 +1,94 @@
// SPDX-License-Identifier: GPL-2.0
/* Copyright (c) 2020 Facebook */
#include <argp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/resource.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>
#include <sys/mount.h>
#include "iterators.skel.h"
#include "bpf_preload_common.h"

int to_kernel = -1;
int from_kernel = 0;

static int send_link_to_kernel(struct bpf_link *link, const char *link_name)
{
	struct bpf_preload_info obj = {};
	struct bpf_link_info info = {};
	__u32 info_len = sizeof(info);
	int err;

	err = bpf_obj_get_info_by_fd(bpf_link__fd(link), &info, &info_len);
	if (err)
		return err;
	obj.link_id = info.id;
	if (strlen(link_name) >= sizeof(obj.link_name))
		return -E2BIG;
	strcpy(obj.link_name, link_name);
	if (write(to_kernel, &obj, sizeof(obj)) != sizeof(obj))
		return -EPIPE;
	return 0;
}

int main(int argc, char **argv)
{
	struct rlimit rlim = { RLIM_INFINITY, RLIM_INFINITY };
	struct iterators_bpf *skel;
	int err, magic;
	int debug_fd;

	debug_fd = open("/dev/console", O_WRONLY | O_NOCTTY | O_CLOEXEC);
	if (debug_fd < 0)
		return 1;
	to_kernel = dup(1);
	close(1);
	dup(debug_fd);
	/* now stdin and stderr point to /dev/console */

	read(from_kernel, &magic, sizeof(magic));
	if (magic != BPF_PRELOAD_START) {
		printf("bad start magic %d\n", magic);
		return 1;
	}
	setrlimit(RLIMIT_MEMLOCK, &rlim);
	/* libbpf opens BPF object and loads it into the kernel */
	skel = iterators_bpf__open_and_load();
	if (!skel) {
		/* iterators.skel.h is little endian.
		 * libbpf doesn't support automatic little->big conversion
		 * of BPF bytecode yet.
		 * The program load will fail in such case.
		 */
		printf("Failed load could be due to wrong endianness\n");
		return 1;
	}
	err = iterators_bpf__attach(skel);
	if (err)
		goto cleanup;

	/* send two bpf_link IDs with names to the kernel */
	err = send_link_to_kernel(skel->links.dump_bpf_map, "maps.debug");
	if (err)
		goto cleanup;
	err = send_link_to_kernel(skel->links.dump_bpf_prog, "progs.debug");
	if (err)
		goto cleanup;

	/* The kernel will proceed with pinnging the links in bpffs.
	 * UMD will wait on read from pipe.
	 */
	read(from_kernel, &magic, sizeof(magic));
	if (magic != BPF_PRELOAD_END) {
		printf("bad final magic %d\n", magic);
		err = -EINVAL;
	}
cleanup:
	iterators_bpf__destroy(skel);

	return err != 0;
}
kernel/bpf/preload/iterators/iterators.skel.h (new file, 412 lines)
@@ -0,0 +1,412 @@
/* SPDX-License-Identifier: (LGPL-2.1 OR BSD-2-Clause) */

/* THIS FILE IS AUTOGENERATED! */
#ifndef __ITERATORS_BPF_SKEL_H__
#define __ITERATORS_BPF_SKEL_H__

#include <stdlib.h>
#include <bpf/libbpf.h>

struct iterators_bpf {
	struct bpf_object_skeleton *skeleton;
	struct bpf_object *obj;
	struct {
		struct bpf_map *rodata;
	} maps;
	struct {
		struct bpf_program *dump_bpf_map;
		struct bpf_program *dump_bpf_prog;
	} progs;
	struct {
		struct bpf_link *dump_bpf_map;
		struct bpf_link *dump_bpf_prog;
	} links;
	struct iterators_bpf__rodata {
		char dump_bpf_map____fmt[35];
		char dump_bpf_map____fmt_1[14];
		char dump_bpf_prog____fmt[32];
		char dump_bpf_prog____fmt_2[17];
	} *rodata;
};

static void
iterators_bpf__destroy(struct iterators_bpf *obj)
{
	if (!obj)
		return;
	if (obj->skeleton)
		bpf_object__destroy_skeleton(obj->skeleton);
	free(obj);
}

static inline int
iterators_bpf__create_skeleton(struct iterators_bpf *obj);

static inline struct iterators_bpf *
iterators_bpf__open_opts(const struct bpf_object_open_opts *opts)
{
	struct iterators_bpf *obj;

	obj = (struct iterators_bpf *)calloc(1, sizeof(*obj));
	if (!obj)
		return NULL;
	if (iterators_bpf__create_skeleton(obj))
		goto err;
	if (bpf_object__open_skeleton(obj->skeleton, opts))
		goto err;

	return obj;
err:
	iterators_bpf__destroy(obj);
	return NULL;
}

static inline struct iterators_bpf *
iterators_bpf__open(void)
{
	return iterators_bpf__open_opts(NULL);
}

static inline int
iterators_bpf__load(struct iterators_bpf *obj)
{
	return bpf_object__load_skeleton(obj->skeleton);
}

static inline struct iterators_bpf *
iterators_bpf__open_and_load(void)
{
	struct iterators_bpf *obj;

	obj = iterators_bpf__open();
	if (!obj)
		return NULL;
	if (iterators_bpf__load(obj)) {
		iterators_bpf__destroy(obj);
		return NULL;
	}
	return obj;
}

static inline int
iterators_bpf__attach(struct iterators_bpf *obj)
{
	return bpf_object__attach_skeleton(obj->skeleton);
}

static inline void
iterators_bpf__detach(struct iterators_bpf *obj)
{
	return bpf_object__detach_skeleton(obj->skeleton);
}

static inline int
iterators_bpf__create_skeleton(struct iterators_bpf *obj)
{
	struct bpf_object_skeleton *s;

	s = (struct bpf_object_skeleton *)calloc(1, sizeof(*s));
	if (!s)
		return -1;
	obj->skeleton = s;

	s->sz = sizeof(*s);
	s->name = "iterators_bpf";
	s->obj = &obj->obj;

	/* maps */
	s->map_cnt = 1;
	s->map_skel_sz = sizeof(*s->maps);
	s->maps = (struct bpf_map_skeleton *)calloc(s->map_cnt, s->map_skel_sz);
	if (!s->maps)
		goto err;

	s->maps[0].name = "iterator.rodata";
	s->maps[0].map = &obj->maps.rodata;
	s->maps[0].mmaped = (void **)&obj->rodata;

	/* programs */
	s->prog_cnt = 2;
	s->prog_skel_sz = sizeof(*s->progs);
	s->progs = (struct bpf_prog_skeleton *)calloc(s->prog_cnt, s->prog_skel_sz);
	if (!s->progs)
		goto err;

	s->progs[0].name = "dump_bpf_map";
	s->progs[0].prog = &obj->progs.dump_bpf_map;
	s->progs[0].link = &obj->links.dump_bpf_map;

	s->progs[1].name = "dump_bpf_prog";
	s->progs[1].prog = &obj->progs.dump_bpf_prog;
	s->progs[1].link = &obj->links.dump_bpf_prog;

	s->data_sz = 7176;
	s->data = (void *)"\
[~7 KB of "\x.."-escaped bytes follow here: the autogenerated, embedded iterators BPF object produced by "bpftool gen skeleton"]
|
||||
\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\0\x64\
|
||||
\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\0\
|
||||
\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\0\x2e\x72\x65\x6c\x69\x74\x65\
|
||||
\x72\x2f\x62\x70\x66\x5f\x6d\x61\x70\0\x64\x75\x6d\x70\x5f\x62\x70\x66\x5f\x70\
|
||||
\x72\x6f\x67\0\x2e\x72\x65\x6c\x69\x74\x65\x72\x2f\x62\x70\x66\x5f\x70\x72\x6f\
|
||||
\x67\0\x2e\x6c\x6c\x76\x6d\x5f\x61\x64\x64\x72\x73\x69\x67\0\x6c\x69\x63\x65\
|
||||
\x6e\x73\x65\0\x69\x74\x65\x72\x61\x74\x6f\x72\x73\x2e\x62\x70\x66\x2e\x63\0\
|
||||
\x2e\x73\x74\x72\x74\x61\x62\0\x2e\x73\x79\x6d\x74\x61\x62\0\x2e\x72\x6f\x64\
|
||||
\x61\x74\x61\0\x2e\x72\x65\x6c\x2e\x42\x54\x46\0\x4c\x49\x43\x45\x4e\x53\x45\0\
|
||||
\x4c\x42\x42\x31\x5f\x37\0\x4c\x42\x42\x31\x5f\x36\0\x4c\x42\x42\x30\x5f\x34\0\
|
||||
\x4c\x42\x42\x31\x5f\x33\0\x4c\x42\x42\x30\x5f\x33\0\x64\x75\x6d\x70\x5f\x62\
|
||||
\x70\x66\x5f\x70\x72\x6f\x67\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x32\0\x64\x75\x6d\
|
||||
\x70\x5f\x62\x70\x66\x5f\x6d\x61\x70\x2e\x5f\x5f\x5f\x66\x6d\x74\x2e\x31\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\x01\0\0\
|
||||
\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\0\0\0\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x4e\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\0\x40\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\x6d\0\0\0\x01\0\0\0\x06\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\x40\x01\0\0\0\0\0\0\x08\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x08\0\0\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\xb1\0\0\0\x01\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x48\x03\0\
|
||||
\0\0\0\0\0\x62\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\x89\0\0\0\x01\0\0\0\x03\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xaa\x03\0\0\0\0\0\0\x04\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xbd\0\0\0\x01\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xae\x03\0\0\0\0\0\0\x3d\x09\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x0b\0\0\0\x01\0\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\xeb\x0c\0\0\0\0\0\0\x2c\x04\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xa9\0\0\0\x02\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\x18\x11\0\0\0\0\0\0\x98\x01\0\0\0\0\0\0\x0e\0\0\0\x0e\0\0\0\x08\0\0\
|
||||
\0\0\0\0\0\x18\0\0\0\0\0\0\0\x4a\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\
|
||||
\0\xb0\x12\0\0\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x02\0\0\0\x08\0\0\0\0\0\0\0\
|
||||
\x10\0\0\0\0\0\0\0\x69\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xd0\x12\
|
||||
\0\0\0\0\0\0\x20\0\0\0\0\0\0\0\x08\0\0\0\x03\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\
|
||||
\0\0\0\0\xb9\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xf0\x12\0\0\0\0\0\
|
||||
\0\x50\0\0\0\0\0\0\0\x08\0\0\0\x06\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\
|
||||
\x07\0\0\0\x09\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x40\x13\0\0\0\0\0\0\xe0\
|
||||
\x03\0\0\0\0\0\0\x08\0\0\0\x07\0\0\0\x08\0\0\0\0\0\0\0\x10\0\0\0\0\0\0\0\x7b\0\
|
||||
\0\0\x03\x4c\xff\x6f\0\0\0\x80\0\0\0\0\0\0\0\0\0\0\0\0\x20\x17\0\0\0\0\0\0\x07\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\xa1\0\0\0\x03\
|
||||
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\x27\x17\0\0\0\0\0\0\x1a\x01\0\0\0\0\0\0\
|
||||
\0\0\0\0\0\0\0\0\x01\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0";
	return 0;
err:
	bpf_object__destroy_skeleton(s);
	return -1;
}

#endif /* __ITERATORS_BPF_SKEL_H__ */
@@ -257,6 +257,7 @@ static int queue_stack_map_get_next_key(struct bpf_map *map, void *key,
|
||||
|
||||
static int queue_map_btf_id;
|
||||
const struct bpf_map_ops queue_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = queue_stack_map_alloc_check,
|
||||
.map_alloc = queue_stack_map_alloc,
|
||||
.map_free = queue_stack_map_free,
|
||||
@@ -273,6 +274,7 @@ const struct bpf_map_ops queue_map_ops = {
|
||||
|
||||
static int stack_map_btf_id;
|
||||
const struct bpf_map_ops stack_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = queue_stack_map_alloc_check,
|
||||
.map_alloc = queue_stack_map_alloc,
|
||||
.map_free = queue_stack_map_free,
@@ -191,7 +191,7 @@ int bpf_fd_reuseport_array_lookup_elem(struct bpf_map *map, void *key,
|
||||
rcu_read_lock();
|
||||
sk = reuseport_array_lookup_elem(map, key);
|
||||
if (sk) {
|
||||
*(u64 *)value = sock_gen_cookie(sk);
|
||||
*(u64 *)value = __sock_gen_cookie(sk);
|
||||
err = 0;
|
||||
} else {
|
||||
err = -ENOENT;
|
||||
@@ -351,6 +351,7 @@ static int reuseport_array_get_next_key(struct bpf_map *map, void *key,
|
||||
|
||||
static int reuseport_array_map_btf_id;
|
||||
const struct bpf_map_ops reuseport_array_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc_check = reuseport_array_alloc_check,
|
||||
.map_alloc = reuseport_array_alloc,
|
||||
.map_free = reuseport_array_free,
@@ -287,6 +287,7 @@ static __poll_t ringbuf_map_poll(struct bpf_map *map, struct file *filp,
|
||||
|
||||
static int ringbuf_map_btf_id;
|
||||
const struct bpf_map_ops ringbuf_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc = ringbuf_map_alloc,
|
||||
.map_free = ringbuf_map_free,
|
||||
.map_mmap = ringbuf_map_mmap,
@@ -665,18 +665,17 @@ BPF_CALL_4(bpf_get_task_stack, struct task_struct *, task, void *, buf,
|
||||
return __bpf_get_stack(regs, task, NULL, buf, size, flags);
|
||||
}
|
||||
|
||||
BTF_ID_LIST(bpf_get_task_stack_btf_ids)
|
||||
BTF_ID(struct, task_struct)
|
||||
BTF_ID_LIST_SINGLE(bpf_get_task_stack_btf_ids, struct, task_struct)
|
||||
|
||||
const struct bpf_func_proto bpf_get_task_stack_proto = {
|
||||
.func = bpf_get_task_stack,
|
||||
.gpl_only = false,
|
||||
.ret_type = RET_INTEGER,
|
||||
.arg1_type = ARG_PTR_TO_BTF_ID,
|
||||
.arg1_btf_id = &bpf_get_task_stack_btf_ids[0],
|
||||
.arg2_type = ARG_PTR_TO_UNINIT_MEM,
|
||||
.arg3_type = ARG_CONST_SIZE_OR_ZERO,
|
||||
.arg4_type = ARG_ANYTHING,
|
||||
.btf_id = bpf_get_task_stack_btf_ids,
|
||||
};
|
||||
|
||||
BPF_CALL_4(bpf_get_stack_pe, struct bpf_perf_event_data_kern *, ctx,
|
||||
@@ -839,6 +838,7 @@ static void stack_map_free(struct bpf_map *map)
|
||||
|
||||
static int stack_trace_map_btf_id;
|
||||
const struct bpf_map_ops stack_trace_map_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc = stack_map_alloc,
|
||||
.map_free = stack_map_free,
|
||||
.map_get_next_key = stack_map_get_next_key,
@@ -4,6 +4,7 @@
|
||||
#include <linux/bpf.h>
|
||||
#include <linux/bpf_trace.h>
|
||||
#include <linux/bpf_lirc.h>
|
||||
#include <linux/bpf_verifier.h>
|
||||
#include <linux/btf.h>
|
||||
#include <linux/syscalls.h>
|
||||
#include <linux/slab.h>
|
||||
@@ -29,6 +30,7 @@
|
||||
#include <linux/bpf_lsm.h>
|
||||
#include <linux/poll.h>
|
||||
#include <linux/bpf-netns.h>
|
||||
#include <linux/rcupdate_trace.h>
|
||||
|
||||
#define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \
|
||||
(map)->map_type == BPF_MAP_TYPE_CGROUP_ARRAY || \
|
||||
@@ -90,6 +92,7 @@ int bpf_check_uarg_tail_zero(void __user *uaddr,
|
||||
}
|
||||
|
||||
const struct bpf_map_ops bpf_map_offload_ops = {
|
||||
.map_meta_equal = bpf_map_meta_equal,
|
||||
.map_alloc = bpf_map_offload_map_alloc,
|
||||
.map_free = bpf_map_offload_map_free,
|
||||
.map_check_btf = map_check_no_btf,
|
||||
@@ -157,10 +160,11 @@ static int bpf_map_update_value(struct bpf_map *map, struct fd f, void *key,
|
||||
if (bpf_map_is_dev_bound(map)) {
|
||||
return bpf_map_offload_update_elem(map, key, value, flags);
|
||||
} else if (map->map_type == BPF_MAP_TYPE_CPUMAP ||
|
||||
map->map_type == BPF_MAP_TYPE_SOCKHASH ||
|
||||
map->map_type == BPF_MAP_TYPE_SOCKMAP ||
|
||||
map->map_type == BPF_MAP_TYPE_STRUCT_OPS) {
|
||||
return map->ops->map_update_elem(map, key, value, flags);
|
||||
} else if (map->map_type == BPF_MAP_TYPE_SOCKHASH ||
|
||||
map->map_type == BPF_MAP_TYPE_SOCKMAP) {
|
||||
return sock_map_update_elem_sys(map, key, value, flags);
|
||||
} else if (IS_FD_PROG_ARRAY(map)) {
|
||||
return bpf_fd_array_map_update_elem(map, f.file, key, value,
|
||||
flags);
|
||||
@@ -768,7 +772,8 @@ static int map_check_btf(struct bpf_map *map, const struct btf *btf,
|
||||
if (map->map_type != BPF_MAP_TYPE_HASH &&
|
||||
map->map_type != BPF_MAP_TYPE_ARRAY &&
|
||||
map->map_type != BPF_MAP_TYPE_CGROUP_STORAGE &&
|
||||
map->map_type != BPF_MAP_TYPE_SK_STORAGE)
|
||||
map->map_type != BPF_MAP_TYPE_SK_STORAGE &&
|
||||
map->map_type != BPF_MAP_TYPE_INODE_STORAGE)
|
||||
return -ENOTSUPP;
|
||||
if (map->spin_lock_off + sizeof(struct bpf_spin_lock) >
|
||||
map->value_size) {
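Illustration (not part of the patch): the hunk above adds BPF_MAP_TYPE_INODE_STORAGE to the map types whose values may embed a struct bpf_spin_lock. A minimal BPF-side sketch of such a map declaration, with invented struct and map names, might look like this:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct inode_val {
	struct bpf_spin_lock lock;	/* taken with bpf_spin_lock()/bpf_spin_unlock() */
	__u64 open_cnt;
};

struct {
	__uint(type, BPF_MAP_TYPE_INODE_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);	/* assumed: local storage maps reject preallocation */
	__type(key, int);
	__type(value, struct inode_val);
} inode_storage SEC(".maps");

char _license[] SEC("license") = "GPL";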
@@ -1728,10 +1733,14 @@ static void __bpf_prog_put_noref(struct bpf_prog *prog, bool deferred)
|
||||
btf_put(prog->aux->btf);
|
||||
bpf_prog_free_linfo(prog);
|
||||
|
||||
if (deferred)
|
||||
call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
|
||||
else
|
||||
if (deferred) {
|
||||
if (prog->aux->sleepable)
|
||||
call_rcu_tasks_trace(&prog->aux->rcu, __bpf_prog_put_rcu);
|
||||
else
|
||||
call_rcu(&prog->aux->rcu, __bpf_prog_put_rcu);
|
||||
} else {
|
||||
__bpf_prog_put_rcu(&prog->aux->rcu);
|
||||
}
|
||||
}
|
||||
|
||||
static void __bpf_prog_put(struct bpf_prog *prog, bool do_idr_lock)
|
||||
@@ -2101,6 +2110,7 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
|
||||
if (attr->prog_flags & ~(BPF_F_STRICT_ALIGNMENT |
|
||||
BPF_F_ANY_ALIGNMENT |
|
||||
BPF_F_TEST_STATE_FREQ |
|
||||
BPF_F_SLEEPABLE |
|
||||
BPF_F_TEST_RND_HI32))
|
||||
return -EINVAL;
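Illustration (not part of the patch): BPF_F_SLEEPABLE above is the user-visible flag that requests a sleepable program at load time. A rough user-space sketch follows; the helper name is invented, and the instructions plus the hook's BTF id are assumed to be prepared by the caller (real loaders would go through libbpf):

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int load_sleepable_lsm(const struct bpf_insn *insns, __u32 insn_cnt,
			      __u32 attach_btf_id)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_type = BPF_PROG_TYPE_LSM;
	attr.expected_attach_type = BPF_LSM_MAC;
	attr.attach_btf_id = attach_btf_id;	/* BTF id of the LSM hook */
	attr.insns = (__u64)(unsigned long)insns;
	attr.insn_cnt = insn_cnt;
	attr.license = (__u64)(unsigned long)"GPL";
	attr.prog_flags = BPF_F_SLEEPABLE;	/* request the sleepable variant */

	return syscall(__NR_bpf, BPF_PROG_LOAD, &attr, sizeof(attr));
}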
@@ -2145,17 +2155,18 @@ static int bpf_prog_load(union bpf_attr *attr, union bpf_attr __user *uattr)
|
||||
prog->expected_attach_type = attr->expected_attach_type;
|
||||
prog->aux->attach_btf_id = attr->attach_btf_id;
|
||||
if (attr->attach_prog_fd) {
|
||||
struct bpf_prog *tgt_prog;
|
||||
struct bpf_prog *dst_prog;
|
||||
|
||||
tgt_prog = bpf_prog_get(attr->attach_prog_fd);
|
||||
if (IS_ERR(tgt_prog)) {
|
||||
err = PTR_ERR(tgt_prog);
|
||||
dst_prog = bpf_prog_get(attr->attach_prog_fd);
|
||||
if (IS_ERR(dst_prog)) {
|
||||
err = PTR_ERR(dst_prog);
|
||||
goto free_prog_nouncharge;
|
||||
}
|
||||
prog->aux->linked_prog = tgt_prog;
|
||||
prog->aux->dst_prog = dst_prog;
|
||||
}
|
||||
|
||||
prog->aux->offload_requested = !!attr->prog_ifindex;
|
||||
prog->aux->sleepable = attr->prog_flags & BPF_F_SLEEPABLE;
|
||||
|
||||
err = security_bpf_prog_alloc(prog->aux);
|
||||
if (err)
|
||||
@@ -2488,11 +2499,23 @@ struct bpf_link *bpf_link_get_from_fd(u32 ufd)
|
||||
struct bpf_tracing_link {
|
||||
struct bpf_link link;
|
||||
enum bpf_attach_type attach_type;
|
||||
struct bpf_trampoline *trampoline;
|
||||
struct bpf_prog *tgt_prog;
|
||||
};
|
||||
|
||||
static void bpf_tracing_link_release(struct bpf_link *link)
|
||||
{
|
||||
WARN_ON_ONCE(bpf_trampoline_unlink_prog(link->prog));
|
||||
struct bpf_tracing_link *tr_link =
|
||||
container_of(link, struct bpf_tracing_link, link);
|
||||
|
||||
WARN_ON_ONCE(bpf_trampoline_unlink_prog(link->prog,
|
||||
tr_link->trampoline));
|
||||
|
||||
bpf_trampoline_put(tr_link->trampoline);
|
||||
|
||||
/* tgt_prog is NULL if target is a kernel function */
|
||||
if (tr_link->tgt_prog)
|
||||
bpf_prog_put(tr_link->tgt_prog);
|
||||
}
|
||||
|
||||
static void bpf_tracing_link_dealloc(struct bpf_link *link)
|
||||
@@ -2532,10 +2555,15 @@ static const struct bpf_link_ops bpf_tracing_link_lops = {
|
||||
.fill_link_info = bpf_tracing_link_fill_link_info,
|
||||
};
|
||||
|
||||
static int bpf_tracing_prog_attach(struct bpf_prog *prog)
|
||||
static int bpf_tracing_prog_attach(struct bpf_prog *prog,
|
||||
int tgt_prog_fd,
|
||||
u32 btf_id)
|
||||
{
|
||||
struct bpf_link_primer link_primer;
|
||||
struct bpf_prog *tgt_prog = NULL;
|
||||
struct bpf_trampoline *tr = NULL;
|
||||
struct bpf_tracing_link *link;
|
||||
u64 key = 0;
|
||||
int err;
|
||||
|
||||
switch (prog->type) {
|
||||
@@ -2564,6 +2592,28 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog)
|
||||
goto out_put_prog;
|
||||
}
|
||||
|
||||
if (!!tgt_prog_fd != !!btf_id) {
|
||||
err = -EINVAL;
|
||||
goto out_put_prog;
|
||||
}
|
||||
|
||||
if (tgt_prog_fd) {
|
||||
/* For now we only allow new targets for BPF_PROG_TYPE_EXT */
|
||||
if (prog->type != BPF_PROG_TYPE_EXT) {
|
||||
err = -EINVAL;
|
||||
goto out_put_prog;
|
||||
}
|
||||
|
||||
tgt_prog = bpf_prog_get(tgt_prog_fd);
|
||||
if (IS_ERR(tgt_prog)) {
|
||||
err = PTR_ERR(tgt_prog);
|
||||
tgt_prog = NULL;
|
||||
goto out_put_prog;
|
||||
}
|
||||
|
||||
key = bpf_trampoline_compute_key(tgt_prog, btf_id);
|
||||
}
|
||||
|
||||
link = kzalloc(sizeof(*link), GFP_USER);
|
||||
if (!link) {
|
||||
err = -ENOMEM;
|
||||
@@ -2573,20 +2623,100 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog)
|
||||
&bpf_tracing_link_lops, prog);
|
||||
link->attach_type = prog->expected_attach_type;
|
||||
|
||||
err = bpf_link_prime(&link->link, &link_primer);
|
||||
if (err) {
|
||||
kfree(link);
|
||||
goto out_put_prog;
|
||||
mutex_lock(&prog->aux->dst_mutex);
|
||||
|
||||
/* There are a few possible cases here:
|
||||
*
|
||||
* - if prog->aux->dst_trampoline is set, the program was just loaded
|
||||
* and not yet attached to anything, so we can use the values stored
|
||||
* in prog->aux
|
||||
*
|
||||
* - if prog->aux->dst_trampoline is NULL, the program has already been
|
||||
* attached to a target and its initial target was cleared (below)
|
||||
*
|
||||
* - if tgt_prog != NULL, the caller specified tgt_prog_fd +
|
||||
* target_btf_id using the link_create API.
|
||||
*
|
||||
* - if tgt_prog == NULL when this function was called using the old
|
||||
* raw_tracepoint_open API, and we need a target from prog->aux
|
||||
*
|
||||
* The combination of no saved target in prog->aux, and no target
|
||||
* specified on load is illegal, and we reject that here.
|
||||
*/
|
||||
if (!prog->aux->dst_trampoline && !tgt_prog) {
|
||||
err = -ENOENT;
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
err = bpf_trampoline_link_prog(prog);
|
||||
if (!prog->aux->dst_trampoline ||
|
||||
(key && key != prog->aux->dst_trampoline->key)) {
|
||||
/* If there is no saved target, or the specified target is
|
||||
* different from the destination specified at load time, we
|
||||
* need a new trampoline and a check for compatibility
|
||||
*/
|
||||
struct bpf_attach_target_info tgt_info = {};
|
||||
|
||||
err = bpf_check_attach_target(NULL, prog, tgt_prog, btf_id,
|
||||
&tgt_info);
|
||||
if (err)
|
||||
goto out_unlock;
|
||||
|
||||
tr = bpf_trampoline_get(key, &tgt_info);
|
||||
if (!tr) {
|
||||
err = -ENOMEM;
|
||||
goto out_unlock;
|
||||
}
|
||||
} else {
|
||||
/* The caller didn't specify a target, or the target was the
|
||||
* same as the destination supplied during program load. This
|
||||
* means we can reuse the trampoline and reference from program
|
||||
* load time, and there is no need to allocate a new one. This
|
||||
* can only happen once for any program, as the saved values in
|
||||
* prog->aux are cleared below.
|
||||
*/
|
||||
tr = prog->aux->dst_trampoline;
|
||||
tgt_prog = prog->aux->dst_prog;
|
||||
}
|
||||
|
||||
err = bpf_link_prime(&link->link, &link_primer);
|
||||
if (err)
|
||||
goto out_unlock;
|
||||
|
||||
err = bpf_trampoline_link_prog(prog, tr);
|
||||
if (err) {
|
||||
bpf_link_cleanup(&link_primer);
|
||||
goto out_put_prog;
|
||||
link = NULL;
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
link->tgt_prog = tgt_prog;
|
||||
link->trampoline = tr;
|
||||
|
||||
/* Always clear the trampoline and target prog from prog->aux to make
|
||||
* sure the original attach destination is not kept alive after a
|
||||
* program is (re-)attached to another target.
|
||||
*/
|
||||
if (prog->aux->dst_prog &&
|
||||
(tgt_prog_fd || tr != prog->aux->dst_trampoline))
|
||||
/* got extra prog ref from syscall, or attaching to different prog */
|
||||
bpf_prog_put(prog->aux->dst_prog);
|
||||
if (prog->aux->dst_trampoline && tr != prog->aux->dst_trampoline)
|
||||
/* we allocated a new trampoline, so free the old one */
|
||||
bpf_trampoline_put(prog->aux->dst_trampoline);
|
||||
|
||||
prog->aux->dst_prog = NULL;
|
||||
prog->aux->dst_trampoline = NULL;
|
||||
mutex_unlock(&prog->aux->dst_mutex);
|
||||
|
||||
return bpf_link_settle(&link_primer);
|
||||
out_unlock:
|
||||
if (tr && tr != prog->aux->dst_trampoline)
|
||||
bpf_trampoline_put(tr);
|
||||
mutex_unlock(&prog->aux->dst_mutex);
|
||||
kfree(link);
|
||||
out_put_prog:
|
||||
if (tgt_prog_fd && tgt_prog)
|
||||
bpf_prog_put(tgt_prog);
|
||||
bpf_prog_put(prog);
|
||||
return err;
|
||||
}
|
||||
@@ -2700,7 +2830,7 @@ static int bpf_raw_tracepoint_open(const union bpf_attr *attr)
|
||||
tp_name = prog->aux->attach_func_name;
|
||||
break;
|
||||
}
|
||||
return bpf_tracing_prog_attach(prog);
|
||||
return bpf_tracing_prog_attach(prog, 0, 0);
|
||||
case BPF_PROG_TYPE_RAW_TRACEPOINT:
|
||||
case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
|
||||
if (strncpy_from_user(buf,
|
||||
@@ -2969,7 +3099,7 @@ static int bpf_prog_query(const union bpf_attr *attr,
|
||||
}
|
||||
}
|
||||
|
||||
#define BPF_PROG_TEST_RUN_LAST_FIELD test.ctx_out
|
||||
#define BPF_PROG_TEST_RUN_LAST_FIELD test.cpu
|
||||
|
||||
static int bpf_prog_test_run(const union bpf_attr *attr,
|
||||
union bpf_attr __user *uattr)
|
||||
@@ -3152,21 +3282,25 @@ static const struct bpf_map *bpf_map_from_imm(const struct bpf_prog *prog,
|
||||
const struct bpf_map *map;
|
||||
int i;
|
||||
|
||||
mutex_lock(&prog->aux->used_maps_mutex);
|
||||
for (i = 0, *off = 0; i < prog->aux->used_map_cnt; i++) {
|
||||
map = prog->aux->used_maps[i];
|
||||
if (map == (void *)addr) {
|
||||
*type = BPF_PSEUDO_MAP_FD;
|
||||
return map;
|
||||
goto out;
|
||||
}
|
||||
if (!map->ops->map_direct_value_meta)
|
||||
continue;
|
||||
if (!map->ops->map_direct_value_meta(map, addr, off)) {
|
||||
*type = BPF_PSEUDO_MAP_VALUE;
|
||||
return map;
|
||||
goto out;
|
||||
}
|
||||
}
|
||||
map = NULL;
|
||||
|
||||
return NULL;
|
||||
out:
|
||||
mutex_unlock(&prog->aux->used_maps_mutex);
|
||||
return map;
|
||||
}
|
||||
|
||||
static struct bpf_insn *bpf_insn_prepare_dump(const struct bpf_prog *prog,
|
||||
@@ -3284,6 +3418,7 @@ static int bpf_prog_get_info_by_fd(struct file *file,
|
||||
memcpy(info.tag, prog->tag, sizeof(prog->tag));
|
||||
memcpy(info.name, prog->aux->name, sizeof(prog->aux->name));
|
||||
|
||||
mutex_lock(&prog->aux->used_maps_mutex);
|
||||
ulen = info.nr_map_ids;
|
||||
info.nr_map_ids = prog->aux->used_map_cnt;
|
||||
ulen = min_t(u32, info.nr_map_ids, ulen);
|
||||
@@ -3293,9 +3428,12 @@ static int bpf_prog_get_info_by_fd(struct file *file,
|
||||
|
||||
for (i = 0; i < ulen; i++)
|
||||
if (put_user(prog->aux->used_maps[i]->id,
|
||||
&user_map_ids[i]))
|
||||
&user_map_ids[i])) {
|
||||
mutex_unlock(&prog->aux->used_maps_mutex);
|
||||
return -EFAULT;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&prog->aux->used_maps_mutex);
|
||||
|
||||
err = set_info_rec_size(&info);
|
||||
if (err)
|
||||
@@ -3876,10 +4014,15 @@ err_put:
|
||||
|
||||
static int tracing_bpf_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
|
||||
{
|
||||
if (attr->link_create.attach_type == BPF_TRACE_ITER &&
|
||||
prog->expected_attach_type == BPF_TRACE_ITER)
|
||||
return bpf_iter_link_attach(attr, prog);
|
||||
if (attr->link_create.attach_type != prog->expected_attach_type)
|
||||
return -EINVAL;
|
||||
|
||||
if (prog->expected_attach_type == BPF_TRACE_ITER)
|
||||
return bpf_iter_link_attach(attr, prog);
|
||||
else if (prog->type == BPF_PROG_TYPE_EXT)
|
||||
return bpf_tracing_prog_attach(prog,
|
||||
attr->link_create.target_fd,
|
||||
attr->link_create.target_btf_id);
|
||||
return -EINVAL;
|
||||
}
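Illustration (not part of the patch): with the change above, BPF_LINK_CREATE can attach a loaded BPF_PROG_TYPE_EXT program to an explicit target via target_fd plus target_btf_id, which is forwarded to bpf_tracing_prog_attach(). A hedged user-space sketch; the helper name is invented and attach_type is assumed to be 0, matching the expected_attach_type an EXT program is loaded with:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int link_create_ext(int ext_prog_fd, int target_prog_fd,
			   __u32 target_btf_id)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.link_create.prog_fd = ext_prog_fd;			/* the freplace program */
	attr.link_create.target_fd = target_prog_fd;		/* program being extended */
	attr.link_create.target_btf_id = target_btf_id;		/* BTF id of the target function */
	attr.link_create.attach_type = 0;			/* must equal the prog's expected_attach_type */

	return syscall(__NR_bpf, BPF_LINK_CREATE, &attr, sizeof(attr));
}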
@@ -3893,18 +4036,25 @@ static int link_create(union bpf_attr *attr)
|
||||
if (CHECK_ATTR(BPF_LINK_CREATE))
|
||||
return -EINVAL;
|
||||
|
||||
ptype = attach_type_to_prog_type(attr->link_create.attach_type);
|
||||
if (ptype == BPF_PROG_TYPE_UNSPEC)
|
||||
return -EINVAL;
|
||||
|
||||
prog = bpf_prog_get_type(attr->link_create.prog_fd, ptype);
|
||||
prog = bpf_prog_get(attr->link_create.prog_fd);
|
||||
if (IS_ERR(prog))
|
||||
return PTR_ERR(prog);
|
||||
|
||||
ret = bpf_prog_attach_check_attach_type(prog,
|
||||
attr->link_create.attach_type);
|
||||
if (ret)
|
||||
goto err_out;
|
||||
goto out;
|
||||
|
||||
if (prog->type == BPF_PROG_TYPE_EXT) {
|
||||
ret = tracing_bpf_link_attach(attr, prog);
|
||||
goto out;
|
||||
}
|
||||
|
||||
ptype = attach_type_to_prog_type(attr->link_create.attach_type);
|
||||
if (ptype == BPF_PROG_TYPE_UNSPEC || ptype != prog->type) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
|
||||
switch (ptype) {
|
||||
case BPF_PROG_TYPE_CGROUP_SKB:
|
||||
@@ -3932,7 +4082,7 @@ static int link_create(union bpf_attr *attr)
|
||||
ret = -EINVAL;
|
||||
}
|
||||
|
||||
err_out:
|
||||
out:
|
||||
if (ret < 0)
|
||||
bpf_prog_put(prog);
|
||||
return ret;
|
||||
@@ -4014,9 +4164,31 @@ static int link_detach(union bpf_attr *attr)
|
||||
return ret;
|
||||
}
|
||||
|
||||
static int bpf_link_inc_not_zero(struct bpf_link *link)
|
||||
static struct bpf_link *bpf_link_inc_not_zero(struct bpf_link *link)
|
||||
{
|
||||
return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? 0 : -ENOENT;
|
||||
return atomic64_fetch_add_unless(&link->refcnt, 1, 0) ? link : ERR_PTR(-ENOENT);
|
||||
}
|
||||
|
||||
struct bpf_link *bpf_link_by_id(u32 id)
|
||||
{
|
||||
struct bpf_link *link;
|
||||
|
||||
if (!id)
|
||||
return ERR_PTR(-ENOENT);
|
||||
|
||||
spin_lock_bh(&link_idr_lock);
|
||||
/* before link is "settled", ID is 0, pretend it doesn't exist yet */
|
||||
link = idr_find(&link_idr, id);
|
||||
if (link) {
|
||||
if (link->id)
|
||||
link = bpf_link_inc_not_zero(link);
|
||||
else
|
||||
link = ERR_PTR(-EAGAIN);
|
||||
} else {
|
||||
link = ERR_PTR(-ENOENT);
|
||||
}
|
||||
spin_unlock_bh(&link_idr_lock);
|
||||
return link;
|
||||
}
|
||||
|
||||
#define BPF_LINK_GET_FD_BY_ID_LAST_FIELD link_id
|
||||
@@ -4025,7 +4197,7 @@ static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
|
||||
{
|
||||
struct bpf_link *link;
|
||||
u32 id = attr->link_id;
|
||||
int fd, err;
|
||||
int fd;
|
||||
|
||||
if (CHECK_ATTR(BPF_LINK_GET_FD_BY_ID))
|
||||
return -EINVAL;
|
||||
@@ -4033,21 +4205,9 @@ static int bpf_link_get_fd_by_id(const union bpf_attr *attr)
|
||||
if (!capable(CAP_SYS_ADMIN))
|
||||
return -EPERM;
|
||||
|
||||
spin_lock_bh(&link_idr_lock);
|
||||
link = idr_find(&link_idr, id);
|
||||
/* before link is "settled", ID is 0, pretend it doesn't exist yet */
|
||||
if (link) {
|
||||
if (link->id)
|
||||
err = bpf_link_inc_not_zero(link);
|
||||
else
|
||||
err = -EAGAIN;
|
||||
} else {
|
||||
err = -ENOENT;
|
||||
}
|
||||
spin_unlock_bh(&link_idr_lock);
|
||||
|
||||
if (err)
|
||||
return err;
|
||||
link = bpf_link_by_id(id);
|
||||
if (IS_ERR(link))
|
||||
return PTR_ERR(link);
|
||||
|
||||
fd = bpf_link_new_fd(link);
|
||||
if (fd < 0)
|
||||
@@ -4133,6 +4293,68 @@ static int bpf_iter_create(union bpf_attr *attr)
|
||||
return err;
|
||||
}
|
||||
|
||||
#define BPF_PROG_BIND_MAP_LAST_FIELD prog_bind_map.flags
|
||||
|
||||
static int bpf_prog_bind_map(union bpf_attr *attr)
|
||||
{
|
||||
struct bpf_prog *prog;
|
||||
struct bpf_map *map;
|
||||
struct bpf_map **used_maps_old, **used_maps_new;
|
||||
int i, ret = 0;
|
||||
|
||||
if (CHECK_ATTR(BPF_PROG_BIND_MAP))
|
||||
return -EINVAL;
|
||||
|
||||
if (attr->prog_bind_map.flags)
|
||||
return -EINVAL;
|
||||
|
||||
prog = bpf_prog_get(attr->prog_bind_map.prog_fd);
|
||||
if (IS_ERR(prog))
|
||||
return PTR_ERR(prog);
|
||||
|
||||
map = bpf_map_get(attr->prog_bind_map.map_fd);
|
||||
if (IS_ERR(map)) {
|
||||
ret = PTR_ERR(map);
|
||||
goto out_prog_put;
|
||||
}
|
||||
|
||||
mutex_lock(&prog->aux->used_maps_mutex);
|
||||
|
||||
used_maps_old = prog->aux->used_maps;
|
||||
|
||||
for (i = 0; i < prog->aux->used_map_cnt; i++)
|
||||
if (used_maps_old[i] == map) {
|
||||
bpf_map_put(map);
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
used_maps_new = kmalloc_array(prog->aux->used_map_cnt + 1,
|
||||
sizeof(used_maps_new[0]),
|
||||
GFP_KERNEL);
|
||||
if (!used_maps_new) {
|
||||
ret = -ENOMEM;
|
||||
goto out_unlock;
|
||||
}
|
||||
|
||||
memcpy(used_maps_new, used_maps_old,
|
||||
sizeof(used_maps_old[0]) * prog->aux->used_map_cnt);
|
||||
used_maps_new[prog->aux->used_map_cnt] = map;
|
||||
|
||||
prog->aux->used_map_cnt++;
|
||||
prog->aux->used_maps = used_maps_new;
|
||||
|
||||
kfree(used_maps_old);
|
||||
|
||||
out_unlock:
|
||||
mutex_unlock(&prog->aux->used_maps_mutex);
|
||||
|
||||
if (ret)
|
||||
bpf_map_put(map);
|
||||
out_prog_put:
|
||||
bpf_prog_put(prog);
|
||||
return ret;
|
||||
}
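Illustration (not part of the patch): the new BPF_PROG_BIND_MAP command above lets user space tie an extra map's lifetime to a loaded program even when no instruction in the program references that map. A minimal wrapper sketch; the helper name is invented, the attr fields are those used in the kernel code above:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

static int prog_bind_map(int prog_fd, int map_fd)
{
	union bpf_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.prog_bind_map.prog_fd = prog_fd;
	attr.prog_bind_map.map_fd = map_fd;
	attr.prog_bind_map.flags = 0;	/* non-zero flags are rejected above */

	return syscall(__NR_bpf, BPF_PROG_BIND_MAP, &attr, sizeof(attr));
}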
SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, size)
|
||||
{
|
||||
union bpf_attr attr;
|
||||
@@ -4266,6 +4488,9 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
|
||||
case BPF_LINK_DETACH:
|
||||
err = link_detach(&attr);
|
||||
break;
|
||||
case BPF_PROG_BIND_MAP:
|
||||
err = bpf_prog_bind_map(&attr);
|
||||
break;
|
||||
default:
|
||||
err = -EINVAL;
|
||||
break;
@@ -22,7 +22,8 @@ struct bpf_iter_seq_task_info {
|
||||
};
|
||||
|
||||
static struct task_struct *task_seq_get_next(struct pid_namespace *ns,
|
||||
u32 *tid)
|
||||
u32 *tid,
|
||||
bool skip_if_dup_files)
|
||||
{
|
||||
struct task_struct *task = NULL;
|
||||
struct pid *pid;
|
||||
@@ -36,6 +37,12 @@ retry:
|
||||
if (!task) {
|
||||
++*tid;
|
||||
goto retry;
|
||||
} else if (skip_if_dup_files && task->tgid != task->pid &&
|
||||
task->files == task->group_leader->files) {
|
||||
put_task_struct(task);
|
||||
task = NULL;
|
||||
++*tid;
|
||||
goto retry;
|
||||
}
|
||||
}
|
||||
rcu_read_unlock();
|
||||
@@ -48,7 +55,7 @@ static void *task_seq_start(struct seq_file *seq, loff_t *pos)
|
||||
struct bpf_iter_seq_task_info *info = seq->private;
|
||||
struct task_struct *task;
|
||||
|
||||
task = task_seq_get_next(info->common.ns, &info->tid);
|
||||
task = task_seq_get_next(info->common.ns, &info->tid, false);
|
||||
if (!task)
|
||||
return NULL;
|
||||
|
||||
@@ -65,7 +72,7 @@ static void *task_seq_next(struct seq_file *seq, void *v, loff_t *pos)
|
||||
++*pos;
|
||||
++info->tid;
|
||||
put_task_struct((struct task_struct *)v);
|
||||
task = task_seq_get_next(info->common.ns, &info->tid);
|
||||
task = task_seq_get_next(info->common.ns, &info->tid, false);
|
||||
if (!task)
|
||||
return NULL;
|
||||
|
||||
@@ -148,7 +155,7 @@ again:
|
||||
curr_files = *fstruct;
|
||||
curr_fd = info->fd;
|
||||
} else {
|
||||
curr_task = task_seq_get_next(ns, &curr_tid);
|
||||
curr_task = task_seq_get_next(ns, &curr_tid, true);
|
||||
if (!curr_task)
|
||||
return NULL;
@@ -7,6 +7,8 @@
|
||||
#include <linux/rbtree_latch.h>
|
||||
#include <linux/perf_event.h>
|
||||
#include <linux/btf.h>
|
||||
#include <linux/rcupdate_trace.h>
|
||||
#include <linux/rcupdate_wait.h>
|
||||
|
||||
/* dummy _ops. The verifier will operate on target program's ops. */
|
||||
const struct bpf_verifier_ops bpf_extension_verifier_ops = {
|
||||
@@ -63,7 +65,7 @@ static void bpf_trampoline_ksym_add(struct bpf_trampoline *tr)
|
||||
bpf_image_ksym_add(tr->image, ksym);
|
||||
}
|
||||
|
||||
struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
|
||||
static struct bpf_trampoline *bpf_trampoline_lookup(u64 key)
|
||||
{
|
||||
struct bpf_trampoline *tr;
|
||||
struct hlist_head *head;
|
||||
@@ -210,9 +212,12 @@ static int bpf_trampoline_update(struct bpf_trampoline *tr)
|
||||
* updates to trampoline would change the code from underneath the
|
||||
* preempted task. Hence wait for tasks to voluntarily schedule or go
|
||||
* to userspace.
|
||||
* The same trampoline can hold both sleepable and non-sleepable progs.
|
||||
* synchronize_rcu_tasks_trace() is needed to make sure all sleepable
|
||||
* programs finish executing.
|
||||
* Wait for these two grace periods together.
|
||||
*/
|
||||
|
||||
synchronize_rcu_tasks();
|
||||
synchronize_rcu_mult(call_rcu_tasks, call_rcu_tasks_trace);
|
||||
|
||||
err = arch_prepare_bpf_trampoline(new_image, new_image + PAGE_SIZE / 2,
|
||||
&tr->func.model, flags, tprogs,
|
||||
@@ -256,14 +261,12 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
|
||||
}
|
||||
}
|
||||
|
||||
int bpf_trampoline_link_prog(struct bpf_prog *prog)
|
||||
int bpf_trampoline_link_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
|
||||
{
|
||||
enum bpf_tramp_prog_type kind;
|
||||
struct bpf_trampoline *tr;
|
||||
int err = 0;
|
||||
int cnt;
|
||||
|
||||
tr = prog->aux->trampoline;
|
||||
kind = bpf_attach_type_to_tramp(prog);
|
||||
mutex_lock(&tr->mutex);
|
||||
if (tr->extension_prog) {
|
||||
@@ -296,7 +299,7 @@ int bpf_trampoline_link_prog(struct bpf_prog *prog)
|
||||
}
|
||||
hlist_add_head(&prog->aux->tramp_hlist, &tr->progs_hlist[kind]);
|
||||
tr->progs_cnt[kind]++;
|
||||
err = bpf_trampoline_update(prog->aux->trampoline);
|
||||
err = bpf_trampoline_update(tr);
|
||||
if (err) {
|
||||
hlist_del(&prog->aux->tramp_hlist);
|
||||
tr->progs_cnt[kind]--;
|
||||
@@ -307,13 +310,11 @@ out:
|
||||
}
|
||||
|
||||
/* bpf_trampoline_unlink_prog() should never fail. */
|
||||
int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
|
||||
int bpf_trampoline_unlink_prog(struct bpf_prog *prog, struct bpf_trampoline *tr)
|
||||
{
|
||||
enum bpf_tramp_prog_type kind;
|
||||
struct bpf_trampoline *tr;
|
||||
int err;
|
||||
|
||||
tr = prog->aux->trampoline;
|
||||
kind = bpf_attach_type_to_tramp(prog);
|
||||
mutex_lock(&tr->mutex);
|
||||
if (kind == BPF_TRAMP_REPLACE) {
|
||||
@@ -325,12 +326,32 @@ int bpf_trampoline_unlink_prog(struct bpf_prog *prog)
|
||||
}
|
||||
hlist_del(&prog->aux->tramp_hlist);
|
||||
tr->progs_cnt[kind]--;
|
||||
err = bpf_trampoline_update(prog->aux->trampoline);
|
||||
err = bpf_trampoline_update(tr);
|
||||
out:
|
||||
mutex_unlock(&tr->mutex);
|
||||
return err;
|
||||
}
|
||||
|
||||
struct bpf_trampoline *bpf_trampoline_get(u64 key,
|
||||
struct bpf_attach_target_info *tgt_info)
|
||||
{
|
||||
struct bpf_trampoline *tr;
|
||||
|
||||
tr = bpf_trampoline_lookup(key);
|
||||
if (!tr)
|
||||
return NULL;
|
||||
|
||||
mutex_lock(&tr->mutex);
|
||||
if (tr->func.addr)
|
||||
goto out;
|
||||
|
||||
memcpy(&tr->func.model, &tgt_info->fmodel, sizeof(tgt_info->fmodel));
|
||||
tr->func.addr = (void *)tgt_info->tgt_addr;
|
||||
out:
|
||||
mutex_unlock(&tr->mutex);
|
||||
return tr;
|
||||
}
|
||||
|
||||
void bpf_trampoline_put(struct bpf_trampoline *tr)
|
||||
{
|
||||
if (!tr)
|
||||
@@ -344,7 +365,14 @@ void bpf_trampoline_put(struct bpf_trampoline *tr)
|
||||
if (WARN_ON_ONCE(!hlist_empty(&tr->progs_hlist[BPF_TRAMP_FEXIT])))
|
||||
goto out;
|
||||
bpf_image_ksym_del(&tr->ksym);
|
||||
/* wait for tasks to get out of trampoline before freeing it */
|
||||
/* This code will be executed when all bpf progs (both sleepable and
|
||||
* non-sleepable) went through
|
||||
* bpf_prog_put()->call_rcu[_tasks_trace]()->bpf_prog_free_deferred().
|
||||
* Hence no need for another synchronize_rcu_tasks_trace() here,
|
||||
* but synchronize_rcu_tasks() is still needed, since trampoline
|
||||
* may not have had any sleepable programs and we need to wait
|
||||
* for tasks to get out of trampoline code before freeing it.
|
||||
*/
|
||||
synchronize_rcu_tasks();
|
||||
bpf_jit_free_exec(tr->image);
|
||||
hlist_del(&tr->hlist);
|
||||
@@ -394,6 +422,17 @@ void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
|
||||
rcu_read_unlock();
|
||||
}
|
||||
|
||||
void notrace __bpf_prog_enter_sleepable(void)
|
||||
{
|
||||
rcu_read_lock_trace();
|
||||
might_fault();
|
||||
}
|
||||
|
||||
void notrace __bpf_prog_exit_sleepable(void)
|
||||
{
|
||||
rcu_read_unlock_trace();
|
||||
}
|
||||
|
||||
int __weak
|
||||
arch_prepare_bpf_trampoline(void *image, void *image_end,
|
||||
const struct btf_func_model *m, u32 flags,