26604a533208b7551def7e01df965bc4dda05cfa
102 Commits
4ec3c2eea5
Merge 5.10.94 into android12-5.10-lts
Changes in 5.10.94 KVM: VMX: switch blocked_vcpu_on_cpu_lock to raw spinlock HID: uhid: Fix worker destroying device without any protection HID: wacom: Reset expected and received contact counts at the same time HID: wacom: Ignore the confidence flag when a touch is removed HID: wacom: Avoid using stale array indicies to read contact count f2fs: fix to do sanity check in is_alive() nfc: llcp: fix NULL error pointer dereference on sendmsg() after failed bind() mtd: rawnand: gpmi: Add ERR007117 protection for nfc_apply_timings mtd: rawnand: gpmi: Remove explicit default gpmi clock setting for i.MX6 mtd: Fixed breaking list in __mtd_del_partition. mtd: rawnand: davinci: Don't calculate ECC when reading page mtd: rawnand: davinci: Avoid duplicated page read mtd: rawnand: davinci: Rewrite function description x86/gpu: Reserve stolen memory for first integrated Intel GPU tools/nolibc: x86-64: Fix startup code bug tools/nolibc: i386: fix initial stack alignment tools/nolibc: fix incorrect truncation of exit code rtc: cmos: take rtc_lock while reading from CMOS media: v4l2-ioctl.c: readbuffers depends on V4L2_CAP_READWRITE media: flexcop-usb: fix control-message timeouts media: mceusb: fix control-message timeouts media: em28xx: fix control-message timeouts media: cpia2: fix control-message timeouts media: s2255: fix control-message timeouts media: dib0700: fix undefined behavior in tuner shutdown media: redrat3: fix control-message timeouts media: pvrusb2: fix control-message timeouts media: stk1160: fix control-message timeouts media: cec-pin: fix interrupt en/disable handling can: softing_cs: softingcs_probe(): fix memleak on registration failure iio: adc: ti-adc081c: Partial revert of removal of ACPI IDs lkdtm: Fix content of section containing lkdtm_rodata_do_nothing() iommu/io-pgtable-arm-v7s: Add error handle for page table allocation failure gpu: host1x: Add back arm_iommu_detach_device() dma_fence_array: Fix PENDING_ERROR leak in dma_fence_array_signaled() PCI: Add function 1 DMA alias quirk for Marvell 88SE9125 SATA controller mm_zone: add function to check if managed dma zone exists dma/pool: create dma atomic pool only if dma zone has managed pages mm/page_alloc.c: do not warn allocation failure on zone DMA if no managed pages shmem: fix a race between shmem_unused_huge_shrink and shmem_evict_inode drm/ttm: Put BO in its memory manager's lru list Bluetooth: L2CAP: Fix not initializing sk_peer_pid drm/bridge: display-connector: fix an uninitialized pointer in probe() drm: fix null-ptr-deref in drm_dev_init_release() drm/panel: kingdisplay-kd097d04: Delete panel on attach() failure drm/panel: innolux-p079zca: Delete panel on attach() failure drm/rockchip: dsi: Fix unbalanced clock on probe error drm/rockchip: dsi: Hold pm-runtime across bind/unbind drm/rockchip: dsi: Disable PLL clock on bind error drm/rockchip: dsi: Reconfigure hardware on resume() Bluetooth: cmtp: fix possible panic when cmtp_init_sockets() fails clk: bcm-2835: Pick the closest clock rate clk: bcm-2835: Remove rounding up the dividers drm/vc4: hdmi: Set a default HSM rate wcn36xx: ensure pairing of init_scan/finish_scan and start_scan/end_scan wcn36xx: Indicate beacon not connection loss on MISSED_BEACON_IND wcn36xx: Fix DMA channel enable/disable cycle wcn36xx: Release DMA channel descriptor allocations wcn36xx: Put DXE block into reset before freeing memory wcn36xx: populate band before determining rate on RX wcn36xx: fix RX BD rate mapping for 5GHz legacy rates ath11k: Send PPDU_STATS_CFG with proper pdev mask to 
firmware mtd: hyperbus: rpc-if: Check return value of rpcif_sw_init() media: videobuf2: Fix the size printk format media: atomisp: add missing media_device_cleanup() in atomisp_unregister_entities() media: atomisp: fix punit_ddr_dvfs_enable() argument for mrfld_power up case media: atomisp: fix inverted logic in buffers_needed() media: atomisp: do not use err var when checking port validity for ISP2400 media: atomisp: fix inverted error check for ia_css_mipi_is_source_port_valid() media: atomisp: fix ifdefs in sh_css.c media: staging: media: atomisp: pci: Balance braces around conditional statements in file atomisp_cmd.c media: atomisp: add NULL check for asd obtained from atomisp_video_pipe media: atomisp: fix enum formats logic media: atomisp: fix uninitialized bug in gmin_get_pmic_id_and_addr() media: aspeed: fix mode-detect always time out at 2nd run media: em28xx: fix memory leak in em28xx_init_dev media: aspeed: Update signal status immediately to ensure sane hw state arm64: dts: amlogic: meson-g12: Fix GPU operating point table node name arm64: dts: amlogic: Fix SPI NOR flash node name for ODROID N2/N2+ arm64: dts: meson-gxbb-wetek: fix HDMI in early boot arm64: dts: meson-gxbb-wetek: fix missing GPIO binding fs: dlm: use sk->sk_socket instead of con->sock fs: dlm: don't call kernel_getpeername() in error_report() memory: renesas-rpc-if: Return error in case devm_ioremap_resource() fails Bluetooth: stop proccessing malicious adv data ath11k: Fix ETSI regd with weather radar overlap ath11k: clear the keys properly via DISABLE_KEY ath11k: reset RSN/WPA present state for open BSS tee: fix put order in teedev_close_context() fs: dlm: fix build with CONFIG_IPV6 disabled drm/vboxvideo: fix a NULL vs IS_ERR() check arm64: dts: renesas: cat875: Add rx/tx delays media: dmxdev: fix UAF when dvb_register_device() fails crypto: qce - fix uaf on qce_ahash_register_one crypto: qce - fix uaf on qce_skcipher_register_one mtd: hyperbus: rpc-if: fix bug in rpcif_hb_remove ARM: dts: stm32: fix dtbs_check warning on ili9341 dts binding on stm32f429 disco crypto: qat - fix spelling mistake: "messge" -> "message" crypto: qat - remove unnecessary collision prevention step in PFVF crypto: qat - make pfvf send message direction agnostic crypto: qat - fix undetected PFVF timeout in ACK loop ath11k: Use host CE parameters for CE interrupts configuration arm64: dts: ti: k3-j721e: correct cache-sets info tty: serial: atmel: Check return code of dmaengine_submit() tty: serial: atmel: Call dma_async_issue_pending() mfd: atmel-flexcom: Remove #ifdef CONFIG_PM_SLEEP mfd: atmel-flexcom: Use .resume_noirq media: rcar-csi2: Correct the selection of hsfreqrange media: imx-pxp: Initialize the spinlock prior to using it media: si470x-i2c: fix possible memory leak in si470x_i2c_probe() media: mtk-vcodec: call v4l2_m2m_ctx_release first when file is released media: coda: fix CODA960 JPEG encoder buffer overflow media: venus: pm_helpers: Control core power domain manually media: venus: core, venc, vdec: Fix probe dependency error media: venus: core: Fix a potential NULL pointer dereference in an error handling path media: venus: core: Fix a resource leak in the error handling path of 'venus_probe()' thermal/drivers/imx: Implement runtime PM support netfilter: bridge: add support for pppoe filtering arm64: dts: qcom: msm8916: fix MMC controller aliases cgroup: Trace event cgroup id fields should be u64 ACPI: EC: Rework flushing of EC work while suspended to idle thermal/drivers/imx8mm: Enable ADC when enabling monitor 
drm/amdgpu: Fix a NULL pointer dereference in amdgpu_connector_lcd_native_mode() drm/radeon/radeon_kms: Fix a NULL pointer dereference in radeon_driver_open_kms() arm64: dts: ti: k3-j7200: Fix the L2 cache sets arm64: dts: ti: k3-j721e: Fix the L2 cache sets arm64: dts: ti: k3-j7200: Correct the d-cache-sets info tty: serial: uartlite: allow 64 bit address serial: amba-pl011: do not request memory region twice floppy: Fix hang in watchdog when disk is ejected staging: rtl8192e: return error code from rtllib_softmac_init() staging: rtl8192e: rtllib_module: fix error handle case in alloc_rtllib() Bluetooth: btmtksdio: fix resume failure sched/fair: Fix detection of per-CPU kthreads waking a task sched/fair: Fix per-CPU kthread and wakee stacking for asym CPU capacity bpf: Adjust BTF log size limit. bpf: Disallow BPF_LOG_KERNEL log level for bpf(BPF_BTF_LOAD) bpf: Remove config check to enable bpf support for branch records arm64: lib: Annotate {clear, copy}_page() as position-independent arm64: clear_page() shouldn't use DC ZVA when DCZID_EL0.DZP == 1 media: dib8000: Fix a memleak in dib8000_init() media: saa7146: mxb: Fix a NULL pointer dereference in mxb_attach() media: si2157: Fix "warm" tuner state detection wireless: iwlwifi: Fix a double free in iwl_txq_dyn_alloc_dma sched/rt: Try to restart rt period timer when rt runtime exceeded drm/msm/dp: displayPort driver need algorithm rational rcu/exp: Mark current CPU as exp-QS in IPI loop second pass mwifiex: Fix possible ABBA deadlock xfrm: fix a small bug in xfrm_sa_len() x86/uaccess: Move variable into switch case statement selftests: clone3: clone3: add case CLONE3_ARGS_NO_TEST selftests: harness: avoid false negatives if test has no ASSERTs crypto: stm32 - Fix last sparse warning in stm32_cryp_check_ctr_counter crypto: stm32/cryp - fix CTR counter carry crypto: stm32/cryp - fix xts and race condition in crypto_engine requests crypto: stm32/cryp - check early input data crypto: stm32/cryp - fix double pm exit crypto: stm32/cryp - fix lrw chaining mode crypto: stm32/cryp - fix bugs and crash in tests crypto: stm32 - Revert broken pm_runtime_resume_and_get changes ath11k: Fix deleting uninitialized kernel timer during fragment cache flush ARM: dts: gemini: NAS4220-B: fis-index-block with 128 KiB sectors media: dw2102: Fix use after free media: msi001: fix possible null-ptr-deref in msi001_probe() media: coda/imx-vdoa: Handle dma_set_coherent_mask error codes ath11k: Fix a NULL pointer dereference in ath11k_mac_op_hw_scan() arm64: dts: qcom: c630: Fix soundcard setup arm64: dts: qcom: ipq6018: Fix gpio-ranges property drm/msm/dpu: fix safe status debugfs file drm/bridge: ti-sn65dsi86: Set max register for regmap drm/tegra: vic: Fix DMA API misuse media: hantro: Fix probe func error path xfrm: interface with if_id 0 should return error xfrm: state and policy should fail if XFRMA_IF_ID 0 ARM: 9159/1: decompressor: Avoid UNPREDICTABLE NOP encoding usb: ftdi-elan: fix memory leak on device disconnect arm64: dts: marvell: cn9130: add GPIO and SPI aliases arm64: dts: marvell: cn9130: enable CP0 GPIO controllers ARM: dts: armada-38x: Add generic compatible to UART nodes iwlwifi: mvm: fix 32-bit build in FTM iwlwifi: mvm: test roc running status bits before removing the sta mmc: meson-mx-sdhc: add IRQ check mmc: meson-mx-sdio: add IRQ check selinux: fix potential memleak in selinux_add_opt() um: fix ndelay/udelay defines um: virtio_uml: Fix time-travel external time propagation Bluetooth: L2CAP: Fix using wrong mode bpftool: Enable line buffering 
for stdout backlight: qcom-wled: Validate enabled string indices in DT backlight: qcom-wled: Pass number of elements to read to read_u32_array backlight: qcom-wled: Fix off-by-one maximum with default num_strings backlight: qcom-wled: Override default length with qcom,enabled-strings backlight: qcom-wled: Use cpu_to_le16 macro to perform conversion backlight: qcom-wled: Respect enabled-strings in set_brightness software node: fix wrong node passed to find nargs_prop Bluetooth: hci_qca: Stop IBS timer during BT OFF x86/boot/compressed: Move CLANG_FLAGS to beginning of KBUILD_CFLAGS hwmon: (mr75203) fix wrong power-up delay value x86/mce/inject: Avoid out-of-bounds write when setting flags ACPI: scan: Create platform device for BCM4752 and LNV4752 ACPI nodes pcmcia: rsrc_nonstatic: Fix a NULL pointer dereference in __nonstatic_find_io_region() pcmcia: rsrc_nonstatic: Fix a NULL pointer dereference in nonstatic_find_mem_region() power: reset: mt6397: Check for null res pointer netfilter: ipt_CLUSTERIP: fix refcount leak in clusterip_tg_check() bpf: Don't promote bogus looking registers after null check. bpf: Fix SO_RCVBUF/SO_SNDBUF handling in _bpf_setsockopt(). netfilter: nft_set_pipapo: allocate pcpu scratch maps on clone ppp: ensure minimum packet size in ppp_write() rocker: fix a sleeping in atomic bug staging: greybus: audio: Check null pointer fsl/fman: Check for null pointer after calling devm_ioremap Bluetooth: hci_bcm: Check for error irq Bluetooth: hci_qca: Fix NULL vs IS_ERR_OR_NULL check in qca_serdev_probe usb: dwc3: qcom: Fix NULL vs IS_ERR checking in dwc3_qcom_probe HID: hid-uclogic-params: Invalid parameter check in uclogic_params_init HID: hid-uclogic-params: Invalid parameter check in uclogic_params_get_str_desc HID: hid-uclogic-params: Invalid parameter check in uclogic_params_huion_init HID: hid-uclogic-params: Invalid parameter check in uclogic_params_frame_init_v1_buttonpad debugfs: lockdown: Allow reading debugfs files that are not world readable net/mlx5e: Fix page DMA map/unmap attributes net/mlx5e: Don't block routes with nexthop objects in SW Revert "net/mlx5e: Block offload of outer header csum for UDP tunnels" net/mlx5: Set command entry semaphore up once got index free lib/mpi: Add the return value check of kcalloc() Bluetooth: L2CAP: uninitialized variables in l2cap_sock_setsockopt() spi: spi-meson-spifc: Add missing pm_runtime_disable() in meson_spifc_probe ax25: uninitialized variable in ax25_setsockopt() netrom: fix api breakage in nr_setsockopt() regmap: Call regmap_debugfs_exit() prior to _init() can: mcp251xfd: add missing newline to printed strings tpm: add request_locality before write TPM_INT_ENABLE tpm_tis: Fix an error handling path in 'tpm_tis_core_init()' can: softing: softing_startstop(): fix set but not used variable warning can: xilinx_can: xcan_probe(): check for error irq pcmcia: fix setting of kthread task states iwlwifi: mvm: Use div_s64 instead of do_div in iwl_mvm_ftm_rtt_smoothing() net: mcs7830: handle usb read errors properly ext4: avoid trim error on fs with small groups ALSA: jack: Add missing rwsem around snd_ctl_remove() calls ALSA: PCM: Add missing rwsem around snd_ctl_remove() calls ALSA: hda: Add missing rwsem around snd_ctl_remove() calls RDMA/bnxt_re: Scan the whole bitmap when checking if "disabling RCFW with pending cmd-bit" RDMA/hns: Validate the pkey index scsi: pm80xx: Update WARN_ON check in pm8001_mpi_build_cmd() clk: imx8mn: Fix imx8mn_clko1_sels powerpc/prom_init: Fix improper check of prom_getprop() ASoC: uniphier: 
drop selecting non-existing SND_SOC_UNIPHIER_AIO_DMA dt-bindings: thermal: Fix definition of cooling-maps contribution property powerpc/64s: Convert some cpu_setup() and cpu_restore() functions to C powerpc/perf: MMCR0 control for PMU registers under PMCC=00 powerpc/perf: move perf irq/nmi handling details into traps.c powerpc/irq: Add helper to set regs->softe powerpc/perf: Fix PMU callbacks to clear pending PMI before resetting an overflown PMC powerpc/32s: Fix shift-out-of-bounds in KASAN init clocksource: Reduce clocksource-skew threshold clocksource: Avoid accidental unstable marking of clocksources ALSA: oss: fix compile error when OSS_DEBUG is enabled ALSA: usb-audio: Drop superfluous '0' in Presonus Studio 1810c's ID char/mwave: Adjust io port register size binder: fix handling of error during copy openrisc: Add clone3 ABI wrapper iommu/io-pgtable-arm: Fix table descriptor paddr formatting scsi: ufs: Fix race conditions related to driver data RDMA/qedr: Fix reporting max_{send/recv}_wr attrs PCI/MSI: Fix pci_irq_vector()/pci_irq_get_affinity() powerpc/powermac: Add additional missing lockdep_register_key() RDMA/core: Let ib_find_gid() continue search even after empty entry RDMA/cma: Let cma_resolve_ib_dev() continue search even after empty entry ASoC: rt5663: Handle device_property_read_u32_array error codes of: unittest: fix warning on PowerPC frame size warning of: unittest: 64 bit dma address test requires arch support clk: stm32: Fix ltdc's clock turn off by clk_disable_unused() after system enter shell mips: add SYS_HAS_CPU_MIPS64_R5 config for MIPS Release 5 support mips: fix Kconfig reference to PHYS_ADDR_T_64BIT dmaengine: pxa/mmp: stop referencing config->slave_id iommu/amd: Remove iommu_init_ga() iommu/amd: Restore GA log/tail pointer on host resume ASoC: Intel: catpt: Test dmaengine_submit() result before moving on iommu/iova: Fix race between FQ timeout and teardown scsi: block: pm: Always set request queue runtime active in blk_post_runtime_resume() phy: uniphier-usb3ss: fix unintended writing zeros to PHY register ASoC: mediatek: Check for error clk pointer ASoC: samsung: idma: Check of ioremap return value misc: lattice-ecp3-config: Fix task hung when firmware load failed counter: stm32-lptimer-cnt: remove iio counter abi arm64: tegra: Fix Tegra194 HDA {clock,reset}-names ordering arm64: tegra: Remove non existent Tegra194 reset mips: lantiq: add support for clk_set_parent() mips: bcm63xx: add support for clk_set_parent() powerpc/xive: Add missing null check after calling kmalloc ASoC: fsl_mqs: fix MODULE_ALIAS RDMA/cxgb4: Set queue pair state when being queried ASoC: fsl_asrc: refine the check of available clock divider clk: bm1880: remove kfrees on static allocations of: base: Fix phandle argument length mismatch error message ARM: dts: omap3-n900: Fix lp5523 for multi color Bluetooth: Fix debugfs entry leak in hci_register_dev() fs: dlm: filter user dlm messages for kernel locks drm/lima: fix warning when CONFIG_DEBUG_SG=y & CONFIG_DMA_API_DEBUG=y selftests/bpf: Fix bpf_object leak in skb_ctx selftest ar5523: Fix null-ptr-deref with unexpected WDCMSG_TARGET_START reply drm/bridge: dw-hdmi: handle ELD when DRM_BRIDGE_ATTACH_NO_CONNECTOR drm/nouveau/pmu/gm200-: avoid touching PMU outside of DEVINIT/PREOS/ACR media: atomisp: fix try_fmt logic media: atomisp: set per-device's default mode media: atomisp-ov2680: Fix ov2680_set_fmt() clobbering the exposure ARM: shmobile: rcar-gen2: Add missing of_node_put() batman-adv: allow netlink usage in unprivileged containers 
media: atomisp: handle errors at sh_css_create_isp_params() ath11k: Fix crash caused by uninitialized TX ring usb: gadget: f_fs: Use stream_open() for endpoint files drm: panel-orientation-quirks: Add quirk for the Lenovo Yoga Book X91F/L HID: apple: Do not reset quirks when the Fn key is not found media: b2c2: Add missing check in flexcop_pci_isr: EDAC/synopsys: Use the quirk for version instead of ddr version ARM: imx: rename DEBUG_IMX21_IMX27_UART to DEBUG_IMX27_UART drm/amd/display: check top_pipe_to_program pointer drm/amdgpu/display: set vblank_disable_immediate for DC soc: ti: pruss: fix referenced node in error message mlxsw: pci: Add shutdown method in PCI driver drm/bridge: megachips: Ensure both bridges are probed before registration tty: serial: imx: disable UCR4_OREN in .stop_rx() instead of .shutdown() gpiolib: acpi: Do not set the IRQ type if the IRQ is already in use HSI: core: Fix return freed object in hsi_new_client crypto: jitter - consider 32 LSB for APT mwifiex: Fix skb_over_panic in mwifiex_usb_recv() rsi: Fix use-after-free in rsi_rx_done_handler() rsi: Fix out-of-bounds read in rsi_read_pkt() ath11k: Avoid NULL ptr access during mgmt tx cleanup media: venus: avoid calling core_clk_setrate() concurrently during concurrent video sessions ACPI / x86: Drop PWM2 device on Lenovo Yoga Book from always present table ACPI: Change acpi_device_always_present() into acpi_device_override_status() ACPI / x86: Allow specifying acpi_device_override_status() quirks by path ACPI / x86: Add not-present quirk for the PCI0.SDHB.BRC1 device on the GPD win arm64: dts: ti: j7200-main: Fix 'dtbs_check' serdes_ln_ctrl node usb: uhci: add aspeed ast2600 uhci support floppy: Add max size check for user space request x86/mm: Flush global TLB when switching to trampoline page-table drm: rcar-du: Fix CRTC timings when CMM is used media: uvcvideo: Increase UVC_CTRL_CONTROL_TIMEOUT to 5 seconds. 
media: rcar-vin: Update format alignment constraints media: saa7146: hexium_orion: Fix a NULL pointer dereference in hexium_attach() media: m920x: don't use stack on USB reads thunderbolt: Runtime PM activate both ends of the device link iwlwifi: mvm: synchronize with FW after multicast commands iwlwifi: mvm: avoid clearing a just saved session protection id ath11k: avoid deadlock by change ieee80211_queue_work for regd_update_work ath10k: Fix tx hanging net-sysfs: update the queue counts in the unregistration path net: phy: prefer 1000baseT over 1000baseKX gpio: aspeed: Convert aspeed_gpio.lock to raw_spinlock selftests/ftrace: make kprobe profile testcase description unique ath11k: Avoid false DEADLOCK warning reported by lockdep x86/mce: Allow instrumentation during task work queueing x86/mce: Mark mce_panic() noinstr x86/mce: Mark mce_end() noinstr x86/mce: Mark mce_read_aux() noinstr net: bonding: debug: avoid printing debug logs when bond is not notifying peers bpf: Do not WARN in bpf_warn_invalid_xdp_action() HID: quirks: Allow inverting the absolute X/Y values media: igorplugusb: receiver overflow should be reported media: saa7146: hexium_gemini: Fix a NULL pointer dereference in hexium_attach() mmc: core: Fixup storing of OCR for MMC_QUIRK_NONSTD_SDIO audit: ensure userspace is penalized the same as the kernel when under pressure arm64: dts: ls1028a-qds: move rtc node to the correct i2c bus arm64: tegra: Adjust length of CCPLEX cluster MMIO region PM: runtime: Add safety net to supplier device release cpufreq: Fix initialization of min and max frequency QoS requests usb: hub: Add delay for SuperSpeed hub resume to let links transit to U0 ath9k: Fix out-of-bound memcpy in ath9k_hif_usb_rx_stream rtw88: 8822c: update rx settings to prevent potential hw deadlock PM: AVS: qcom-cpr: Use div64_ul instead of do_div iwlwifi: fix leaks/bad data after failed firmware load iwlwifi: remove module loading failure message iwlwifi: mvm: Fix calculation of frame length iwlwifi: pcie: make sure prph_info is set when treating wakeup IRQ um: registers: Rename function names to avoid conflicts and build problems ath11k: Fix napi related hang Bluetooth: vhci: Set HCI_QUIRK_VALID_LE_STATES xfrm: rate limit SA mapping change message to user space drm/etnaviv: consider completed fence seqno in hang check jffs2: GC deadlock reading a page that is used in jffs2_write_begin() ACPICA: actypes.h: Expand the ACPI_ACCESS_ definitions ACPICA: Utilities: Avoid deleting the same object twice in a row ACPICA: Executer: Fix the REFCLASS_REFOF case in acpi_ex_opcode_1A_0T_1R() ACPICA: Fix wrong interpretation of PCC address ACPICA: Hardware: Do not flush CPU cache when entering S4 and S5 drm/amdgpu: fixup bad vram size on gmc v8 amdgpu/pm: Make sysfs pm attributes as read-only for VFs ACPI: battery: Add the ThinkPad "Not Charging" quirk btrfs: remove BUG_ON() in find_parent_nodes() btrfs: remove BUG_ON(!eie) in find_parent_nodes net: mdio: Demote probed message to debug print mac80211: allow non-standard VHT MCS-10/11 dm btree: add a defensive bounds check to insert_at() dm space map common: add bounds check to sm_ll_lookup_bitmap() mlxsw: pci: Avoid flow control for EMAD packets net: phy: marvell: configure RGMII delays for 88E1118 net: gemini: allow any RGMII interface mode regulator: qcom_smd: Align probe function with rpmh-regulator serial: pl010: Drop CR register reset on set_termios serial: core: Keep mctrl register state and cached copy in sync random: do not throw away excess input to crng_fast_load parisc: 
Avoid calling faulthandler_disabled() twice x86/kbuild: Enable CONFIG_KALLSYMS_ALL=y in the defconfigs powerpc/6xx: add missing of_node_put powerpc/powernv: add missing of_node_put powerpc/cell: add missing of_node_put powerpc/btext: add missing of_node_put powerpc/watchdog: Fix missed watchdog reset due to memory ordering race i2c: i801: Don't silently correct invalid transfer size powerpc/smp: Move setup_profiling_timer() under CONFIG_PROFILING i2c: mpc: Correct I2C reset procedure clk: meson: gxbb: Fix the SDM_EN bit for MPLL0 on GXBB powerpc/powermac: Add missing lockdep_register_key() KVM: PPC: Book3S: Suppress warnings when allocating too big memory slots KVM: PPC: Book3S: Suppress failed alloc warning in H_COPY_TOFROM_GUEST w1: Misuse of get_user()/put_user() reported by sparse nvmem: core: set size for sysfs bin file dm: fix alloc_dax error handling in alloc_dev scsi: lpfc: Trigger SLI4 firmware dump before doing driver cleanup ALSA: seq: Set upper limit of processed events MIPS: Loongson64: Use three arguments for slti powerpc/40x: Map 32Mbytes of memory at startup selftests/powerpc/spectre_v2: Return skip code when miss_percent is high powerpc: handle kdump appropriately with crash_kexec_post_notifiers option powerpc/fadump: Fix inaccurate CPU state info in vmcore generated with panic udf: Fix error handling in udf_new_inode() MIPS: OCTEON: add put_device() after of_find_device_by_node() irqchip/gic-v4: Disable redistributors' view of the VPE table at boot time i2c: designware-pci: Fix to change data types of hcnt and lcnt parameters MIPS: Octeon: Fix build errors using clang scsi: sr: Don't use GFP_DMA ASoC: mediatek: mt8173: fix device_node leak ASoC: mediatek: mt8183: fix device_node leak phy: mediatek: Fix missing check in mtk_mipi_tx_probe rpmsg: core: Clean up resources on announce_create failure. 
crypto: omap-aes - Fix broken pm_runtime_and_get() usage crypto: stm32/crc32 - Fix kernel BUG triggered in probe() crypto: caam - replace this_cpu_ptr with raw_cpu_ptr ubifs: Error path in ubifs_remount_rw() seems to wrongly free write buffers tpm: fix NPE on probe for missing device spi: uniphier: Fix a bug that doesn't point to private data correctly xen/gntdev: fix unmap notification order fuse: Pass correct lend value to filemap_write_and_wait_range() serial: Fix incorrect rs485 polarity on uart open cputime, cpuacct: Include guest time in user time in cpuacct.stat tracing/kprobes: 'nmissed' not showed correctly for kretprobe iwlwifi: mvm: Increase the scan timeout guard to 30 seconds s390/mm: fix 2KB pgtable release race device property: Fix fwnode_graph_devcon_match() fwnode leak drm/etnaviv: limit submit sizes drm/nouveau/kms/nv04: use vzalloc for nv04_display drm/bridge: analogix_dp: Make PSR-exit block less parisc: Fix lpa and lpa_user defines powerpc/64s/radix: Fix huge vmap false positive PCI: xgene: Fix IB window setup PCI: pciehp: Use down_read/write_nested(reset_lock) to fix lockdep errors PCI: pci-bridge-emul: Make expansion ROM Base Address register read-only PCI: pci-bridge-emul: Properly mark reserved PCIe bits in PCI config space PCI: pci-bridge-emul: Fix definitions of reserved bits PCI: pci-bridge-emul: Correctly set PCIe capabilities PCI: pci-bridge-emul: Set PCI_STATUS_CAP_LIST for PCIe device xfrm: fix policy lookup for ipv6 gre packets btrfs: fix deadlock between quota enable and other quota operations btrfs: check the root node for uptodate before returning it btrfs: respect the max size in the header when activating swap file ext4: make sure to reset inode lockdep class when quota enabling fails ext4: make sure quota gets properly shutdown on error ext4: fix a possible ABBA deadlock due to busy PA ext4: initialize err_blk before calling __ext4_get_inode_loc ext4: fix fast commit may miss tracking range for FALLOC_FL_ZERO_RANGE ext4: set csum seed in tmp inode while migrating to extents ext4: Fix BUG_ON in ext4_bread when write quota data ext4: use ext4_ext_remove_space() for fast commit replay delete range ext4: fast commit may miss tracking unwritten range during ftruncate ext4: destroy ext4_fc_dentry_cachep kmemcache on module removal ext4: fix null-ptr-deref in '__ext4_journal_ensure_credits' ext4: don't use the orphan list when migrating an inode drm/radeon: fix error handling in radeon_driver_open_kms of: base: Improve argument length mismatch error firmware: Update Kconfig help text for Google firmware can: mcp251xfd: mcp251xfd_tef_obj_read(): fix typo in error message media: rcar-csi2: Optimize the selection PHTW register drm/vc4: hdmi: Make sure the device is powered with CEC media: correct MEDIA_TEST_SUPPORT help text Documentation: dmaengine: Correctly describe dmatest with channel unset Documentation: ACPI: Fix data node reference documentation Documentation: refer to config RANDOMIZE_BASE for kernel address-space randomization Documentation: fix firewire.rst ABI file path error Bluetooth: hci_sync: Fix not setting adv set duration scsi: core: Show SCMD_LAST in text form dmaengine: uniphier-xdmac: Fix type of address variables RDMA/hns: Modify the mapping attribute of doorbell to device RDMA/rxe: Fix a typo in opcode name dmaengine: stm32-mdma: fix STM32_MDMA_CTBR_TSEL_MASK Revert "net/mlx5: Add retry mechanism to the command entry index allocation" powerpc/cell: Fix clang -Wimplicit-fallthrough warning powerpc/fsl/dts: Enable WA for erratum A-009885 
on fman3l MDIO buses block: Fix fsync always failed if once failed bpftool: Remove inclusion of utilities.mak from Makefiles xdp: check prog type before updating BPF link perf evsel: Override attr->sample_period for non-libpfm4 events ipv4: update fib_info_cnt under spinlock protection ipv4: avoid quadratic behavior in netns dismantle net/fsl: xgmac_mdio: Add workaround for erratum A-009885 net/fsl: xgmac_mdio: Fix incorrect iounmap when removing module parisc: pdc_stable: Fix memory leak in pdcs_register_pathentries f2fs: compress: fix potential deadlock of compress file f2fs: fix to reserve space for IO align feature af_unix: annote lockless accesses to unix_tot_inflight & gc_in_progress clk: Emit a stern warning with writable debugfs enabled clk: si5341: Fix clock HW provider cleanup net/smc: Fix hung_task when removing SMC-R devices net: axienet: increase reset timeout net: axienet: Wait for PhyRstCmplt after core reset net: axienet: reset core on initialization prior to MDIO access net: axienet: add missing memory barriers net: axienet: limit minimum TX ring size net: axienet: Fix TX ring slot available check net: axienet: fix number of TX ring slots for available check net: axienet: fix for TX busy handling net: axienet: increase default TX ring size to 128 HID: vivaldi: fix handling devices not using numbered reports rtc: pxa: fix null pointer dereference vdpa/mlx5: Fix wrong configuration of virtio_version_1_0 virtio_ring: mark ring unused on error taskstats: Cleanup the use of task->exit_code inet: frags: annotate races around fqdir->dead and fqdir->high_thresh netns: add schedule point in ops_exit_list() xfrm: Don't accidentally set RTO_ONLINK in decode_session4() gre: Don't accidentally set RTO_ONLINK in gre_fill_metadata_dst() libcxgb: Don't accidentally set RTO_ONLINK in cxgb_find_route() perf script: Fix hex dump character output dmaengine: at_xdmac: Don't start transactions at tx_submit level dmaengine: at_xdmac: Start transfer for cyclic channels in issue_pending dmaengine: at_xdmac: Print debug message after realeasing the lock dmaengine: at_xdmac: Fix concurrency over xfers_list dmaengine: at_xdmac: Fix lld view setting dmaengine: at_xdmac: Fix at_xdmac_lld struct definition perf probe: Fix ppc64 'perf probe add events failed' case devlink: Remove misleading internal_flags from health reporter dump arm64: dts: qcom: msm8996: drop not documented adreno properties net: bonding: fix bond_xmit_broadcast return value error bug net_sched: restore "mpu xxx" handling bcmgenet: add WOL IRQ check net: ethernet: mtk_eth_soc: fix error checking in mtk_mac_config() net: sfp: fix high power modules without diagnostic monitoring net: mscc: ocelot: fix using match before it is set dt-bindings: display: meson-dw-hdmi: add missing sound-name-prefix property dt-bindings: display: meson-vpu: Add missing amlogic,canvas property dt-bindings: watchdog: Require samsung,syscon-phandle for Exynos7 scripts/dtc: dtx_diff: remove broken example from help text lib82596: Fix IRQ check in sni_82596_probe mm/hmm.c: allow VM_MIXEDMAP to work with hmm_range_fault lib/test_meminit: destroy cache in kmem_cache_alloc_bulk() test mtd: nand: bbt: Fix corner case in bad block table handling ath10k: Fix the MTU size on QCA9377 SDIO scripts: sphinx-pre-install: add required ctex dependency scripts: sphinx-pre-install: Fix ctex support on Debian Linux 5.10.94 Signed-off-by: Greg Kroah-Hartman <gregkh@google.com> Change-Id: I857f2417c899508815a1ba13d1285fd400a1f133 |
924886fa22
bpf: Disallow BPF_LOG_KERNEL log level for bpf(BPF_BTF_LOAD)
[ Upstream commit 866de407444398bc8140ea70de1dba5f91cc34ac ]
BPF_LOG_KERNEL is only used internally, so disallow bpf_btf_load()
from setting the log level to BPF_LOG_KERNEL. The same check has
already been done in bpf_check(), so factor out a helper that
validates the log attributes and use it in both places.
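For orientation, the shared helper ends up validating roughly this (a sketch from memory of the upstream patch, not a verbatim copy; BPF_LOG_KERNEL lies outside BPF_LOG_MASK, so the mask test rejects it):

```c
/* Shared log-attribute check usable from both bpf_check() and the
 * BTF load path. BPF_LOG_KERNEL is internal-only and is not part of
 * BPF_LOG_MASK, so the final mask test below rejects it. */
static bool bpf_verifier_log_attr_valid(const struct bpf_verifier_log *log)
{
	return log->len_total >= 128 && log->len_total <= UINT_MAX >> 2 &&
	       log->level && log->ubuf && !(log->level & ~BPF_LOG_MASK);
}
```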
Fixes:
8b444656fa
Merge 5.10.56 into android12-5.10-lts
Changes in 5.10.56 selftest: fix build error in tools/testing/selftests/vm/userfaultfd.c io_uring: fix null-ptr-deref in io_sq_offload_start() x86/asm: Ensure asm/proto.h can be included stand-alone pipe: make pipe writes always wake up readers btrfs: fix rw device counting in __btrfs_free_extra_devids btrfs: mark compressed range uptodate only if all bio succeed Revert "ACPI: resources: Add checks for ACPI IRQ override" ACPI: DPTF: Fix reading of attributes x86/kvm: fix vcpu-id indexed array sizes KVM: add missing compat KVM_CLEAR_DIRTY_LOG ocfs2: fix zero out valid data ocfs2: issue zeroout to EOF blocks can: j1939: j1939_xtp_rx_dat_one(): fix rxtimer value between consecutive TP.DT to 750ms can: raw: raw_setsockopt(): fix raw_rcv panic for sock UAF can: peak_usb: pcan_usb_handle_bus_evt(): fix reading rxerr/txerr values can: mcba_usb_start(): add missing urb->transfer_dma initialization can: usb_8dev: fix memory leak can: ems_usb: fix memory leak can: esd_usb2: fix memory leak alpha: register early reserved memory in memblock HID: wacom: Re-enable touch by default for Cintiq 24HDT / 27QHDT NIU: fix incorrect error return, missed in previous revert drm/amd/display: ensure dentist display clock update finished in DCN20 drm/amdgpu: Avoid printing of stack contents on firmware load error drm/amdgpu: Fix resource leak on probe error path blk-iocost: fix operation ordering in iocg_wake_fn() nfc: nfcsim: fix use after free during module unload cfg80211: Fix possible memory leak in function cfg80211_bss_update RDMA/bnxt_re: Fix stats counters bpf: Fix OOB read when printing XDP link fdinfo mac80211: fix enabling 4-address mode on a sta vif after assoc netfilter: conntrack: adjust stop timestamp to real expiry value netfilter: nft_nat: allow to specify layer 4 protocol NAT only i40e: Fix logic of disabling queues i40e: Fix firmware LLDP agent related warning i40e: Fix queue-to-TC mapping on Tx i40e: Fix log TC creation failure when max num of queues is exceeded tipc: fix implicit-connect for SYN+ tipc: fix sleeping in tipc accept routine net: Set true network header for ECN decapsulation net: qrtr: fix memory leaks ionic: remove intr coalesce update from napi ionic: fix up dim accounting for tx and rx ionic: count csum_none when offload enabled tipc: do not write skb_shinfo frags when doing decrytion octeontx2-pf: Fix interface down flag on error mlx4: Fix missing error code in mlx4_load_one() KVM: x86: Check the right feature bit for MSR_KVM_ASYNC_PF_ACK access net: llc: fix skb_over_panic drm/msm/dpu: Fix sm8250_mdp register length drm/msm/dp: Initialize the INTF_CONFIG register skmsg: Make sk_psock_destroy() static net/mlx5: Fix flow table chaining net/mlx5e: Fix nullptr in mlx5e_hairpin_get_mdev() sctp: fix return value check in __sctp_rcv_asconf_lookup tulip: windbond-840: Fix missing pci_disable_device() in probe and remove sis900: Fix missing pci_disable_device() in probe and remove can: hi311x: fix a signedness bug in hi3110_cmd() bpf: Introduce BPF nospec instruction for mitigating Spectre v4 bpf: Fix leakage due to insufficient speculative store bypass mitigation bpf: Remove superfluous aux sanitation on subprog rejection bpf: verifier: Allocate idmap scratch in verifier env bpf: Fix pointer arithmetic mask tightening under state pruning SMB3: fix readpage for large swap cache powerpc/pseries: Fix regression while building external modules Revert "perf map: Fix dso->nsinfo refcounting" i40e: Add additional info to PHY type error can: j1939: j1939_session_deactivate(): clarify lifetime 
of session object Linux 5.10.56 Signed-off-by: Greg Kroah-Hartman <gregkh@google.com> Change-Id: Ib3c9244afb7ee5d6ee8d3235efe8956898f486c4 |
be561c0154
bpf: Fix pointer arithmetic mask tightening under state pruning
commit e042aa532c84d18ff13291d00620502ce7a38dda upstream.

In 7fedb63a8307 ("bpf: Tighten speculative pointer arithmetic mask") we narrowed the offset mask for unprivileged pointer arithmetic in order to mitigate a corner case where in the speculative domain it is possible to advance, for example, the map value pointer by up to value_size-1 out-of-bounds in order to leak kernel memory via side-channel to user space.

The verifier's state pruning for scalars leaves one corner case open where in the first verification path R_x holds an unknown scalar with an aux->alu_limit of e.g. 7, and in a second verification path that same register R_x, here denoted as R_x', holds an unknown scalar which has tighter bounds and would thus satisfy range_within(R_x, R_x') as well as tnum_in(R_x, R_x') for state pruning, yielding an aux->alu_limit of 3. Given the second path fits the register constraints for pruning, the final generated mask from aux->alu_limit will remain at 7. While technically not wrong for the non-speculative domain, it would however be possible to craft similar cases where the mask would be too wide as in 7fedb63a8307.

One way to fix it is to detect the presence of unknown scalar map pointer arithmetic and force a deeper search on unknown scalars to ensure that we do not run into a masking mismatch.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
ffb9d5c48b
bpf: verifier: Allocate idmap scratch in verifier env
commit c9e73e3d2b1eb1ea7ff068e05007eec3bd8ef1c9 upstream. func_states_equal makes a very short lived allocation for idmap, probably because it's too large to fit on the stack. However the function is called quite often, leading to a lot of alloc / free churn. Replace the temporary allocation with dedicated scratch space in struct bpf_verifier_env. Signed-off-by: Lorenz Bauer <lmb@cloudflare.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Edward Cree <ecree.xilinx@gmail.com> Link: https://lore.kernel.org/bpf/20210429134656.122225-4-lmb@cloudflare.com Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
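The shape of the change, as a sketch (field and macro names follow the upstream patch as best recalled; treat the details as assumptions):

```c
/* One idmap entry pairs an ID from the old (cached) state with the
 * corresponding ID in the current state during a pruning comparison. */
struct bpf_id_pair {
	u32 old;
	u32 cur;
};

/* Enough entries for every register plus every 8-byte stack slot. */
#define BPF_ID_MAP_SIZE (MAX_BPF_REG + MAX_BPF_STACK / BPF_REG_SIZE)

struct bpf_verifier_env {
	/* ... existing fields ... */
	/* Scratch space reused by states_equal()/check_ids() on every
	 * pruning comparison, replacing a kzalloc/kfree per call. */
	struct bpf_id_pair idmap_scratch[BPF_ID_MAP_SIZE];
};
```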
0e9280654a
bpf: Fix leakage due to insufficient speculative store bypass mitigation
[ Upstream commit 2039f26f3aca5b0e419b98f65dd36481337b86ee ] Spectre v4 gadgets make use of memory disambiguation, which is a set of techniques that execute memory access instructions, that is, loads and stores, out of program order; Intel's optimization manual, section 2.4.4.5: A load instruction micro-op may depend on a preceding store. Many microarchitectures block loads until all preceding store addresses are known. The memory disambiguator predicts which loads will not depend on any previous stores. When the disambiguator predicts that a load does not have such a dependency, the load takes its data from the L1 data cache. Eventually, the prediction is verified. If an actual conflict is detected, the load and all succeeding instructions are re-executed. |
37485a3025
ANDROID: add kabi padding for structures for the android12 release
There are a lot of different structures that need to have a "frozen" abi for the next 5+ years. Add padding to a lot of them in order to be able to handle any future changes that might be needed due to LTS and security fixes that might come up. It's a best guess, based on what has happened in the past from the 5.4.0..5.4.129 release (1 1/2 years). Yes, past changes do not mean that future changes will also be needed in the same area, but that is a hint that those areas are both well maintained and looked after, and there have been previous problems found in them. Also the list of structures that are being required based on OEM usage in the android/ symbol lists were consulted as that's a larger list than what has been changed in the past. Hopefully we caught everything we need to worry about, only time will tell... Bug: 151154716 Signed-off-by: Greg Kroah-Hartman <gregkh@google.com> Change-Id: I880bbcda0628a7459988eeb49d18655522697664 |
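For readers unfamiliar with the mechanism, this is roughly what the padding looks like at a use site (a sketch using the Android common kernel's android_kabi.h macros; the struct shown is illustrative, not one of the actual padded structures):

```c
#include <linux/android_kabi.h>

struct some_frozen_abi_struct {
	int existing_field;
	void (*existing_callback)(void);

	/* Unused u64 slots baked into the frozen layout now, so a later
	 * LTS or security backport can claim one via ANDROID_KABI_USE()
	 * without changing the structure's size or member offsets. */
	ANDROID_KABI_RESERVE(1);
	ANDROID_KABI_RESERVE(2);
};
```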
2fa15d61e4
bpf: Fix leakage of uninitialized bpf stack under speculation
commit 801c6058d14a82179a7ee17a4b532cac6fad067f upstream. The current implemented mechanisms to mitigate data disclosure under speculation mainly address stack and map value oob access from the speculative domain. However, Piotr discovered that uninitialized BPF stack is not protected yet, and thus old data from the kernel stack, potentially including addresses of kernel structures, could still be extracted from that 512 bytes large window. The BPF stack is special compared to map values since it's not zero initialized for every program invocation, whereas map values /are/ zero initialized upon their initial allocation and thus cannot leak any prior data in either domain. In the non-speculative domain, the verifier ensures that every stack slot read must have a prior stack slot write by the BPF program to avoid such data leaking issue. However, this is not enough: for example, when the pointer arithmetic operation moves the stack pointer from the last valid stack offset to the first valid offset, the sanitation logic allows for any intermediate offsets during speculative execution, which could then be used to extract any restricted stack content via side-channel. Given for unprivileged stack pointer arithmetic the use of unknown but bounded scalars is generally forbidden, we can simply turn the register-based arithmetic operation into an immediate-based arithmetic operation without the need for masking. This also gives the benefit of reducing the needed instructions for the operation. Given after the work in 7fedb63a8307 ("bpf: Tighten speculative pointer arithmetic mask"), the aux->alu_limit already holds the final immediate value for the offset register with the known scalar. Thus, a simple mov of the immediate to AX register with using AX as the source for the original instruction is sufficient and possible now in this case. Reported-by: Piotr Krysiuk <piotras@gmail.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Tested-by: Piotr Krysiuk <piotras@gmail.com> Reviewed-by: Piotr Krysiuk <piotras@gmail.com> Reviewed-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
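The rewrite described above, sketched as patched instructions (the insn-builder macros are the kernel's real ones from linux/filter.h; the surrounding fixup context is paraphrased, not the literal diff):

```c
/* Within the verifier's insn rewrite pass (sketch): for the known-
 * scalar case, aux->alu_limit already holds the final immediate. */
struct bpf_insn patch[] = {
	/* Load the known offset as an immediate into scratch reg AX. */
	BPF_MOV32_IMM(BPF_REG_AX, aux->alu_limit),
	/* Redo the pointer arithmetic with AX as the source, so no
	 * attacker-influenced register feeds the speculative add. */
	BPF_ALU64_REG(BPF_ADD, insn->dst_reg, BPF_REG_AX),
};
```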
f3c4b01689
bpf: Allow variable-offset stack access
[ Upstream commit 01f810ace9ed37255f27608a0864abebccf0aab3 ]

Before this patch, variable offset access to the stack was disallowed for regular instructions, but was allowed for "indirect" accesses (i.e. helpers). This patch removes the restriction, allowing reading and writing to the stack through stack pointers with variable offsets. This makes stack-allocated buffers more usable in programs, and brings stack pointers closer to other types of pointers.

The motivation is being able to use stack-allocated buffers for data manipulation. When the stack size limit is sufficient, allocating buffers on the stack is simpler than per-cpu arrays, or other alternatives.

In unprivileged programs, variable-offset reads and writes are disallowed (they were already disallowed for the indirect access case) because the speculative execution checking code doesn't support them. Additionally, when writing through a variable-offset stack pointer, if any pointers are in the accessible range, there is a possibility of later leaking pointers because the write cannot be tracked precisely.

Writes with variable offset mark the whole range as initialized, even though we don't know which stack slots are actually written. This is in order to not reject future reads to these slots. Note that this doesn't affect writes done through helpers; like before, helpers need the whole stack range to be initialized to begin with. All the stack slots in range are considered scalars after the write; variable-offset register spills are not tracked.

For reads, all the stack slots in the variable range need to be initialized (but see above about what writes do), otherwise the read is rejected. All registers spilled in stack slots that might be read are marked as having been read, however reads through such pointers don't do register filling; the target register will always be either a scalar or a constant zero.

Signed-off-by: Andrei Matei <andreimatei1@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/20210207011027.676572-2-andreimatei1@gmail.com Signed-off-by: Sasha Levin <sashal@kernel.org>
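As a user-visible illustration (a hedged sketch, not from the patch itself; recall that unprivileged loads still reject this), a privileged program can now index a stack buffer with a bounded variable offset directly, without going through a helper:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("socket")
int var_off_stack(struct __sk_buff *skb)
{
	char buf[64] = {};        /* stack-allocated, fully initialized */
	__u32 i = skb->len & 63;  /* bounded, but variable at runtime   */

	buf[i] = 1;               /* direct variable-offset write */
	return buf[i & 31];       /* direct variable-offset read  */
}

char _license[] SEC("license") = "GPL";
```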
4976b718c3
bpf: Introduce pseudo_btf_id
Pseudo_btf_id is a type of ld_imm insn that associates a btf_id to a ksym so that further dereferences on the ksym can use the BTF info to validate accesses. Internally, when seeing a pseudo_btf_id ld insn, the verifier reads the btf_id stored in the insn[0]'s imm field and marks the dst_reg as PTR_TO_BTF_ID. The btf_id points to a VAR_KIND, which is encoded in btf_vmlinux by pahole. If the VAR is not of a struct type, the dst reg will be marked as PTR_TO_MEM instead of PTR_TO_BTF_ID and the mem_size is resolved to the size of the VAR's type.

From the VAR btf_id, the verifier can also read the address of the ksym's corresponding kernel var from kallsyms and use that to fill dst_reg. Therefore, the proper functionality of pseudo_btf_id depends on (1) kallsyms and (2) the encoding of kernel global VARs in pahole, which should be available since pahole v1.18.

Signed-off-by: Hao Luo <haoluo@google.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20200929235049.2533242-2-haoluo@google.com
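At the source level, this is what the new instruction enables via libbpf's __ksym attribute (a sketch; 'some_kernel_var' is a hypothetical stand-in for any global kernel variable with a VAR entry in vmlinux BTF):

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* libbpf emits a pseudo_btf_id ld_imm64 for this extern; the verifier
 * resolves the address via kallsyms and types it via vmlinux BTF.
 * 'some_kernel_var' is hypothetical -- substitute a real symbol. */
extern const int some_kernel_var __ksym;

SEC("raw_tp/sys_enter")
int ksym_example(const void *ctx)
{
	/* A non-struct VAR: the loaded pointer is PTR_TO_MEM sized to
	 * the variable's type, so this read is verifier-checked. */
	bpf_printk("some_kernel_var=%d", some_kernel_var);
	return 0;
}

char _license[] SEC("license") = "GPL";
```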
f7b12b6fea
bpf: verifier: refactor check_attach_btf_id()
The check_attach_btf_id() function really does three things: 1. It performs a bunch of checks on the program to ensure that the attachment is valid. 2. It stores a bunch of state about the attachment being requested in the verifier environment and struct bpf_prog objects. 3. It allocates a trampoline for the attachment. This patch splits out (1.) and (3.) into separate functions which will perform the checks, but return the computed values instead of directly modifying the environment. This is done in preparation for reusing the checks when the actual attachment is happening, which will allow tracing programs to have multiple (compatible) attachments. This also fixes a bug where a bunch of checks were skipped if a trampoline already existed for the tracing target. Fixes: |
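The split of (1.) and (3.) roughly yields helpers of this shape (signatures paraphrased from memory of the patch; treat them as a sketch rather than the exact upstream API):

```c
/* (1.) Pure checks: validate the requested attachment and return the
 * computed target info, instead of writing into env/prog state. */
int bpf_check_attach_target(struct bpf_verifier_log *log,
			    const struct bpf_prog *prog,
			    const struct bpf_prog *tgt_prog,
			    u32 btf_id,
			    struct bpf_attach_target_info *tgt_info);

/* (3.) Trampoline allocation happens separately, keyed on the target,
 * so the checks can be rerun at attach time for multiple attachments. */
struct bpf_trampoline *bpf_trampoline_get(u64 key,
					  struct bpf_attach_target_info *tgt_info);
```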
efc68158c4
bpf: change logging calls from verbose() to bpf_log() and use log pointer
In preparation for moving code around, change a bunch of references to env->log (and the verbose() logging helper) to use bpf_log() and a direct pointer to struct bpf_verifier_log. While we're touching the function signature, mark the 'prog' argument to bpf_check_type_match() as const. Also enhance the bpf_verifier_log_needed() check to handle NULL pointers for the log struct so we can re-use the code with logging disabled. Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
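A sketch of the enhanced check (mirroring the in-tree helper; the NULL tolerance is the new part added here):

```c
static inline bool bpf_verifier_log_needed(const struct bpf_verifier_log *log)
{
	/* Accept a NULL log so shared code paths can run with logging
	 * disabled; otherwise the log either targets a user buffer or
	 * is the kernel-internal BPF_LOG_KERNEL sink. */
	return log &&
	       ((log->level && log->ubuf && !bpf_verifier_log_full(log)) ||
		log->level == BPF_LOG_KERNEL);
}
```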
09b28d76ea
bpf: Add abnormal return checks.
LD_[ABS|IND] instructions may return from the function early. bpf_tail_call pseudo instruction is either fallthrough or return. Allow them in the subprograms only when subprograms are BTF annotated and have scalar return types. Allow ld_abs and tail_call in the main program even if it calls into subprograms. In the past that was not ok to do for ld_abs, since it was JITed with special exit sequence. Since bpf_gen_ld_abs() was introduced the ld_abs looks like normal exit insn from JIT point of view, so it's safe to allow them in the main program. Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
ebf7d1f508
bpf, x64: rework pro/epilogue and tailcall handling in JIT
This commit serves two things: 1) it optimizes BPF prologue/epilogue generation 2) it makes possible to have tailcalls within BPF subprogram Both points are related to each other since without 1), 2) could not be achieved. In [1], Alexei says: "The prologue will look like: nop5 xor eax,eax // two new bytes if bpf_tail_call() is used in this // function push rbp mov rbp, rsp sub rsp, rounded_stack_depth push rax // zero init tail_call counter variable number of push rbx,r13,r14,r15 Then bpf_tail_call will pop variable number rbx,.. and final 'pop rax' Then 'add rsp, size_of_current_stack_frame' jmp to next function and skip over 'nop5; xor eax,eax; push rpb; mov rbp, rsp' This way new function will set its own stack size and will init tail call counter with whatever value the parent had. If next function doesn't use bpf_tail_call it won't have 'xor eax,eax'. Instead it would need to have 'nop2' in there." Implement that suggestion. Since the layout of stack is changed, tail call counter handling can not rely anymore on popping it to rbx just like it have been handled for constant prologue case and later overwrite of rbx with actual value of rbx pushed to stack. Therefore, let's use one of the register (%rcx) that is considered to be volatile/caller-saved and pop the value of tail call counter in there in the epilogue. Drop the BUILD_BUG_ON in emit_prologue and in emit_bpf_tail_call_indirect where instruction layout is not constant anymore. Introduce new poke target, 'tailcall_bypass' to poke descriptor that is dedicated for skipping the register pops and stack unwind that are generated right before the actual jump to target program. For case when the target program is not present, BPF program will skip the pop instructions and nop5 dedicated for jmpq $target. An example of such state when only R6 of callee saved registers is used by program: ffffffffc0513aa1: e9 0e 00 00 00 jmpq 0xffffffffc0513ab4 ffffffffc0513aa6: 5b pop %rbx ffffffffc0513aa7: 58 pop %rax ffffffffc0513aa8: 48 81 c4 00 00 00 00 add $0x0,%rsp ffffffffc0513aaf: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1) ffffffffc0513ab4: 48 89 df mov %rbx,%rdi When target program is inserted, the jump that was there to skip pops/nop5 will become the nop5, so CPU will go over pops and do the actual tailcall. One might ask why there simply can not be pushes after the nop5? In the following example snippet: ffffffffc037030c: 48 89 fb mov %rdi,%rbx (...) ffffffffc0370332: 5b pop %rbx ffffffffc0370333: 58 pop %rax ffffffffc0370334: 48 81 c4 00 00 00 00 add $0x0,%rsp ffffffffc037033b: 0f 1f 44 00 00 nopl 0x0(%rax,%rax,1) ffffffffc0370340: 48 81 ec 00 00 00 00 sub $0x0,%rsp ffffffffc0370347: 50 push %rax ffffffffc0370348: 53 push %rbx ffffffffc0370349: 48 89 df mov %rbx,%rdi ffffffffc037034c: e8 f7 21 00 00 callq 0xffffffffc0372548 There is the bpf2bpf call (at ffffffffc037034c) right after the tailcall and jump target is not present. ctx is in %rbx register and BPF subprogram that we will call into on ffffffffc037034c is relying on it, e.g. it will pick ctx from there. Such code layout is therefore broken as we would overwrite the content of %rbx with the value that was pushed on the prologue. That is the reason for the 'bypass' approach. Special care needs to be taken during the install/update/remove of tailcall target. In case when target program is not present, the CPU must not execute the pop instructions that precede the tailcall. 
To address that, the following states can be defined: A nop, unwind, nop B nop, unwind, tail C skip, unwind, nop D skip, unwind, tail A is forbidden (lead to incorrectness). The state transitions between tailcall install/update/remove will work as follows: First install tail call f: C->D->B(f) * poke the tailcall, after that get rid of the skip Update tail call f to f': B(f)->B(f') * poke the tailcall (poke->tailcall_target) and do NOT touch the poke->tailcall_bypass Remove tail call: B(f')->C(f') * poke->tailcall_bypass is poked back to jump, then we wait the RCU grace period so that other programs will finish its execution and after that we are safe to remove the poke->tailcall_target Install new tail call (f''): C(f')->D(f'')->B(f''). * same as first step This way CPU can never be exposed to "unwind, tail" state. Last but not least, when tailcalls get mixed with bpf2bpf calls, it would be possible to encounter the endless loop due to clearing the tailcall counter if for example we would use the tailcall3-like from BPF selftests program that would be subprogram-based, meaning the tailcall would be present within the BPF subprogram. This test, broken down to particular steps, would do: entry -> set tailcall counter to 0, bump it by 1, tailcall to func0 func0 -> call subprog_tail (we are NOT skipping the first 11 bytes of prologue and this subprogram has a tailcall, therefore we clear the counter...) subprog -> do the same thing as entry and then loop forever. To address this, the idea is to go through the call chain of bpf2bpf progs and look for a tailcall presence throughout whole chain. If we saw a single tail call then each node in this call chain needs to be marked as a subprog that can reach the tailcall. We would later feed the JIT with this info and: - set eax to 0 only when tailcall is reachable and this is the entry prog - if tailcall is reachable but there's no tailcall in insns of currently JITed prog then push rax anyway, so that it will be possible to propagate further down the call chain - finally if tailcall is reachable, then we need to precede the 'call' insn with mov rax, [rbp - (stack_depth + 8)] Tail call related cases from test_verifier kselftest are also working fine. Sample BPF programs that utilize tail calls (sockex3, tracex5) work properly as well. [1]: https://lore.kernel.org/bpf/20200517043227.2gpq22ifoq37ogst@ast-mbp.dhcp.thefacebook.com/ Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
7f6e4312e1
bpf: Limit caller's stack depth 256 for subprogs with tailcalls
Protect against potential stack overflow that might happen when bpf2bpf calls get combined with tailcalls. Limit the caller's stack depth for such case down to 256 so that the worst case scenario would result in 8k stack size (32 which is tailcall limit * 256 = 8k). Suggested-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
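In the verifier's stack-depth walk this comes down to one extra guard, roughly as follows (constants are from the message; the exact placement inside check_max_stack_depth() is paraphrased):

```c
/* Inside the check_max_stack_depth() frame walk: once any subprogram
 * in the call chain can reach a tail call, cap each frame at 256
 * bytes, so 32 tail-call iterations * 256 bytes = 8K worst case. */
if (idx && subprog[idx].has_tail_call && depth >= 256) {
	verbose(env,
		"tail_calls are not allowed when call stack of previous frames is %d bytes. Too large\n",
		depth);
	return -EACCES;
}
```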
41c48f3a98
bpf: Support access to bpf map fields
There are multiple use-cases when it's convenient to have access to bpf map fields, both `struct bpf_map` and map type specific struct-s such as `struct bpf_array`, `struct bpf_htab`, etc. For example while working with sock arrays it can be necessary to calculate the key based on map->max_entries (some_hash % max_entries). Currently this is solved by communicating max_entries via "out-of-band" channel, e.g. via additional map with known key to get info about target map. That works, but is not very convenient and error-prone while working with many maps. In other cases necessary data is dynamic (i.e. unknown at loading time) and it's impossible to get it at all. For example while working with a hash table it can be convenient to know how much capacity is already used (bpf_htab.count.counter for BPF_F_NO_PREALLOC case). At the same time kernel knows this info and can provide it to bpf program. Fill this gap by adding support to access bpf map fields from bpf program for both `struct bpf_map` and map type specific fields. Support is implemented via btf_struct_access() so that a user can define their own `struct bpf_map` or map type specific struct in their program with only necessary fields and preserve_access_index attribute, cast a map to this struct and use a field. For example: struct bpf_map { __u32 max_entries; } __attribute__((preserve_access_index)); struct bpf_array { struct bpf_map map; __u32 elem_size; } __attribute__((preserve_access_index)); struct { __uint(type, BPF_MAP_TYPE_ARRAY); __uint(max_entries, 4); __type(key, __u32); __type(value, __u32); } m_array SEC(".maps"); SEC("cgroup_skb/egress") int cg_skb(void *ctx) { struct bpf_array *array = (struct bpf_array *)&m_array; struct bpf_map *map = (struct bpf_map *)&m_array; /* .. use map->max_entries or array->map.max_entries .. */ } Similarly to other btf_struct_access() use-cases (e.g. struct tcp_sock in net/ipv4/bpf_tcp_ca.c) the patch allows access to any fields of corresponding struct. Only reading from map fields is supported. For btf_struct_access() to work there should be a way to know btf id of a struct that corresponds to a map type. To get btf id there should be a way to get a stringified name of map-specific struct, such as "bpf_array", "bpf_htab", etc for a map type. Two new fields are added to `struct bpf_map_ops` to handle it: * .map_btf_name keeps a btf name of a struct returned by map_alloc(); * .map_btf_id is used to cache btf id of that struct. To make btf ids calculation cheaper they're calculated once while preparing btf_vmlinux and cached same way as it's done for btf_id field of `struct bpf_func_proto` While calculating btf ids, struct names are NOT checked for collision. Collisions will be checked as a part of the work to prepare btf ids used in verifier in compile time that should land soon. The only known collision for `struct bpf_htab` (kernel/bpf/hashtab.c vs net/core/sock_map.c) was fixed earlier. Both new fields .map_btf_name and .map_btf_id must be set for a map type for the feature to work. If neither is set for a map type, verifier will return ENOTSUPP on a try to access map_ptr of corresponding type. If just one of them set, it's verifier misconfiguration. Only `struct bpf_array` for BPF_MAP_TYPE_ARRAY and `struct bpf_htab` for BPF_MAP_TYPE_HASH are supported by this patch. Other map types will be supported separately. The feature is available only for CONFIG_DEBUG_INFO_BTF=y and gated by perfmon_capable() so that unpriv programs won't have access to bpf map fields. 
Signed-off-by: Andrey Ignatov <rdna@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: John Fastabend <john.fastabend@gmail.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/6479686a0cd1e9067993df57b4c3eef0e276fec9.1592600985.git.rdna@fb.com |
457f44363a |
bpf: Implement BPF ring buffer and verifier support for it
This commit adds a new MPSC ring buffer implementation into the BPF ecosystem, which allows multiple CPUs to submit data to a single shared ring buffer. On the consumption side, only a single consumer is assumed.

Motivation
----------
There are two distinctive motivators for this work, neither of which is satisfied by the existing perf buffer, and which prompted the creation of a new ring buffer implementation:
- more efficient memory utilization by sharing the ring buffer across CPUs;
- preserving ordering of events that happen sequentially in time, even across multiple CPUs (e.g., fork/exec/exit events for a task).

These two problems are independent, but perf buffer fails to satisfy both. Both are a result of the choice to have per-CPU perf ring buffers. Both can also be solved by having an MPSC implementation of the ring buffer. The ordering problem could technically be solved for perf buffer with some in-kernel counting, but given that the first one requires an MPSC buffer, the same solution would solve the second problem automatically.

Semantics and APIs
------------------
A single ring buffer is presented to BPF programs as an instance of a BPF map of type BPF_MAP_TYPE_RINGBUF. Two other alternatives were considered, but ultimately rejected.

One way would be to, similar to BPF_MAP_TYPE_PERF_EVENT_ARRAY, make BPF_MAP_TYPE_RINGBUF represent an array of ring buffers, but without enforcing the "same CPU only" rule. This would be a more familiar interface compatible with existing perf buffer use in BPF, but would fail if an application needed more advanced logic to look up a ring buffer by an arbitrary key. HASH_OF_MAPS addresses this with the current approach. Additionally, given the performance of BPF ringbuf, many use cases would just opt into a simple single ring buffer shared among all CPUs, for which the current approach would be an overkill.

Another approach could introduce a new concept, alongside BPF map, to represent a generic "container" object, which doesn't necessarily have a key/value interface with lookup/update/delete operations. This approach would add a lot of extra infrastructure that has to be built for observability and verifier support. It would also add another concept that BPF developers would have to familiarize themselves with, new syntax in libbpf, etc. But it would really provide no additional benefits over the approach of using a map. BPF_MAP_TYPE_RINGBUF doesn't support lookup/update/delete operations, but neither do a few other map types (e.g., queue and stack; array doesn't support delete, etc.).

The approach chosen has the advantage of re-using existing BPF map infrastructure (introspection APIs in kernel, libbpf support, etc.), being a familiar concept (no need to teach users a new type of object in a BPF program), and utilizing existing tooling (bpftool). For the common scenario of using a single ring buffer for all CPUs, it's as simple and straightforward as it would be with a dedicated "container" object. On the other hand, by being a map, it can be combined with ARRAY_OF_MAPS and HASH_OF_MAPS map-in-maps to implement a wide variety of topologies, from one ring buffer for each CPU (e.g., as a replacement for perf buffer use cases), to a complicated application hashing/sharding of ring buffers (e.g., a small pool of ring buffers with the hashed task tgid as a lookup key, preserving order while reducing contention).

Key and value sizes are enforced to be zero. max_entries is used to specify the size of the ring buffer and has to be a power of 2.
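A minimal BTF-style map definition, as libbpf conventions would express it (a sketch; the name rb is arbitrary):

  struct {
          __uint(type, BPF_MAP_TYPE_RINGBUF);
          __uint(max_entries, 256 * 1024);  /* buffer size in bytes: power of 2, page-aligned */
  } rb SEC(".maps");

  /* Note: no __type(key, ...) or __type(value, ...) -- key and value sizes must be zero. */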
There are a bunch of similarities between perf buffer (BPF_MAP_TYPE_PERF_EVENT_ARRAY) and the new BPF ring buffer semantics:
- variable-length records;
- if there is no more space left in the ring buffer, reservation fails, no blocking;
- memory-mappable data area for user-space applications, for ease of consumption and high performance;
- epoll notifications for new incoming data;
- but still the ability to do busy polling for new data to achieve the lowest latency, if necessary.

BPF ringbuf provides two sets of APIs to BPF programs:
- bpf_ringbuf_output() allows to *copy* data from one place to the ring buffer, similarly to bpf_perf_event_output();
- the bpf_ringbuf_reserve()/bpf_ringbuf_commit()/bpf_ringbuf_discard() APIs split the whole process into two steps. First, a fixed amount of space is reserved. If successful, a pointer to data inside the ring buffer data area is returned, which BPF programs can use similarly to data inside array/hash maps. Once ready, this piece of memory is either committed or discarded. Discard is similar to commit, but makes the consumer ignore the record.

bpf_ringbuf_output() has the disadvantage of incurring an extra memory copy, because the record has to be prepared in some other place first. But it allows submitting records of a length that's not known to the verifier beforehand. It also closely matches bpf_perf_event_output(), so it will simplify migration significantly.

bpf_ringbuf_reserve() avoids the extra copy by providing a pointer directly into ring buffer memory. In a lot of cases records are larger than BPF stack space allows, so many programs have to use an extra per-CPU array as a temporary heap for preparing a sample. bpf_ringbuf_reserve() avoids this need completely. But in exchange, it only allows a known constant size of memory to be reserved, so that the verifier can verify that the BPF program can't access memory outside its reserved record space. bpf_ringbuf_output(), while slightly slower due to the extra memory copy, covers some use cases that are not suitable for bpf_ringbuf_reserve().

The difference between commit and discard is very small. Discard just marks a record as discarded, and such records are supposed to be ignored by consumer code. Discard is useful for some advanced use-cases, such as ensuring all-or-nothing multi-record submission, or emulating temporary malloc()/free() within a single BPF program invocation.

Each reserved record is tracked by the verifier through existing reference-tracking logic, similar to socket ref-tracking. It is thus impossible to reserve a record but forget to submit (or discard) it.

The bpf_ringbuf_query() helper allows querying various properties of the ring buffer. Currently 4 are supported:
- BPF_RB_AVAIL_DATA returns the amount of unconsumed data in the ring buffer;
- BPF_RB_RING_SIZE returns the size of the ring buffer;
- BPF_RB_CONS_POS/BPF_RB_PROD_POS return the current logical position of the consumer/producer, respectively.

Returned values are momentary snapshots of the ring buffer state and could be off by the time the helper returns, so this should be used only for debugging/reporting reasons or for implementing various heuristics that take into account the highly-changeable nature of some of those characteristics. One such heuristic might involve more fine-grained control over poll/epoll notifications about new data availability in the ring buffer. Together with the BPF_RB_NO_WAKEUP/BPF_RB_FORCE_WAKEUP flags for the output/commit/discard helpers, it allows the BPF program a high degree of control and, e.g., more efficient batched notifications.
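A sketch of the reserve/commit path in BPF C (the attach point and event layout are hypothetical, and the usual includes are assumed; in the final uapi the commit step is spelled bpf_ringbuf_submit()):

  struct event {
          int pid;
          char comm[16];
  };

  SEC("kprobe/do_sys_open")                  /* hypothetical attach point */
  int log_open(void *ctx)
  {
          struct event *e;

          e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);
          if (!e)
                  return 0;                  /* buffer full: reservation fails, no blocking */

          e->pid = bpf_get_current_pid_tgid() >> 32;
          bpf_get_current_comm(e->comm, sizeof(e->comm));

          /* flags: 0 = self-paced notification; BPF_RB_NO_WAKEUP or
           * BPF_RB_FORCE_WAKEUP for manual control */
          bpf_ringbuf_submit(e, 0);          /* or bpf_ringbuf_discard(e, 0) */
          return 0;
  }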
The default self-balancing strategy, though, should be adequate for most applications and will work reliably and efficiently already.

Design and implementation
-------------------------
This reserve/commit schema allows a natural way for multiple producers, either on different CPUs or even on the same CPU/in the same BPF program, to reserve independent records and work with them without blocking other producers. This means that if a BPF program was interrupted by another BPF program sharing the same ring buffer, both will get a record reserved (provided there is enough space left) and can work with it and submit it independently. This applies to NMI context as well, except that due to using a spinlock during reservation, in NMI context bpf_ringbuf_reserve() might fail to get the lock, in which case the reservation will fail even if the ring buffer is not full.

The ring buffer itself is internally implemented as a power-of-2 sized circular buffer, with two logical and ever-increasing counters (which might wrap around on 32-bit architectures; that's not a problem):
- the consumer counter shows up to which logical position the consumer has consumed the data;
- the producer counter denotes the amount of data reserved by all producers.

Each time a record is reserved, the producer that "owns" the record will successfully advance the producer counter. At that point, the data is still not yet ready to be consumed, though. Each record has an 8-byte header, which contains the length of the reserved record, as well as two extra bits: a busy bit to denote that the record is still being worked on, and a discard bit, which might be set at commit time if the record is discarded. In the latter case, the consumer is supposed to skip the record and move on to the next one. The record header also encodes the record's relative offset from the beginning of the ring buffer data area (in pages). This allows bpf_ringbuf_commit()/bpf_ringbuf_discard() to accept only a pointer to the record itself, without also requiring a pointer to the ring buffer; the ring buffer memory location is restored from the record metadata header. This significantly simplifies the verifier, as well as improving API usability.

Producer counter increments are serialized under a spinlock, so there is a strict ordering between reservations. Commits, on the other hand, are completely lockless and independent. All records become available to the consumer in the order of reservations, but only after all previous records have been committed. It is thus possible for slow producers to temporarily hold off submitted records that were reserved later. The reservation/commit/consumer protocol is verified by litmus tests in Documentation/litmus-test/bpf-rb.

One interesting implementation bit that significantly simplifies (and thus speeds up) the implementation of both producers and consumers is how the data area is mapped twice contiguously back-to-back in virtual memory. This allows not taking any special measures for samples that have to wrap around at the end of the circular buffer data area, because the next page after the last data page would be the first data page again, and thus the sample will still appear completely contiguous in virtual memory. See the comment and a simple ASCII diagram showing this visually in bpf_ringbuf_area_alloc().
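For reference, the 8-byte record header just described is laid out roughly like this in the final patch set (uapi bits from include/uapi/linux/bpf.h, header struct from kernel/bpf/ringbuf.c):

  #define BPF_RINGBUF_BUSY_BIT    (1U << 31)  /* record still being written */
  #define BPF_RINGBUF_DISCARD_BIT (1U << 30)  /* record discarded, consumer skips it */
  #define BPF_RINGBUF_HDR_SZ      8

  struct bpf_ringbuf_hdr {
          __u32 len;     /* record length; busy/discard bits live in the top bits */
          __u32 pg_off;  /* page offset back to the ring buffer header, so
                          * commit/discard need only the record pointer */
  };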
Another feature that distinguishes BPF ringbuf from perf ring buffer is self-pacing notification of new data availability. The bpf_ringbuf_commit() implementation will send a notification of a new record being available after commit only if the consumer has already caught up right up to the record being committed. If not, the consumer still has to catch up and thus will see the new data anyway, without needing an extra poll notification. Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c) show that this allows achieving very high throughput without having to resort to tricks like "notify only every Nth sample", which are necessary with perf buffer. For extreme cases, when a BPF program wants more manual control of notifications, the commit/discard/output helpers accept BPF_RB_NO_WAKEUP and BPF_RB_FORCE_WAKEUP flags, which give full control over notifications of data availability, but require extra caution and diligence in using this API.

Comparison to alternatives
--------------------------
Before considering implementing a BPF ring buffer from scratch, existing alternatives in the kernel were evaluated, but they didn't seem to meet the needs. They largely fell into a few categories:
- per-CPU buffers (perf, ftrace, etc.), which don't satisfy the two motivations outlined above (ordering and memory consumption);
- linked list-based implementations; while some were multi-producer designs, consuming these from user-space would be very complicated and most probably not performant; memory-mapping a contiguous piece of memory is simpler and more performant for user-space consumers;
- io_uring is SPSC, but also requires fixed-sized elements. Naively turning an SPSC queue into MPSC w/ lock would have subpar performance compared to locked reserve + lockless commit, as with BPF ring buffer. Fixed-sized elements would be too limiting for BPF programs, given that existing BPF programs heavily rely on the variable-sized perf buffer already;
- specialized implementations (like the new printk ring buffer, [0]) with lots of printk-specific limitations and implications that didn't seem to fit well for the intended use with BPF programs.

[0] https://lwn.net/Articles/779550/

Signed-off-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200529075424.3139988-2-andriin@fb.com Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
2c78ee898d |
bpf: Implement CAP_BPF
Implement permissions as stated in uapi/linux/capability.h. In order to do that, the verifier allow_ptr_leaks flag is split into four flags, set as:

  env->allow_ptr_leaks = bpf_allow_ptr_leaks();
  env->bypass_spec_v1 = bpf_bypass_spec_v1();
  env->bypass_spec_v4 = bpf_bypass_spec_v4();
  env->bpf_capable = bpf_capable();

The first three are currently equivalent to perfmon_capable(), since leaking kernel pointers and reading kernel memory via side channel attacks is roughly equivalent to reading kernel memory with cap_perfmon.

'bpf_capable' enables bounded loops, precision tracking, bpf-to-bpf calls and other verifier features. 'allow_ptr_leaks' enables ptr leaks, ptr conversions, and subtraction of pointers. 'bypass_spec_v1' disables speculative analysis in the verifier and run time mitigations in bpf array, and enables indirect variable access in bpf programs. 'bypass_spec_v4' disables emission of sanitation code by the verifier.

That means a networking BPF program loaded with CAP_BPF + CAP_NET_ADMIN will have speculative checks done by the verifier and other spectre mitigations applied. Such a networking BPF program will not be able to leak kernel pointers and will not be able to access arbitrary kernel memory. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Link: https://lore.kernel.org/bpf/20200513230355.7858-3-alexei.starovoitov@gmail.com |
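For orientation, the capability helpers referenced above reduce to checks along these lines (a sketch of the include/linux/capability.h definitions; CAP_SYS_ADMIN remains a catch-all for backward compatibility):

  static inline bool bpf_capable(void)
  {
          return capable(CAP_BPF) || capable(CAP_SYS_ADMIN);
  }

  static inline bool perfmon_capable(void)
  {
          return capable(CAP_PERFMON) || capable(CAP_SYS_ADMIN);
  }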
3f50f132d8 |
bpf: Verifier, do explicit ALU32 bounds tracking
It is not possible for the current verifier to track ALU32 and JMP ops correctly. This can result in the verifier aborting with errors even though the program should be verifiable. BPF code that hits this can work around it by changing int variables to 64-bit types, marking variables volatile, etc. But this is all very ugly, so it would be better to avoid these tricks.

But the main reason to address this now is that do_refine_retval_range() was assuming return values could not be negative. Once we fix this, code that was previously working will no longer work. See the do_refine_retval_range() patch for details. And we don't want to suddenly cause programs that used to work to fail.

The simplest example code snippet that illustrates the problem is likely this,

  53: w8 = w0    // r8 <- [0, S32_MAX],
                 // w8 <- [-S32_MIN, X]
  54: w8 <s 0    // r8 <- [0, U32_MAX]
                 // w8 <- [0, X]

The expected 64-bit and 32-bit bounds after each line are shown on the right. The current issue is that without the w* bounds we are forced to use the worst-case bound of [0, U32_MAX]. To resolve this type of case, jmp32 creating divergent 32-bit bounds from 64-bit bounds, we add explicit 32-bit register bounds s32_{min|max}_value and u32_{min|max}_value. Then, from the branch_taken logic creating new bounds, we can track 32-bit bounds explicitly.

The next case we observed is ALU ops after the jmp32,

  53: w8 = w0    // r8 <- [0, S32_MAX],
                 // w8 <- [-S32_MIN, X]
  54: w8 <s 0    // r8 <- [0, U32_MAX]
                 // w8 <- [0, X]
  55: w8 += 1    // r8 <- [0, U32_MAX+1]
                 // w8 <- [0, X+1]

In order to keep the bounds accurate at this point we also need to track ALU32 ops. To do this we add explicit ALU32 logic for each of the ALU ops: mov, add, sub, etc.

Finally, there is the question of how and when to merge bounds. The cases are enumerated here,

  1. MOV ALU32 - zext 32-bit -> 64-bit
  2. MOV ALU64 - copy 64-bit -> 32-bit
  3. op  ALU32 - zext 32-bit -> 64-bit
  4. op  ALU64 - n/a
  5. jmp ALU32 - 64-bit: var32_off | upper_32_bits(var64_off)
  6. jmp ALU64 - 32-bit: (>> (<< var64_off))

Details for each case:

For "MOV ALU32" the BPF arch zero extends, so we simply copy the bounds from 32-bit into 64-bit, ensuring we truncate var_off and the 64-bit bounds correctly. See zext_32_to_64.

For "MOV ALU64" copy all bounds, including 32-bit, into the new register. If the src register had 32-bit bounds the dst register will as well.

For "op ALU32" zero extend 32-bit into 64-bit, the same as mov; see zext_32_to_64.

For "op ALU64" calculate both 32-bit and 64-bit bounds; no merging is done here. Except we have a special case: when RSH or ARSH is done we can't simply ignore shifting bits from the 64-bit reg into the 32-bit subreg, so currently we just push bounds from 64-bit into 32-bit. This will be correct in the sense that they will represent a valid state of the register. However, we could lose some accuracy if an ARSH follows a jmp32 operation. We can handle this special case in a follow-up series.

For "jmp ALU32" mark the 64-bit reg unknown and recalculate the 64-bit bounds from the tnum by setting var_off to ((<<(>>var_off)) | var32_off). We special-case when the 64-bit bounds have zero'd upper 32 bits, at which point we can simply copy the 32-bit bounds into the 64-bit register. This catches a common compiler trick where the upper 32 bits are zeroed and then 32-bit ops are used, followed by a 64-bit compare or a 64-bit op on a pointer. See __reg_combine_64_into_32().

For "jmp ALU64" cast the bounds of the 64-bit register to their 32-bit counterpart. For example s32_min_value = (s32)reg->smin_value. For the tnum, use only the lower 32 bits via (>>(<<var_off)). See __reg_combine_64_into_32().
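A hypothetical C-level shape of the problem (not from the patch): with only 64-bit tracking, the bounds established on the 32-bit subregister below can be lost, and the array access may fail to verify unless 'idx' is widened to a 64-bit type:

  int lookup_table[64];                   /* hypothetical global table */

  SEC("tc")
  int example(struct __sk_buff *skb)
  {
          __u32 idx = skb->hash & 0xffff; /* 32-bit (ALU32) ops throughout */

          if (idx > 30)                   /* jmp32: creates 32-bit bounds only */
                  return 0;
          idx <<= 1;                      /* ALU32 op after the jmp32 */
          return lookup_table[idx];       /* needs precise bounds on idx */
  }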
Signed-off-by: John Fastabend <john.fastabend@gmail.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Link: https://lore.kernel.org/bpf/158560419880.10843.11448220440809118343.stgit@john-Precision-5820-Tower |
51c39bb1d5 |
bpf: Introduce function-by-function verification
New llvm, and old llvm with libbpf help, produce BTF that distinguishes global and static functions. Unlike the arguments of a static function, the arguments of a global function cannot be removed or optimized away by llvm. The compiler has to use exactly the arguments specified in the function prototype. The argument type information allows the verifier to validate each global function independently. For now the only supported argument types are pointer to context and scalars. In the future, pointers to structures, sizes, and pointers to packet data can be supported as well.

Consider the following example:

  static int f1(int ...) { ... }

  int f3(int b);

  int f2(int a) { f1(a) + f3(a); }

  int f3(int b) { ... }

  int main(...) { f1(...) + f2(...) + f3(...); }

The verifier will start its safety checks from the first global function, f2(). It will recursively descend into f1() because it's static. Then it will check that the arguments match for the f3() invocation inside f2(). It will not descend into f3(). It will finish f2(), which has to be successfully verified for all possible values of 'a'. Then it will proceed with f3(). That function also has to be safe for all possible values of 'b'. Then it will start subprog 0 (which is the main() function). It will recursively descend into f1() and will skip the full check of f2() and f3(), since they are global.

The order of processing global functions doesn't affect safety, since all global functions must be proven safe based on their arguments only. Such function-by-function verification can drastically improve the speed of verification and reduce its complexity.

Note that the stack limit of 512 still applies to the call chain regardless of whether functions were static or global. The nesting level of 8 also still applies. The same recursion prevention checks are in place as well.

The type information and static/global kind are preserved after verification, hence in the above example the global functions f2() and f3() can be replaced later by equivalent functions with the same types that are loaded and verified later, without affecting the safety of this main() program. Such replacement (re-linking) of global functions is a subject of future patches. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Link: https://lore.kernel.org/bpf/20200110064124.1760511-3-ast@kernel.org |
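In BPF C terms the split looks like this sketch (hypothetical bodies; __noinline keeps the functions as separate subprograms):

  static __noinline int f1(int x)  /* static: re-verified at every callsite */
  {
          return x + 1;
  }

  __noinline int f3(int b)         /* global: verified once, for all possible 'b' */
  {
          return b * 2;
  }

  SEC("tc")
  int main_prog(struct __sk_buff *skb)
  {
          return f1(skb->len) + f3(skb->len);
  }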
d2e4c1e6c2 |
bpf: Constant map key tracking for prog array pokes
Add tracking of constant keys into tail call maps. The signature of bpf_tail_call_proto is that arg1 is ctx, arg2 is the map pointer and arg3 is an index key. The direct call approach for tail calls can be enabled if the verifier asserted that for all branches leading to the tail call helper invocation, the map pointer and index key were both constant and the same. Tracking of map pointers we already do from prior work via |
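A sketch of the pattern this enables (map name and index are hypothetical): since key 3 is constant on every path to the helper call, the verifier can let the JIT patch the tail call into a direct jump:

  struct {
          __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
          __uint(max_entries, 8);
          __uint(key_size, sizeof(__u32));
          __uint(value_size, sizeof(__u32));
  } jmp_table SEC(".maps");

  SEC("tc")
  int entry(struct __sk_buff *skb)
  {
          bpf_tail_call(skb, &jmp_table, 3); /* constant index on all paths */
          return 0;                          /* reached only if slot 3 is empty */
  }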
8c1b6e69dc |
bpf: Compare BTF types of functions arguments with actual types
Make the verifier check that the BTF types of function arguments match the actual types passed into the top-level BPF program and into BPF-to-BPF calls. If the types match, such BPF programs and sub-programs will have full support of the BPF trampoline. If the types mismatch, the trampoline has to be conservative: it has to save/restore five program arguments and assume 64-bit scalars. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Song Liu <songliubraving@fb.com> Acked-by: Andrii Nakryiko <andriin@fb.com> Link: https://lore.kernel.org/bpf/20191114185720.1641606-17-ast@kernel.org |
9e15db6613 |
bpf: Implement accurate raw_tp context access via BTF
libbpf analyzes the bpf C program, searches in-kernel BTF for the given type name and stores it into expected_attach_type. The kernel verifier expects this btf_id to point to something like:

  typedef void (*btf_trace_kfree_skb)(void *, struct sk_buff *skb, void *loc);

which represents the signature of the raw_tracepoint "kfree_skb". Then btf_ctx_access() matches the ctx+0 access in the bpf program with the 'skb' argument and the ctx+8 access with the 'loc' argument of the "kfree_skb" tracepoint. In the first case it passes the btf_id of 'struct sk_buff *' back to the verifier core, and 'void *' in the second case.

Then the verifier tracks PTR_TO_BTF_ID like any other pointer type. Like PTR_TO_SOCKET points to 'struct bpf_sock' and PTR_TO_TCP_SOCK points to 'struct bpf_tcp_sock', PTR_TO_BTF_ID points to in-kernel structs. If 1234 is the btf_id of 'struct sk_buff' in vmlinux's BTF, then PTR_TO_BTF_ID#1234 points to one of the in-kernel skbs.

When PTR_TO_BTF_ID#1234 is dereferenced (like r2 = *(u64 *)(r1 + 32)), btf_struct_access() checks which field of 'struct sk_buff' is at offset 32, checks that the size of the access matches the type definition of the field, and continues to track the dereferenced type. If that field was a pointer to 'struct net_device', r2's type will be PTR_TO_BTF_ID#456, where 456 is the btf_id of 'struct net_device' in vmlinux's BTF.

Such verifier analysis prevents "cheating" in a BPF C program. The program cannot cast an arbitrary pointer to 'struct sk_buff *' and access it. The C compiler would allow the type cast, of course, but the verifier will notice the type mismatch based on the BPF assembly and in-kernel BTF. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191016032505.2089704-7-ast@kernel.org |
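Using today's libbpf conventions (the tp_btf section name and BPF_PROG macro postdate this patch), the result looks like this sketch; every dereference is checked against vmlinux BTF:

  SEC("tp_btf/kfree_skb")
  int BPF_PROG(trace_kfree_skb, struct sk_buff *skb, void *location)
  {
          /* direct, BTF-checked loads; no bpf_probe_read() needed */
          unsigned int len = skb->len;
          int ifindex = skb->dev->ifindex;   /* skb->dev is PTR_TO_BTF_ID too */

          bpf_printk("kfree_skb len=%u ifindex=%d", len, ifindex);
          return 0;
  }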
8580ac9404 |
bpf: Process in-kernel BTF
If in-kernel BTF exists, parse it and prepare 'struct btf *btf_vmlinux' for further use by the verifier. In-kernel BTF is trusted just like kallsyms and other build artifacts embedded into vmlinux. Yet this BTF image is run through the BTF verifier to make sure that it is valid and wasn't mangled during the build. If in-kernel BTF is incorrect, it means either gcc or pahole or the kernel is buggy. In such a case, loading of BPF programs is disallowed. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Andrii Nakryiko <andriin@fb.com> Acked-by: Martin KaFai Lau <kafai@fb.com> Link: https://lore.kernel.org/bpf/20191016032505.2089704-4-ast@kernel.org |
10d274e880 |
bpf: introduce verifier internal test flag
Introduce BPF_F_TEST_STATE_FREQ flag to stress test parentage chain and state pruning. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Song Liu <songliubraving@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
dca73a65a6 |
Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next
Alexei Starovoitov says:

====================
pull-request: bpf-next 2019-06-19

The following pull-request contains BPF updates for your *net-next* tree.

The main changes are:

1) new SO_REUSEPORT_DETACH_BPF setsockopt, from Martin.

2) BTF-based map definition, from Andrii.

3) support bpf_map_lookup_elem for xskmap, from Jonathan.

4) bounded loops and scalar precision logic in the verifier, from Alexei.
====================

Signed-off-by: David S. Miller <davem@davemloft.net> |
b5dc0163d8 |
bpf: precise scalar_value tracking
Introduce precision tracking logic that helps cilium programs the most:

                       old clang  old clang         new clang  new clang
                                  with all patches             with all patches
  bpf_lb-DLB_L3.o      1838       2283              1923       1863
  bpf_lb-DLB_L4.o      3218       2657              3077       2468
  bpf_lb-DUNKNOWN.o    1064       545               1062       544
  bpf_lxc-DDROP_ALL.o  26935      23045             166729     22629
  bpf_lxc-DUNKNOWN.o   34439      35240             174607     28805
  bpf_netdev.o         9721       8753              8407       6801
  bpf_overlay.o        6184       7901              5420       4754
  bpf_lxc_jit.o        39389      50925             39389      50925

Consider code:

  654: (85) call bpf_get_hash_recalc#34
  655: (bf) r7 = r0
  656: (15) if r8 == 0x0 goto pc+29
  657: (bf) r2 = r10
  658: (07) r2 += -48
  659: (18) r1 = 0xffff8881e41e1b00
  661: (85) call bpf_map_lookup_elem#1
  662: (15) if r0 == 0x0 goto pc+23
  663: (69) r1 = *(u16 *)(r0 +0)
  664: (15) if r1 == 0x0 goto pc+21
  665: (bf) r8 = r7
  666: (57) r8 &= 65535
  667: (bf) r2 = r8
  668: (3f) r2 /= r1
  669: (2f) r2 *= r1
  670: (bf) r1 = r8
  671: (1f) r1 -= r2
  672: (57) r1 &= 255
  673: (25) if r1 > 0x1e goto pc+12
   R0=map_value(id=0,off=0,ks=20,vs=64,imm=0) R1_w=inv(id=0,umax_value=30,var_off=(0x0; 0x1f))
  674: (67) r1 <<= 1
  675: (0f) r0 += r1

At this point the verifier will notice that scalar R1 is used in a map pointer adjustment. R1 has to be precise for later operations on R0 to be validated properly. The verifier will backtrack the above code in the following way:

  last_idx 675 first_idx 664
  regs=2 stack=0 before 675: (0f) r0 += r1          // started backtracking R1; regs=2 is a bitmask
  regs=2 stack=0 before 674: (67) r1 <<= 1
  regs=2 stack=0 before 673: (25) if r1 > 0x1e goto pc+12
  regs=2 stack=0 before 672: (57) r1 &= 255
  regs=2 stack=0 before 671: (1f) r1 -= r2          // now both R1 and R2 have to be precise -> regs=6 mask
  regs=6 stack=0 before 670: (bf) r1 = r8           // after this insn R8 and R2 have to be precise
  regs=104 stack=0 before 669: (2f) r2 *= r1        // after this one: R8, R2, and R1
  regs=106 stack=0 before 668: (3f) r2 /= r1
  regs=106 stack=0 before 667: (bf) r2 = r8
  regs=102 stack=0 before 666: (57) r8 &= 65535
  regs=102 stack=0 before 665: (bf) r8 = r7
  regs=82 stack=0 before 664: (15) if r1 == 0x0 goto pc+21
  // this is the end of the verifier state. The following regs will be marked precise:
  R1_rw=invP(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R7_rw=invP(id=0)
  parent didn't have regs=82 stack=0 marks          // so backtracking continues into the parent state
  last_idx 663 first_idx 655
  regs=82 stack=0 before 663: (69) r1 = *(u16 *)(r0 +0)        // R1 was assigned, no need to track it further
  regs=80 stack=0 before 662: (15) if r0 == 0x0 goto pc+23     // keep tracking R7
  regs=80 stack=0 before 661: (85) call bpf_map_lookup_elem#1  // keep tracking R7
  regs=80 stack=0 before 659: (18) r1 = 0xffff8881e41e1b00
  regs=80 stack=0 before 658: (07) r2 += -48
  regs=80 stack=0 before 657: (bf) r2 = r10
  regs=80 stack=0 before 656: (15) if r8 == 0x0 goto pc+29
  regs=80 stack=0 before 655: (bf) r7 = r0          // here the assignment into R7
  // mark R0 to be precise:
  R0_rw=invP(id=0)
  parent didn't have regs=1 stack=0 marks           // regs=1 -> tracking R0
  last_idx 654 first_idx 644
  regs=1 stack=0 before 654: (85) call bpf_get_hash_recalc#34
  // and in the parent frame it was a return value
  // nothing further to backtrack

Two scalar registers not marked precise are equivalent from the state pruning point of view. More details in the patch comments.

It doesn't support bpf2bpf calls yet and is enabled for root only. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
2589726d12 |
bpf: introduce bounded loops
Allow the verifier to validate loops by simulating their execution. Existing programs have used '#pragma unroll' to have the compiler unroll the loops. Instead, let the verifier simulate all iterations of the loop. In order to do that, introduce a parentage chain of bpf_verifier_state and a 'branches' counter for the number of branches left to explore. See the more detailed algorithm description in bpf_verifier.h.

This algorithm borrows the key idea from the Edward Cree approach: https://patchwork.ozlabs.org/patch/877222/ Additional state pruning heuristics make such a brute force loop walk practical even for large loops. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Andrii Nakryiko <andriin@fb.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
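A sketch of what becomes verifiable without '#pragma unroll' (attach point and bound are arbitrary):

  SEC("xdp")
  int sum_bytes(struct xdp_md *ctx)
  {
          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;
          __u32 sum = 0;
          int i;

          for (i = 0; i < 64; i++) {            /* bounded: at most 64 iterations */
                  if (data + i + 1 > data_end)  /* bounds check on every access */
                          break;
                  sum += *(__u8 *)(data + i);
          }
          return sum % 2 ? XDP_DROP : XDP_PASS; /* arbitrary use of the result */
  }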
a6cdeeb16b |
Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
Some ISDN files that got removed in net-next had some changes done in mainline, take the removals. Signed-off-by: David S. Miller <davem@davemloft.net> |
25763b3c86 |
treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 206
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of version 2 of the gnu general public license as published by the free software foundation extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 107 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Richard Fontana <rfontana@redhat.com> Reviewed-by: Steve Winslow <swinslow@gmail.com> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190528171438.615055994@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> |
5327ed3d44 |
bpf: verifier: mark verified-insn with sub-register zext flag
The eBPF ISA specification requires the high 32 bits to be cleared when a low 32-bit sub-register is written. This applies to the destination register of ALU32 etc. JIT back-ends must guarantee this semantic when doing code-gen. The x86_64 and AArch64 ISAs have the same semantics, so the corresponding JIT back-ends don't need to do extra work. However, 32-bit arches (arm, x86, nfp etc.) and some other 64-bit arches (PowerPC, SPARC etc.) need to do explicit zero extension to meet this requirement, otherwise code like the following will fail:

  u64_value = (u64) u32_value
  ... other uses of u64_value

This is because the compiler could exploit the semantic described above and save those zero extensions when extending u32_value to u64_value. These JIT back-ends are expected to guarantee this through inserting extra zero extensions, which however could be a significant increase in code size. Some benchmarks show there could be ~40% sub-register writes out of total insns, meaning at least ~40% extra code-gen.

One observation is that these extra zero extensions are not always necessary. Take the above code snippet for example: it is possible u32_value will never be cast into a u64, in which case the value of the high 32 bits of u32_value could be ignored and the extra zero extension eliminated.

This patch implements this idea: insns defining sub-registers will be marked when the high 32 bits of the defined sub-register matter. For the unmarked insns, it is safe to eliminate the high 32-bit clearance for them.

Algo:
- Split read flags into READ32 and READ64.
- Record the index of the insn that does a sub-register write. Keep the index inside the reg state and update it during verifier insn walking.
- A full register read on a sub-register marks its definition insn as needing zero extension on the dst register. A new sub-register write overrides the old one.
- When propagating read64 during path pruning, also mark any insn defining a sub-register that is read in the pruned path as full-register.

Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Jiong Wang <jiong.wang@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
dc2a4ebc0b |
bpf: convert explored_states to hash table
All prune points inside a callee bpf function will most likely have different callsites. For example, if function foo() is called from two callsites, half of the explored states in all prune points in foo() will be useless for subsequent walking of one of those callsites. Fortunately the explored_states pruning heuristics keep the number of states per prune point small, but walking these states is still a waste of cpu time when the callsite of the current state is different from the callsite of the explored state.

To improve pruning logic, convert explored_states into a hash table and use a simple insn_idx ^ callsite hash to select the hash bucket. This optimization has no effect on programs without bpf2bpf calls and drastically improves programs with calls. In the latter case it reduces total memory consumption in 1M scale tests by almost 3 times (peak_states drops from 5752 to 2016).

Care should be taken when comparing the states for equivalency. Since the same hash bucket can now contain states with different indices, the insn_idx has to be part of verifier_state and compared.

Different hash table sizes and different hash functions were explored, but the results were not significantly better vs this patch. They can be improved in the future. The hit/miss heuristic does not count an index miscompare as a miss; otherwise verifier stats become unstable when experimenting with different hash functions. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
a8f500af0c |
bpf: split explored_states
Split explored_states into a prune_point boolean mark and a linked list of explored states. This removes the STATE_LIST_MARK hack and allows the marks to be separate from the states. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
7df737e991 |
bpf: remove global variables
Move three global variables protected by bpf_verifier_lock into 'struct bpf_verifier_env' to allow parallel verification. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
d8eca5bbb2 |
bpf: implement lookup-free direct value access for maps
This generic extension to BPF maps allows for directly loading an address residing inside a BPF map value as a single BPF ldimm64 instruction!

The idea is similar to what BPF_PSEUDO_MAP_FD does today, which is a special src_reg flag for the ldimm64 instruction indicating that the first part of the double insn's imm field is a file descriptor which the verifier then replaces with the full 64-bit address of the map in both imm parts. For the newly added BPF_PSEUDO_MAP_VALUE src_reg flag, the idea is the following: the first part of the double insn's imm field is again a file descriptor corresponding to the map, and the second part of the imm field is an offset into the value. The verifier will then replace both imm parts with an address that points into the BPF map value at the given value offset, for maps that support this operation. Currently supported is the array map with a single entry.

It is possible to support more than just a single map element by reusing both 16-bit off fields of the insns as a map index, so a full array map lookup could be expressed that way. It hasn't been implemented here due to lack of a concrete use case, but could easily be done in the future in a compatible way, since both off fields right now have to be 0 and would correctly denote a map index 0.

BPF_PSEUDO_MAP_VALUE is a distinct flag because otherwise, with BPF_PSEUDO_MAP_FD, we could not distinguish at offset 0 between a load of the map pointer and a load of the map's value at offset 0; and changing BPF_PSEUDO_MAP_FD's encoding into off by one to differentiate between a regular map pointer and a map value pointer would add unnecessary complexity and raise the barrier for debuggability, thus being less suitable. Using the second part of the imm field as an offset into the value does /not/ come with limitations, since the maximum possible value size is in the u32 universe anyway.

This optimization allows for efficiently retrieving an address to a map value memory area without having to issue a helper call (which needs to prepare registers according to the calling convention, etc.), without needing the extra NULL test, and without having to add the offset to the value base pointer in an additional instruction. The verifier then treats the destination register as PTR_TO_MAP_VALUE with a constant reg->off from the user-passed offset in the second imm field, and guarantees that this is within the bounds of the map value. Any subsequent operations are normally treated as typical map value handling without anything extra needed from the verification side.

The two map operations for direct value access have been added to the array map for now. In the future other types could be supported as well, depending on the use case. The main use case for this commit is to allow BPF loader support for global variables that reside in .data/.rodata/.bss sections, such that we can directly load their address with minimal additional infrastructure required. Loader support has been added in subsequent commits for the libbpf library. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
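The global-variable use case then looks like this sketch (hypothetical counter): the compiler places packet_count in .bss, the loader turns the section into a single-entry array map, and the address load below becomes one BPF_PSEUDO_MAP_VALUE ldimm64 with no helper call or NULL check:

  int packet_count;  /* hypothetical global, ends up in .bss */

  SEC("xdp")
  int count_packets(struct xdp_md *ctx)
  {
          __sync_fetch_and_add(&packet_count, 1);  /* direct map value access */
          return XDP_PASS;
  }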
9f4686c41b |
bpf: improve verification speed by dropping states
Branch instructions, branch targets and calls in a bpf program are the places where the verifier remembers states that led to successful verification of the program. These states are used to prune brute force program analysis. For unprivileged programs there is a limit of 64 states per such 'branching' instruction (the maximum number is tracked by the max_states_per_insn counter introduced in the previous patch). Simply reducing this threshold to 32 or lower increases the insn_processed metric to the point that small valid programs get rejected. For root programs there is no limit, and cilium programs can have max_states_per_insn of 100 or higher.

Walking 100+ states multiplied by the number of 'branching' insns during verification consumes a significant amount of cpu time. It turned out a simple LRU-like mechanism can be used to remove states that are unlikely to be helpful in future search pruning. This patch introduces hit_cnt and miss_cnt counters:
  hit_cnt - this many times this state successfully pruned the search
  miss_cnt - this many times this state was not equivalent to other states (and those other states were added to the state list)

The heuristic introduced in this patch is:

  if (sl->miss_cnt > sl->hit_cnt * 3 + 3)
          /* drop this state from future considerations */

Higher numbers increase max_states_per_insn (allow more states to be considered for pruning) and slow verification speed, but do not meaningfully reduce the insn_processed metric. Lower numbers drop too many states and insn_processed increases too much. Many different formulas were considered. This one is simple and works well enough in practice. (The analysis was done on selftests/progs/* and on cilium programs.)

The end result is that this heuristic improves verification speed by 10 times. Large synthetic programs that used to take a second or more now take 1/10 of a second. In cases where max_states_per_insn used to be 100 or more, now it's ~10. There is a slight increase in insn_processed for cilium progs:

                       before  after
  bpf_lb-DLB_L3.o      1831    1838
  bpf_lb-DLB_L4.o      3029    3218
  bpf_lb-DUNKNOWN.o    1064    1064
  bpf_lxc-DDROP_ALL.o  26309   26935
  bpf_lxc-DUNKNOWN.o   33517   34439
  bpf_netdev.o         9713    9721
  bpf_overlay.o        6184    6184
  bpf_lxc_jit.o        37335   39389

And a 2-3 times improvement in verification speed. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
06ee7115b0 |
bpf: add verifier stats and log_level bit 2
In order to understand the verifier bottlenecks, add various stats and extend log_level. log_level 1 and 2 are kept as-is:
- bit 0 - level=1 - print every insn and verifier state at branch points
- bit 1 - level=2 - print every insn and verifier state at every insn
- bit 2 - level=4 - print verifier error and stats at the end of verification

When the verifier rejects a program, libbpf tries to load the program twice: once with log_level=0 (no messages, only the error code is reported to user space) and a second time with log_level=1 to tell the user why the verifier rejected it. With the introduction of bit 2 - level=4, libbpf can choose to always use that level and load programs once, since verification speed is not affected and in case of error the verbose message will be available.

Note that the verifier stats are not part of the uapi, just like all other verbose messages. They're expected to change in the future. Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
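With today's libbpf API (which postdates this patch), the always-on stats level can be requested like this sketch (insns/insn_cnt are assumed to come from the caller):

  #include <bpf/bpf.h>

  static char log_buf[1 << 20];

  int load(const struct bpf_insn *insns, size_t insn_cnt)
  {
          LIBBPF_OPTS(bpf_prog_load_opts, opts,
                  .log_buf = log_buf,
                  .log_size = sizeof(log_buf),
                  .log_level = 4,  /* bit 2: errors and stats only */
          );

          return bpf_prog_load(BPF_PROG_TYPE_XDP, "prog", "GPL",
                               insns, insn_cnt, &opts);
  }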
1b98658968 |
bpf: Fix bpf_tcp_sock and bpf_sk_fullsock issue related to bpf_sk_release
Lorenz Bauer [thanks!] reported that a ptr returned by bpf_tcp_sock(sk)
can still be accessed after bpf_sk_release(sk).
Both bpf_tcp_sock() and bpf_sk_fullsock() have the same issue.
This patch addresses them together.
A simple reproducer looks like this:
sk = bpf_sk_lookup_tcp();
/* if (!sk) ... */
tp = bpf_tcp_sock(sk);
/* if (!tp) ... */
bpf_sk_release(sk);
snd_cwnd = tp->snd_cwnd; /* oops! The verifier does not complain. */
The problem is the verifier did not scrub the register's states of
the tcp_sock ptr (tp) after bpf_sk_release(sk).
[ Note that when calling bpf_tcp_sock(sk), the sk is not always
refcount-acquired. e.g. bpf_tcp_sock(skb->sk). The verifier works
fine for this case. ]
Currently, the verifier does not track if a helper's return ptr (in REG_0)
is "carry"-ing one of its argument's refcount status. To carry this info,
the reg1->id needs to be stored in reg0.
One approach was tried, like "reg0->id = reg1->id", when calling
"bpf_tcp_sock()". The main idea was to avoid adding another "ref_obj_id"
for the same reg. However, overlapping the NULL marking and ref
tracking purpose in one "id" does not work well:
ref_sk = bpf_sk_lookup_tcp();
fullsock = bpf_sk_fullsock(ref_sk);
tp = bpf_tcp_sock(ref_sk);
if (!fullsock) {
bpf_sk_release(ref_sk);
return 0;
}
/* fullsock_reg->id is marked for NOT-NULL.
* Same for tp_reg->id because they have the same id.
*/
/* oops. verifier did not complain about the missing !tp check */
snd_cwnd = tp->snd_cwnd;
Hence, a new "ref_obj_id" is needed in "struct bpf_reg_state".
With a new ref_obj_id, when bpf_sk_release(sk) is called, the verifier can
scrub all reg states which have a matching ref_obj_id. It is done with the
changes in release_reg_references() in this patch.
While fixing it, sk_to_full_sk() is removed from bpf_tcp_sock() and
bpf_sk_fullsock() to prevent these helpers from returning
another ptr. It makes bpf_sk_release(tp) possible:
sk = bpf_sk_lookup_tcp();
/* if (!sk) ... */
tp = bpf_tcp_sock(sk);
/* if (!tp) ... */
bpf_sk_release(tp);
A separate helper "bpf_get_listener_sock()" will be added in a later
patch to do sk_to_full_sk().
Misc change notes:
- To allow bpf_sk_release(tp), the arg of bpf_sk_release() is changed
from ARG_PTR_TO_SOCKET to ARG_PTR_TO_SOCK_COMMON. ARG_PTR_TO_SOCKET
is removed from bpf.h since no helper is using it.
- arg_type_is_refcounted() is renamed to arg_type_may_be_refcounted()
because ARG_PTR_TO_SOCK_COMMON is the only one and skb->sk is not
refcounted. All bpf_sk_release(), bpf_sk_fullsock() and bpf_tcp_sock()
take ARG_PTR_TO_SOCK_COMMON.
- check_refcount_ok() ensures is_acquire_function() cannot take
arg_type_may_be_refcounted() as its argument.
- check_func_arg() can only allow one refcounted arg. It is
guaranteed by check_refcount_ok() which ensures at most one arg can be
refcounted. Hence, it is a verifier internal error if more than one
refcounted arg is found in check_func_arg().
- In release_reference(), release_reference_state() is called
first to ensure a match on "reg->ref_obj_id" can be found before
scrubbing the reg states with release_reg_references().
- reg_is_refcounted() is no longer needed.
1. In mark_ptr_or_null_regs(), its usage is replaced by
"ref_obj_id && ref_obj_id == id" because,
when is_null == true, release_reference_state() should only be
called on a ref_obj_id obtained by an acquire helper (i.e.
is_acquire_function() == true). Otherwise, the following
would happen:
sk = bpf_sk_lookup_tcp();
/* if (!sk) { ... } */
fullsock = bpf_sk_fullsock(sk);
if (!fullsock) {
/*
* release_reference_state(fullsock_reg->ref_obj_id)
* where fullsock_reg->ref_obj_id == sk_reg->ref_obj_id.
*
* Hence, the following bpf_sk_release(sk) will fail
* because the ref state has already been released in the
* earlier release_reference_state(fullsock_reg->ref_obj_id).
*/
bpf_sk_release(sk);
}
2. In release_reg_references(), the current reg_is_refcounted() call
is unnecessary because the id check is enough.
- The type_is_refcounted() and type_is_refcounted_or_null()
are no longer needed also because reg_is_refcounted() is removed.
Fixes:
d83525ca62 |
bpf: introduce bpf_spin_lock
Introduce 'struct bpf_spin_lock' and the bpf_spin_lock/unlock() helpers to let a bpf program serialize access to other variables.

Example:

  struct hash_elem {
          int cnt;
          struct bpf_spin_lock lock;
  };

  struct hash_elem *val = bpf_map_lookup_elem(&hash_map, &key);
  if (val) {
          bpf_spin_lock(&val->lock);
          val->cnt++;
          bpf_spin_unlock(&val->lock);
  }

Restrictions and safety checks:
- bpf_spin_lock is only allowed inside HASH and ARRAY maps.
- a BTF description of the map is mandatory for safety analysis.
- a bpf program can take one bpf_spin_lock at a time, since two or more can cause deadlocks.
- only one 'struct bpf_spin_lock' is allowed per map element. This drastically simplifies the implementation yet allows a bpf program to use any number of bpf_spin_locks.
- when bpf_spin_lock is taken, calls (either bpf2bpf or helpers) are not allowed.
- a bpf program must bpf_spin_unlock() before return.
- a bpf program can access 'struct bpf_spin_lock' only via the bpf_spin_lock()/bpf_spin_unlock() helpers.
- load/store into a 'struct bpf_spin_lock lock;' field is not allowed.
- to use the bpf_spin_lock() helper, the BTF description of the map value must be a struct and have a 'struct bpf_spin_lock anyname;' field at the top level. A nested lock inside another struct is not allowed.
- the syscall map_lookup doesn't copy the bpf_spin_lock field to user space.
- syscall map_update and program map_update do not update the bpf_spin_lock field.
- bpf_spin_lock cannot be on the stack or inside a networking packet; bpf_spin_lock can only be inside HASH or ARRAY map values.
- bpf_spin_lock is available to root only and to all program types.
- bpf_spin_lock is not allowed in inner maps of map-in-map.
- ld_abs is not allowed inside a spin_lock-ed region.
- tracing progs and socket filter progs cannot use bpf_spin_lock due to insufficient preemption checks.

Implementation details:
- the cgroup-bpf class of programs can nest with xdp/tc programs, hence bpf_spin_lock is equivalent to spin_lock_irqsave. Other solutions to avoid nested bpf_spin_lock are possible, like making sure that all networking progs run with softirq disabled. spin_lock_irqsave is the simplest and doesn't add overhead to programs that don't use it.
- arch_spinlock_t is used when it's implemented as queued_spin_lock.
- archs can force their own arch_spinlock_t.
- on architectures where queued_spin_lock is not available and sizeof(arch_spinlock_t) != sizeof(__u32), a trivial lock is used.
- the presence of bpf_spin_lock inside a map value could have been indicated via an extra flag during map_create, but specifying it via BTF is cleaner. It provides introspection for map key/value and reduces user mistakes.

Next steps:
- allow bpf_spin_lock in other map types (like cgroup local storage)
- introduce a BPF_F_LOCK flag for the bpf_map_update() syscall and helper, to request that the kernel grab bpf_spin_lock before rewriting the value. That will serialize access to map elements.

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
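The map backing the example, in the BTF-described form the restrictions require (a sketch using the BTF-defined map syntax libbpf later standardized; the lock sits at the top level of the value struct):

  struct hash_elem {
          int cnt;
          struct bpf_spin_lock lock;  /* exactly one lock, at the top level */
  };

  struct {
          __uint(type, BPF_MAP_TYPE_HASH);
          __uint(max_entries, 1024);
          __type(key, int);
          __type(value, struct hash_elem);  /* BTF makes the lock visible to the kernel */
  } hash_map SEC(".maps");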
08ca90afba |
bpf: notify offload JITs about optimizations
Let offload JITs know when instructions are replaced and optimized out, so they can update their state appropriately. The optimizations are best effort: if a JIT returns an error from any callback, the verifier will stop notifying it, as its state may now be out of sync, but the verifier itself continues making progress. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
9e4c24e7ee |
bpf: verifier: record original instruction index
The communication between the verifier and advanced JITs is based on instruction indexes. We have to keep them stable throughout the optimizations otherwise referring to a particular instruction gets messy quickly. Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com> Reviewed-by: Quentin Monnet <quentin.monnet@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
d3bd7413e0 |
bpf: fix sanitation of alu op with pointer / scalar type from different paths
While |
979d63d50c |
bpf: prevent out of bounds speculation on pointer arithmetic
Jann reported that the original commit back in |
c08435ec7f |
bpf: move {prev_,}insn_idx into verifier env
Move prev_insn_idx and insn_idx from the do_check() function into the verifier environment, so they can be read inside the various helper functions for handling the instructions. It's easier to put this into the environment than to change all call-sites only to pass it along. insn_idx is useful in particular since it later allows holding state in env->insn_aux_data[env->insn_idx]. Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
9242b5f561 |
bpf: add self-check logic to liveness analysis
Introduce REG_LIVE_DONE to check the liveness propagation and prepare the states for merging. See algorithm description in clean_live_states(). Signed-off-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net> |
d9762e84ed |
bpf: verbose log bpf_line_info in verifier
This patch adds bpf_line_info to the verifier's verbose log. It can give error context for debug purposes.

~~~~~~~~~~
Here is the verbose log for a back-edge:

  while (a) {
          a += bpf_get_smp_processor_id();
          bpf_trace_printk(fmt, sizeof(fmt), a);
  }

  ~> bpftool prog load ./test_loop.o /sys/fs/bpf/test_loop type tracepoint
  13: while (a) {
  3: a += bpf_get_smp_processor_id();
  back-edge from insn 13 to 3

~~~~~~~~~~
Here is the verbose log for an invalid pkt access, with this modification to test_xdp_noinline.c:

  data = (void *)(long)xdp->data;
  data_end = (void *)(long)xdp->data_end;
  /*
  if (data + 4 > data_end)
          return XDP_DROP;
  */
  *(u32 *)data = dst->dst;

  ~> bpftool prog load ./test_xdp_noinline.o /sys/fs/bpf/test_xdp_noinline type xdp
  ; data = (void *)(long)xdp->data;
  224: (79) r2 = *(u64 *)(r10 -112)
  225: (61) r2 = *(u32 *)(r2 +0)
  ; *(u32 *)data = dst->dst;
  226: (63) *(u32 *)(r2 +0) = r1
  invalid access to packet, off=0 size=4, R2(id=0,off=0,r=0)
  R2 offset is outside of the packet

Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
c454a46b5e |
bpf: Add bpf_line_info support
This patch adds bpf_line_info support. It accepts an array of bpf_line_info objects during BPF_PROG_LOAD. The "line_info", "line_info_cnt" and "line_info_rec_size" are added to the "union bpf_attr". The "line_info_rec_size" makes bpf_line_info extensible in the future.

The new "check_btf_line()" ensures the userspace line_info is valid for the kernel to use. When the verifier is translating/patching the bpf_prog (through "bpf_patch_insn_single()"), the line_infos' insn_off is also adjusted by the newly added "bpf_adj_linfo()".

If the bpf_prog is jited, this patch also provides the jited addrs (in aux->jited_linfo) for the corresponding line_info.insn_off. "bpf_prog_fill_jited_linfo()" is added to fill the aux->jited_linfo. It is currently called by the x86 jit. Other jits can also use "bpf_prog_fill_jited_linfo()", and that will be done in followup patches. In the future, if deemed necessary, a particular jit could also provide its own "bpf_prog_fill_jited_linfo()" implementation.

A few "*line_info*" fields are added to bpf_prog_info such that the user can get the xlated line_info back (i.e. the line_info with its insn_off reflecting the translated prog). The jited_line_info is available if the prog is jited. It is an array of __u64. If the prog is not jited, jited_line_info_cnt is 0.

The verifier's verbose log with line_info will be done in a followup patch. Signed-off-by: Martin KaFai Lau <kafai@fb.com> Acked-by: Yonghong Song <yhs@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
ba64e7d852 |
bpf: btf: support proper non-jit func info
Commit |
838e96904f |
bpf: Introduce bpf_func_info
This patch adds an interface to load a program with the following additional information:
  . prog_btf_fd
  . func_info, func_info_rec_size and func_info_cnt
where func_info provides the function range and type_id corresponding to each function.

The func_info_rec_size is introduced in the UAPI to specify the struct bpf_func_info size passed from user space. This intends to make the bpf_func_info structure growable in the future. If the kernel gets a different bpf_func_info size from userspace, it will try to handle the user request with the part of bpf_func_info it can understand. In this patch, the kernel can understand:

  struct bpf_func_info {
          __u32   insn_offset;
          __u32   type_id;
  };

If the user passed a bpf func_info record size of 16 bytes, the kernel can still handle part of the records with the above definition.

If the verifier agrees with the function range provided by the user, the bpf_prog ksym for each function will use the func name provided in the type_id, which is supposed to provide better encoding as it is not limited by the 16-byte program name limitation, and this is better for bpf programs which contain multiple subprograms.

The bpf_prog_info interface is also extended to return btf_id, func_info, func_info_rec_size and func_info_cnt to userspace, so userspace can print out the function prototype for each xlated function. The insn_offset in the returned func_info corresponds to the insn offset for xlated functions. With other jit-related fields in bpf_prog_info, userspace can also print out function prototypes for each jited function. Signed-off-by: Yonghong Song <yhs@fb.com> Signed-off-by: Martin KaFai Lau <kafai@fb.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |
a40a26322a |
bpf: pass prog instead of env to bpf_prog_offload_verifier_prep()
Function bpf_prog_offload_verifier_prep(), called from the kernel BPF verifier to run a driver-specific callback for preparing for the verification step for offloaded programs, takes a pointer to a struct bpf_verifier_env object. However, no driver callback needs the whole structure at this time: the two drivers supporting this, nfp and netdevsim, only need a pointer to the struct bpf_prog instance held by env. Update the callback accordingly, on kernel side and in these two drivers. Signed-off-by: Quentin Monnet <quentin.monnet@netronome.com> Reviewed-by: Jakub Kicinski <jakub.kicinski@netronome.com> Signed-off-by: Alexei Starovoitov <ast@kernel.org> |