Drivers: hv: vmbus: Resolve more races involving init_vp_index()

init_vp_index() uses the (per-node) hv_numa_map[] masks to record the
CPUs allocated for channel interrupts at a given time, and to
distribute the performance-critical channels across the available
CPUs: in particular, the mask of "candidate" target CPUs in a given
NUMA node, for a newly offered channel, is determined by XOR-ing the
node's CPU mask and the node's hv_numa_map.  This mechanism assumes
that no offline CPU is set in the hv_numa_map mask, an assumption that
does not hold since the mask is currently not updated when a channel
is removed or assigned to a different CPU.

To address the issues described above, this adds hooks in the channel
removal path (hv_process_channel_removal()) and in target_cpu_store()
in order to clear and to update, respectively, the hv_numa_map[] masks
as needed.  This also adds a (missed) update of the masks in
init_vp_index() (cf., e.g., the memory-allocation failure path in this
function).
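For illustration, the hooks amount to something like the following (a
sketch only; variable names such as "origin_cpu" are illustrative):

	/* Channel-removal path (hv_process_channel_removal()): */
	if (hv_is_perf_channel(channel))
		hv_clear_alloced_cpu(channel->target_cpu);

	/* target_cpu_store(), after retargeting the channel interrupt: */
	if (hv_is_perf_channel(channel))
		hv_update_alloced_cpus(origin_cpu, target_cpu);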

As in init_vp_index(), these hooks need to determine whether the given
channel is performance-critical.  init_vp_index() does this by parsing
the channel's offer; it cannot rely on the device data structure
(device_obj) to retrieve this information because that structure has
not been allocated/linked to the channel by the time init_vp_index()
executes.  A similar situation may hold in hv_is_alloced_cpu() (defined
below); the adopted approach is to "cache" the device type of the
channel, as computed by parsing the channel's offer, in the channel
structure itself.
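Concretely, this amounts to recording the parsed device type in the
channel at offer-handling time, so that hv_is_perf_channel() can simply
index vmbus_devs[].  A sketch (the exact anchor point in the
channel-offer path may differ):

	/* In the offer-handling path, after parsing the offer: */
	newchannel->device_id = hv_get_dev_type(newchannel);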

Fixes: 7527810573 ("Drivers: hv: vmbus: Introduce the CHANNELMSG_MODIFYCHANNEL message type")
Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Link: https://lore.kernel.org/r/20200522171901.204127-3-parri.andrea@gmail.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>

@@ -395,6 +395,54 @@ enum delay {
	MESSAGE_DELAY = 1,
};

extern const struct vmbus_device vmbus_devs[];

static inline bool hv_is_perf_channel(struct vmbus_channel *channel)
{
	return vmbus_devs[channel->device_id].perf_device;
}

static inline bool hv_is_alloced_cpu(unsigned int cpu)
{
	struct vmbus_channel *channel, *sc;

	lockdep_assert_held(&vmbus_connection.channel_mutex);

	/*
	 * List additions/deletions as well as updates of the target CPUs are
	 * protected by channel_mutex.
	 */
	list_for_each_entry(channel, &vmbus_connection.chn_list, listentry) {
		if (!hv_is_perf_channel(channel))
			continue;
		if (channel->target_cpu == cpu)
			return true;
		list_for_each_entry(sc, &channel->sc_list, sc_list) {
			if (sc->target_cpu == cpu)
				return true;
		}
	}
	return false;
}

static inline void hv_set_alloced_cpu(unsigned int cpu)
{
	cpumask_set_cpu(cpu, &hv_context.hv_numa_map[cpu_to_node(cpu)]);
}

static inline void hv_clear_alloced_cpu(unsigned int cpu)
{
	/* Keep the bit set while any (sub)channel still targets this CPU. */
	if (hv_is_alloced_cpu(cpu))
		return;
	cpumask_clear_cpu(cpu, &hv_context.hv_numa_map[cpu_to_node(cpu)]);
}

static inline void hv_update_alloced_cpus(unsigned int old_cpu,
					  unsigned int new_cpu)
{
	hv_set_alloced_cpu(new_cpu);
	hv_clear_alloced_cpu(old_cpu);
}
#ifdef CONFIG_HYPERV_TESTING
int hv_debug_add_dev_dir(struct hv_device *dev);