x86, numa: Remove configurable node size support for numa emulation

Now that numa=fake=<size>[MG] is implemented, it is possible to remove
configurable node size support.  The command-line parsing was already
broken (numa=fake=*128, for example, would not work) and since fake nodes
are now interleaved over physical nodes, this support is no longer
required.
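For readers unfamiliar with the interleaving scheme the message refers to, the sketch below is a minimal userspace illustration, not the kernel's numa_emulation code: the N fake nodes are spread evenly across the physical nodes so that each fake node's memory stays local to one physical node. The physical node count and sizes here are hypothetical.

/*
 * Minimal userspace sketch of "fake nodes interleaved over physical
 * nodes" (illustrative only, not the kernel's implementation): split
 * each physical node's memory into nr_fake / nr_phys fake nodes so
 * every fake node remains node-local.
 */
#include <stdio.h>

int main(void)
{
	/* hypothetical layout: two physical nodes of 2048 MB each */
	unsigned long phys_mb[] = { 2048, 2048 };
	int nr_phys = 2, nr_fake = 8;	/* e.g. booting with numa=fake=8 */
	int fakes_per_phys = nr_fake / nr_phys;

	for (int p = 0; p < nr_phys; p++) {
		unsigned long chunk = phys_mb[p] / fakes_per_phys;

		for (int f = 0; f < fakes_per_phys; f++)
			printf("fake node %d: %lu MB on physical node %d\n",
			       p * fakes_per_phys + f, chunk, p);
	}
	return 0;
}

With these hypothetical values, fake nodes 0-3 land on physical node 0 and 4-7 on physical node 1, each sized 512 MB; no per-node size needs to be specified, which is what makes the configurable-size syntax removable.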

Signed-off-by: David Rientjes <rientjes@google.com>
LKML-Reference: <alpine.DEB.2.00.1002151343080.26927@chino.kir.corp.google.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Author:    David Rientjes <rientjes@google.com> (2010-02-15 13:43:33 -08:00)
Committer: H. Peter Anvin
Parent:    8df5bb34de
Commit:    ca2107c9d6
2 changed files with 17 additions and 161 deletions

@@ -170,19 +170,9 @@ NUMA
 
 		If given as a memory unit, fills all system RAM with nodes of
 		size interleaved over physical nodes.
 
-  numa=fake=CMDLINE
-		If a number, fakes CMDLINE nodes and ignores NUMA setup of the
-		actual machine. Otherwise, system memory is configured
-		depending on the sizes and coefficients listed. For example:
-			numa=fake=2*512,1024,4*256,*128
-		gives two 512M nodes, a 1024M node, four 256M nodes, and the
-		rest split into 128M chunks. If the last character of CMDLINE
-		is a *, the remaining memory is divided up equally among its
-		coefficient:
-			numa=fake=2*512,2*
-		gives two 512M nodes and the rest split into two nodes.
-		Otherwise, the remaining system RAM is allocated to an
-		additional node.
+  numa=fake=<N>
+		If given as an integer, fills all system RAM with N fake nodes
+		interleaved over physical nodes.
 
   ACPI
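To make the removed syntax concrete, here is a hedged userspace sketch of the coefficient*size grammar the deleted documentation describes; it is not the kernel's parser. Each comma-separated token is either a size or count*size, with a trailing "count*" splitting the remainder; a leading '*' as in numa=fake=*128 leaves the coefficient empty, the case the commit message cites as broken.

/*
 * Illustrative sketch of the removed coefficient*size grammar (not the
 * kernel's parsing code). Note how the "*128" token from the documented
 * example yields count == 0, the kind of input the old in-kernel
 * parsing mishandled.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void parse_fake(const char *cmdline)
{
	char *dup = strdup(cmdline), *save = NULL;

	for (char *tok = strtok_r(dup, ",", &save); tok;
	     tok = strtok_r(NULL, ",", &save)) {
		char *star = strchr(tok, '*');
		unsigned long count = 1, size = 0;

		if (star) {
			count = strtoul(tok, NULL, 10);	/* "*128" -> 0 */
			if (star[1])
				size = strtoul(star + 1, NULL, 10);
			/* size == 0 here means "split the remainder" */
		} else {
			size = strtoul(tok, NULL, 10);
		}
		printf("token \"%s\": count=%lu size=%luM\n",
		       tok, count, size);
	}
	free(dup);
}

int main(void)
{
	parse_fake("2*512,1024,4*256,*128");
	return 0;
}

The replacement numa=fake=<N> syntax avoids this grammar entirely: a single integer is parsed and the per-node sizes fall out of the interleaving, as the new documentation text above states.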