Using dpdk-socket-mem to allocate memory for some NUMA nodes
but leaving it blank for subsequent ones is equivalent to assigning
0 MB of memory to those subsequent nodes. Document this behavior.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
Signed-off-by: Ben Pfaff <blp@ovn.org>
If allocating more than one GB of hugepages, you can configure the
amount of memory used from any given NUMA node. For example, to use 1GB from
+NUMA node 0 and 0GB for all other NUMA nodes, run::
$ ovs-vsctl --no-wait set Open_vSwitch . \
other_config:dpdk-socket-mem="1024,0"
+or::
+
+ $ ovs-vsctl --no-wait set Open_vSwitch . \
+ other_config:dpdk-socket-mem="1024"
+
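The equivalence of the two commands above can be illustrated with a small
sketch (not OVS code; a hypothetical helper that mimics the documented
behavior, where any NUMA node with a blank or missing value gets 0 MB)::

```python
def socket_mem_per_node(spec, num_nodes):
    """Expand a dpdk-socket-mem specifier into per-NUMA-node MB.

    Hypothetical helper for illustration only: values are taken in
    NUMA-node order, and any node with a blank or absent entry is
    treated as 0 MB, matching the documented behavior.
    """
    fields = spec.split(",")
    mem = []
    for node in range(num_nodes):
        if node < len(fields) and fields[node].strip():
            mem.append(int(fields[node]))
        else:
            mem.append(0)  # blank or missing entry -> 0 MB for this node
    return mem

# On a two-node system, "1024,0" and "1024" yield the same allocation.
print(socket_mem_per_node("1024,0", 2))  # [1024, 0]
print(socket_mem_per_node("1024", 2))    # [1024, 0]
```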
Similarly, if you wish to better scale the workloads across cores, then
multiple pmd threads can be created and pinned to CPU cores by explicitly
specifying ``pmd-cpu-mask``. Cores are numbered from 0, so to spawn two pmd
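Since ``pmd-cpu-mask`` is a hexadecimal bitmask of core IDs, the mask for a
given set of cores can be computed with a short sketch (illustrative only,
assuming cores are numbered from 0 as stated above)::

```python
def pmd_cpu_mask(cores):
    """Build a pmd-cpu-mask hex string from a list of CPU core IDs.

    Each core ID sets bit (1 << core); cores are numbered from 0.
    """
    mask = 0
    for core in cores:
        mask |= 1 << core
    return hex(mask)

# Two pmd threads pinned to cores 1 and 2 -> mask 0x6.
print(pmd_cpu_mask([1, 2]))  # 0x6
```

The resulting mask would then be applied with, for example,
``ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6``.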
</p>
<p>
The specifier is a comma-separated string, in ascending order of CPU
- socket (ex: 1024,2048,4096,8192 would set socket 0 to preallocate
- 1024MB, socket 1 to preallocate 2048MB, etc.)
+ socket. E.g. on a four-socket system, 1024,0,2048 would set socket 0
+ to preallocate 1024MB, socket 1 to preallocate 0MB, socket 2 to
+ preallocate 2048MB, and socket 3 (no value given) to preallocate 0MB.
</p>
<p>
If dpdk-socket-mem and dpdk-alloc-mem are not specified, dpdk-socket-mem
</p>