Testpmd

The testpmd application can be used to test DPDK in packet forwarding mode, and to exercise NIC hardware features such as Flow Director. It also serves as an example of how to build a more fully featured application with the DPDK SDK.

Forwarding modes supported by testpmd

testpmd> set fwd (io|mac|macswap|flowgen| \
                  rxonly|txonly|csum|icmpecho|noisy|5tswap|shared-rxq) (""|retry)

Forwarding mode descriptions:

  • io: forwards packets in raw I/O mode. This is the fastest forwarding operation because it never accesses packet data. It is the default mode.
  • mac: changes the source and destination Ethernet addresses of packets before forwarding them. By default the application sets the source address to that of the transmitting interface, and the destination address to a dummy value (set at init time). The target destination Ethernet address can be specified via the command-line options eth-peer and eth-peers-configfile. A specific source Ethernet address cannot be specified yet.
  • macswap: MAC swap forwarding mode. Swaps the source and destination Ethernet addresses of packets before forwarding them.
  • flowgen: multi-flow generation mode. Originates a number of flows (with varying destination IP addresses) and terminates receive traffic.
  • rxonly: receives packets but does not transmit them.
  • txonly: generates and transmits packets without receiving any.
  • csum: changes the checksum field with hardware or software methods, depending on the packet's offload flags.
  • icmpecho: receives a burst of packets, looks for ICMP echo requests, and, if any are found, replies with ICMP echo replies.
  • ieee1588: demonstrates L2 IEEE1588 V2 PTP timestamping for RX and TX packets.
  • noisy: noisy neighbour simulation. Simulates the more realistic behavior of a guest machine running a virtual network function (VNF), receiving and sending packets.
  • 5tswap: swaps the source and destination of the L2, L3 and L4 headers, if they exist.
  • shared-rxq: receive only, for shared Rx queues. Resolves the packet source port from the mbuf and updates the flow statistics accordingly.

参考:http://doc.dpdk.org/guides/testpmd_app_ug/testpmd_funcs.html#set-fwd
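As a minimal illustration of what the macswap mode does to each frame, the following standalone Python sketch swaps the two 6-byte MAC fields of a raw Ethernet header. testpmd does this in C on mbufs; this function is only a model of the transformation.

```python
def macswap(frame: bytes) -> bytes:
    """Swap the destination and source MAC addresses of a raw
    Ethernet frame (bytes 0-5 and 6-11), as testpmd's macswap
    forwarding mode does before retransmitting a packet."""
    assert len(frame) >= 14, "Ethernet header is 14 bytes"
    dst, src, rest = frame[0:6], frame[6:12], frame[12:]
    return src + dst + rest

# Destination aa:aa:aa:aa:aa:aa, source bb:bb:bb:bb:bb:bb,
# EtherType 0x0800 (IPv4)
frame = bytes.fromhex("aaaaaaaaaaaa" "bbbbbbbbbbbb" "0800")
swapped = macswap(frame)
```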

Build and run

Run ./usertools/dpdk-setup.sh and select [58] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd):

$ export RTE_SDK=/root/dpdk-stable-19.11.7/
$ export RTE_TARGET=x86_64-native-linux-gcc
$ ./usertools/dpdk-setup.sh
...
----------------------------------------------------------
 Step 3: Run test application for linux environment
----------------------------------------------------------
[57] Run test application ($RTE_TARGET/app/test)
[58] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)

...

[65] Exit Script

Option: 58


  Enter hex bitmask of cores to execute testpmd app on
  Example: to execute app on cores 0 to 7, enter 0xff
bitmask: 0xc  # binary 1100: assigns lcores 2 and 3 (the 3rd and 4th CPUs) to the app
Launching app
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:02:06.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:100f net_e1000_em
EAL: PCI device 0000:03:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 15ad:7b0 net_vmxnet3
Interactive-mode selected
Failed to set MTU to 1500 for port 0
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 00:50:56:30:10:14
Checking link statuses...
Done
testpmd>
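In the hex core bitmask the script asks for, each set bit selects one lcore. A small sketch (not part of DPDK) shows which lcores a mask such as 0xc picks:

```python
def mask_to_lcores(mask: int) -> list[int]:
    """Return the lcore IDs selected by a hex core bitmask:
    bit n set means lcore n is used (0xc = 0b1100 -> lcores 2 and 3)."""
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(mask_to_lcores(0xc))   # lcores 2 and 3
print(mask_to_lcores(0xff))  # lcores 0 through 7
```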
  • View port information
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 00:50:56:30:10:14
Device name: 0000:03:00.0
Driver name: net_vmxnet3
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Supported RSS offload flow types:
  ipv4
  ipv4-tcp
  ipv6
  ipv6-tcp
Minimum size of RX buffer: 1646
Maximum configurable length of RX packet: 16384
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 16
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 128
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 8
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 512
TXDs number alignment: 1
Max segment number per packet: 255
Max segment number per MTU/TSO: 16
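When scripting around testpmd, the key: value lines of show port info are easy to collect into a dict. A hypothetical helper follows; the field names come from the output above, so adjust them to your testpmd version:

```python
def parse_port_info(text: str) -> dict[str, str]:
    """Parse 'Key: value' lines of `show port info` output into a dict.
    Lines without a value (banners, section headers) are skipped.
    partition() splits at the first colon only, so MAC addresses
    and other colon-containing values stay intact."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            if value.strip():
                info[key.strip()] = value.strip()
    return info

sample = """\
MAC address: 00:50:56:30:10:14
Driver name: net_vmxnet3
Link speed: 10000 Mbps
MTU: 1500
"""
info = parse_port_info(sample)
```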

TestPMD can use several different forwarding modes.

Input/output mode (IO mode): usually called IO mode, this is the most common forwarding mode and the default when TestPMD starts. In IO mode, a CPU core receives packets from one port (Rx) and transmits them to another port (Tx). The same port can be used for both receive and transmit if required.

RX-only mode: the application polls the Rx port for packets, then frees them directly without transmitting anything. It acts as a packet sink.

TX-only mode: the application generates 64-byte IP packets and sends them out of the Tx port. It does not receive packets and acts purely as a packet source.

The latter two modes (RX-only and TX-only) are useful for checking receive or transmit behavior in isolation. Beyond these three, the TestPMD documentation describes several other forwarding modes.

Launch command:

sudo ./build/app/testpmd -l 0,1,2 -n 4 -- -i

In this example, the -l option specifies the logical cores: core 0 manages the command line, and cores 1 and 2 forward packets. The -n option specifies the number of memory channels in the system. The -- (double dash) separates the EAL arguments from the application arguments. The -i option selects interactive mode, in which the forwarding configuration can be changed from the command line.

set fwd rxonly: RX-only mode
set fwd txonly: TX-only mode
show port stats all: show port statistics
clear port stats all: clear port statistics
show config fwd: show the forwarding configuration
set nbcore 2: use two forwarding cores

Start forwarding: start

Show port statistics: show port stats all

Stop forwarding: stop
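To turn two consecutive show port stats all snapshots into a packet rate, diff the RX-packets counters over the sampling interval. A sketch under the assumption that the stats output contains a line like RX-packets: <n> (check the exact format of your testpmd version):

```python
import re

def rx_packets(stats_text: str) -> int:
    """Extract the RX-packets counter from one `show port stats`
    snapshot (assumes a line such as 'RX-packets: 12345')."""
    m = re.search(r"RX-packets:\s*(\d+)", stats_text)
    if m is None:
        raise ValueError("no RX-packets counter found")
    return int(m.group(1))

def rx_pps(before: str, after: str, interval_s: float) -> float:
    """Receive rate in packets per second between two snapshots."""
    return (rx_packets(after) - rx_packets(before)) / interval_s

pps = rx_pps("RX-packets: 1000", "RX-packets: 6000", 1.0)
```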

Link: https://www.jianshu.com/p/0ac29ac0936e

testpmd

$ testpmd
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory

This error is caused by too few hugepages being allocated; increasing the hugepage allocation appropriately resolves it.
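One way to increase the allocation, assuming 2 MB hugepages and a hugetlbfs mount point at /mnt/huge (the page count and mount path are examples; adjust them to your system and memory budget):

```shell
# Reserve 1024 x 2MB hugepages (2 GB total) on a single-NUMA system
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs if it is not mounted yet
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge

# Verify the allocation
grep Huge /proc/meminfo
```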

$ testpmd --help
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes

Usage: testpmd [options]

EAL common options:
  -c COREMASK         Hexadecimal bitmask of cores to run on
  -l CORELIST         List of cores to run on
                      The argument format is <c1>[-c2][,c3[-c4],...]
                      where c1, c2, etc are core indexes between 0 and 128
  --lcores COREMAP    Map lcore set to physical cpu set
                      The argument format is
                            '<lcores[@cpus]>[<,lcores[@cpus]>...]'
                      lcores and cpus list are grouped by '(' and ')'
                      Within the group, '-' is used for range separator,
                      ',' is used for single number separator.
                      '( )' can be omitted for single element group,
                      '@' can be omitted if cpus and lcores have the same value
  -s SERVICE COREMASK Hexadecimal bitmask of cores to be used as service cores
  --master-lcore ID   Core ID that is used as master
  --mbuf-pool-ops-name Pool ops name for mbuf to use
  -n CHANNELS         Number of memory channels
  -m MB               Memory to allocate (see also --socket-mem)
  -r RANKS            Force number of memory ranks (don't detect)
  -b, --pci-blacklist Add a PCI device in black list.
                      Prevent EAL from using this PCI device. The argument
                      format is <domain:bus:devid.func>.
  -w, --pci-whitelist Add a PCI device in white list.
                      Only use the specified PCI devices. The argument format
                      is <[domain:]bus:devid.func>. This option can be present
                      several times (once per device).
                      [NOTE: PCI whitelist cannot be used with -b option]
  --vdev              Add a virtual device.
                      The argument format is <driver><id>[,key=val,...]
                      (ex: --vdev=net_pcap0,iface=eth2).
  --iova-mode   Set IOVA mode. 'pa' for IOVA_PA
                      'va' for IOVA_VA
  -d LIB.so|DIR       Add a driver or driver directory
                      (can be used multiple times)
  --vmware-tsc-map    Use VMware TSC map instead of native RDTSC
  --proc-type         Type of this process (primary|secondary|auto)
  --syslog            Set syslog facility
  --log-level=<int>   Set global log level
  --log-level=<type-match>:<int>
                      Set specific log level
  -v                  Display version information on startup
  -h, --help          This help
  --in-memory   Operate entirely in memory. This will
                      disable secondary process support

EAL options for DEBUG use only:
  --huge-unlink       Unlink hugepage files after init
  --no-huge           Use malloc instead of hugetlbfs
  --no-pci            Disable PCI
  --no-hpet           Disable HPET
  --no-shconf         No shared config (mmap'd files)

EAL Linux options:
  --socket-mem        Memory to allocate on sockets (comma separated values)
  --socket-limit      Limit memory allocation on sockets (comma separated values)
  --huge-dir          Directory where hugetlbfs is mounted
  --file-prefix       Prefix for hugepage filenames
  --base-virtaddr     Base virtual address
  --create-uio-dev    Create /dev/uioX (usually done by hotplug)
  --vfio-intr         Interrupt mode for VFIO (legacy|msi|msix)
  --legacy-mem        Legacy memory mode (no dynamic allocation, contiguous segments)
  --single-file-segments Put all hugepage memory in single files
  • Enter interactive mode
$ testpmd -l 2,3 --socket-mem 1024 -n 4 --log-level=8 -- -i
EAL: Detected 4 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done
testpmd>

-l: specifies the lcores TestPMD runs on. The first lcore in the list manages the command line and the remaining lcores forward packets (with -l 2,3 above, lcore 2 runs the command line and lcore 3 forwards). -n: specifies the number of memory channels in the system. --: separates the EAL arguments from the application arguments.
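The -l corelist accepts ranges as well as single cores, following the <c1>[-c2][,c3[-c4],...] format from the help text above. A small sketch of that grammar:

```python
def parse_corelist(spec: str) -> list[int]:
    """Expand a DPDK -l corelist such as '2,3' or '0-2,4' into
    the list of lcore IDs it selects."""
    cores = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cores.extend(range(int(lo), int(hi) + 1))
        else:
            cores.append(int(part))
    return cores

print(parse_corelist("2,3"))    # [2, 3]
print(parse_corelist("0-2,4"))  # [0, 1, 2, 4]
```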

  • Show statistics for all ports

testpmd> show port stats all

  • Check the forwarding configuration

testpmd> show config fwd
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native


  • Start forwarding, with each port first transmitting a burst of packets (useful when there is no external traffic source)

testpmd> start tx_first

  • Stop forwarding

testpmd> stop
