TRex
====
:author: hhaim
:email: <hhaim@cisco.com>
:revnumber: 2.1
:quotes.++:
:numbered:
:web_server_url: http://trex-tgn.cisco.com/trex
:local_web_server_url: csi-wiki-01:8181/trex
:toclevels: 4
include::trex_ga.asciidoc[]
== Introduction
=== A word on traffic generators
Traditionally, routers have been tested using commercial traffic generators, while performance
typically has been measured using packets per second (PPS) metrics. As router functionality and
services have become more complex, stateful traffic generators are needed to provide more realistic traffic scenarios.
Advantages of realistic traffic generators:
* Accurate performance metrics.
* Discovering bottlenecks in realistic traffic scenarios.
==== Current Challenges:
* *Cost*: Commercial stateful traffic generators are very expensive.
* *Scale*: Bandwidth does not scale up well with feature complexity.
* *Standardization*: Lack of standardization of traffic patterns and methodologies.
* *Flexibility*: Commercial tools do not allow agility when flexibility and changes are needed.
==== Implications
* High capital expenditure (capex) spent by different teams.
* Testing in low scale and extrapolation became a common practice. This is non-ideal and fails to indicate bottlenecks that appear in real-world scenarios.
* Teams use different benchmark methodologies, so results are not standardized.
* Delays in development and testing due to dependence on testing tool features.
* Resource and effort investment in developing different ad hoc tools and test methodologies.
=== Overview of TRex
TRex addresses these problems through an innovative and extendable software implementation and by leveraging standard and open software and x86/UCS hardware.
* Generates and analyzes L4-7 traffic. In one package, provides capabilities of commercial L7 tools.
* Stateful traffic generator based on pre-processing and smart replay of real traffic templates.
* Generates and *amplifies* both client and server side traffic.
* Customized functionality can be added.
* Scales to 200Gb/sec for one UCS (using Intel 40Gb/sec NICs).
* Low cost.
* Self-contained package that can be easily installed and deployed.
* Virtual interface support enables TRex to be used in a fully virtual environment without physical NICs. Example use cases:
** Amazon AWS
** Cisco LaaS
// Which LaaS is this? Location as a service? Linux?
** TRex on your laptop
.TRex Hardware
[options="header",cols="1^,1^"]
|=================
|Cisco UCS Platform | Intel NIC
| image:images/ucs200_2.png[title="generator"] | image:images/Intel520.png[title="generator"]
|=================
=== Purpose of this guide
This guide explains the use of TRex internals and the use of TRex together with Cisco ASR1000 Series routers. The examples illustrate novel traffic generation techniques made possible by TRex.
== Download and installation
=== Hardware recommendations
TRex operates in a Linux application environment, interacting with Linux kernel modules.
TRex currently works on x86 architecture and operates well on Cisco UCS hardware. The following platforms have been tested and are recommended for operating TRex.
[NOTE]
=====================================
A high-end UCS platform is not required for operating TRex in its current version, but may be required for future versions.
=====================================
[NOTE]
=====================================
Not all interfaces supported by DPDK are supported by TRex.
=====================================
.Preferred UCS hardware
[options="header",cols="1,3"]
|=================
| UCS Type | Comments
| UCS C220 Mx | *Preferred Low-End*. Supports up to 40Gb/sec with 540-D2. With newer Intel NIC (recommended), supports 80Gb/sec with 1RU. See table below describing components.
| UCS C200| Early UCS model.
| UCS C210 Mx | Supports up to 40Gb/sec PCIe3.0.
| UCS C240 Mx | *Preferred High-End*. Supports up to 200Gb/sec. 6x XL710 NICs (PCIe x8) or 2x FM10K (PCIe x16). See table below describing components.
| UCS C260M2 | Supports up to 30Gb/sec (limited by V2 PCIe).
|=================
.Low-End UCS C220 Mx - Internal components
[options="header",cols="1,2",width="60%"]
|=================
| Components | Details
| CPU | 2x E5-2620 @ 2.0 GHz.
| CPU Configuration | 2-Socket CPU configurations (also works with 1 CPU).
| Memory | 2x4 banks for each CPU. Total of 32GB in 8 banks.
| RAID | No RAID.
|=================
.High-End C240 Mx - Internal components
[options="header",cols="1,2",width="60%"]
|=================
| Components | Details
| CPU | 2x E5-2667 @ 3.20 GHz.
| PCIe | 1x Riser PCI expansion card option A PID UCSC-PCI-1A-240M4 enables 2 PCIex16.
| CPU Configuration | 2-Socket CPU configurations (also works with 1 CPU).
| Memory | 2x4 banks for each CPU. Total of 32GB in 8 banks.
| RAID | No RAID.
| Riser 1/2 | Both left and right risers should support x16 PCIe. Right (Riser 1) should be option A x16, and left (Riser 2) should be x16. Order both.
|=================
.Supported NICs
[options="header",cols="1,1,4",width="90%"]
|=================
| Chipset | Bandwidth (Gb/sec) | Example
| Intel I350 | 1 | Intel 4x1GE 350-T4 NIC
| Intel 82599 | 10 | Cisco part ID:N2XX-AIPCI01 Intel x520-D2, Intel X520 Dual Port 10Gb SFP+ Adapter
| Intel 82599 VF | x |
| Intel X710 | 10 | Cisco part ID: UCSC-PCIE-IQ10GF link:https://en.wikipedia.org/wiki/Small_form-factor_pluggable_transceiver[SFP+]. *Preferred*; supports per-stream stats in hardware. link:http://www.silicom-usa.com/PE310G4i71L_Quad_Port_Fiber_SFP+_10_Gigabit_Ethernet_PCI_Express_Server_Adapter_49[Silicom PE310G4i71L]
| Intel XL710 | 40 | Cisco part ID:UCSC-PCIE-ID40GF, link:https://en.wikipedia.org/wiki/QSFP[QSFP+] (copper/optical)
| Intel XL710/X710 VF | x |
| Intel FM10420 | 25/100 | QSFP28, by Silicom link:http://www.silicom-usa.com/100_Gigabit_Dual_Port_Fiber_Ethernet_PCI_Express_PE3100G2DQiR_96[Silicom PE3100G2DQiR_96] (*in development*)
| Mellanox ConnectX-4 | 25/40/50/56/100 | QSFP28, link:http://www.mellanox.com/page/products_dyn?product_family=201&[ConnectX-4] link:http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-4_VPI_Card.pdf[ConnectX-4-brief] (copper/optical) supported from v2.11 more details xref:connectx_support[TRex Support]
| Mellanox ConnectX-5 | 25/40/50/56/100 | Not supported yet
| Cisco 1300 series | 40 | QSFP+, VIC 1380, VIC 1385, VIC 1387 see more xref:ciscovic_support[TRex Support]
| VMXNET / +
VMXNET3 (see notes) | VMware paravirtualized | Connect using VMware vSwitch
| E1000 | paravirtualized | VMware/KVM/VirtualBox
| Virtio | paravirtualized | KVM
|=================
// in table above, is it correct to list "paravirtualized" as chipset? Also, what is QSFP28? It does not appear on the lined URL. Clarify: is Intel X710 the preferred NIC?
.SFP+ support
[options="header",cols="2,1,1,1",width="90%"]
|=================
| link:https://en.wikipedia.org/wiki/Small_form-factor_pluggable_transceiver[SFP+] | Intel Ethernet Converged X710-DAX | Silicom link:http://www.silicom-usa.com/PE310G4i71L_Quad_Port_Fiber_SFP+_10_Gigabit_Ethernet_PCI_Express_Server_Adapter_49[PE310G4i71L] (Open optic) | 82599EB 10-Gigabit
| link:http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html[Cisco SFP-10G-SR] | Does not work | [green]*works* | [green]*works*
| link:http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html[Cisco SFP-10G-LR] | Does not work | [green]*works* | [green]*works*
| link:http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html[Cisco SFP-H10GB-CU1M]| [green]*works* | [green]*works* | [green]*works*
| link:http://www.cisco.com/c/en/us/products/collateral/interfaces-modules/transceiver-modules/data_sheet_c78-455693.html[Cisco SFP-10G-AOC1M] | [green]*works* | [green]*works* | [green]*works*
|=================
[NOTE]
=====================================
Intel X710 NIC (example: FH X710DA4FHBLK) operates *only* with Intel SFP+. For open optic, use the link:http://www.silicom-usa.com/PE310G4i71L_Quad_Port_Fiber_SFP+_10_Gigabit_Ethernet_PCI_Express_Server_Adapter_49[Silicom PE310G4i71L] NIC.
=====================================
// clarify above table and note
.XL710 NIC base QSFP+ support
[options="header",cols="1,1,1",width="90%"]
|=================
| link:https://en.wikipedia.org/wiki/QSFP[QSFP+] | Intel Ethernet Converged XL710-QDAX | Silicom link:http://www.silicom-usa.com/Dual_Port_Fiber_40_Gigabit_Ethernet_PCI_Express_Server_Adapter_PE340G2Qi71_83[PE340G2Qi71] Open optic
| QSFP+ SR4 optics | APPROVED OPTICS [green]*works*, Cisco QSFP-40G-SR4-S does *not* work | Cisco QSFP-40G-SR4-S [green]*works*
| QSFP+ LR-4 Optics | APPROVED OPTICS [green]*works*, Cisco QSFP-40G-LR4-S does *not* work | Cisco QSFP-40G-LR4-S [green]*works*
| QSFP Active Optical Cables (AoC) | Cisco QSFP-H40G-AOC [green]*works* | Cisco QSFP-H40G-AOC [green]*works*
| QSFP+ Intel Ethernet Modular Optics | N/A | N/A
| QSFP+ DA twin-ax cables | N/A | N/A
| Active QSFP+ Copper Cables | Cisco QSFP-4SFP10G-CU [green]*works* | Cisco QSFP-4SFP10G-CU [green]*works*
|=================
[NOTE]
=====================================
For Intel XL710 NICs, Cisco SR4/LR QSFP+ does not operate. Use Silicom with Open Optic.
=====================================
.ConnectX-4 NIC base QSFP28 support (100gb)
[options="header",cols="1,2",width="90%"]
|=================
| link:https://en.wikipedia.org/wiki/QSFP[QSFP28] | ConnectX-4
| QSFP28 SR4 optics | N/A
| QSFP28 LR-4 Optics | N/A
| QSFP28 (AoC) | Cisco QSFP-100G-AOCxM [green]*works*
| QSFP28 DA twin-ax cables | Cisco QSFP-100G-CUxM [green]*works*
|=================
.Cisco VIC NIC base QSFP+ support
[options="header",cols="1,2",width="90%"]
|=================
| link:https://en.wikipedia.org/wiki/QSFP[QSFP+] | Intel Ethernet Converged XL710-QDAX
| QSFP+ SR4 optics | N/A
| QSFP+ LR-4 Optics | N/A
| QSFP Active Optical Cables (AoC) | Cisco QSFP-H40G-AOC [green]*works*
| QSFP+ Intel Ethernet Modular Optics | N/A
| QSFP+ DA twin-ax cables | N/A | N/A
| Active QSFP+ Copper Cables | N/A
|=================
// clarify above table and note. let's discuss.
.FM10K QSFP28 support
[options="header",cols="1,1",width="70%"]
|=================
| QSFP28 | Example
| Pending | Pending
|=================
[IMPORTANT]
=====================================
* Intel SFP+ 10Gb/sec modules are the only ones supported by default with the standard Linux driver. TRex also supports Cisco 10Gb/sec SFP+ modules.
* For operating high speed throughput (example: several Intel XL710 40Gb/sec), use different link:https://en.wikipedia.org/wiki/Non-uniform_memory_access[NUMA] nodes for different NICs. +
To verify NUMA and NIC topology: `lstopo (yum install hwloc)` +
To display CPU info, including NUMA node: `lscpu` +
NUMA usage xref:numa-example[example]
* For Intel XL710 NICs, verify that the NVM is v5.04 . xref:xl710-firmware[Info].
** `> sudo ./t-rex-64 -f cap2/dns.yaml -d 0 *-v 6* --nc | grep NVM` +
`PMD: FW 5.0 API 1.5 NVM 05.00.04 eetrack 800013fc`
=====================================
// above, maybe rename the bullet points "NIC usage notes"? should we create a subsection for NICs? Maybe it would be under "2.1 Hardware recommendations" as a subsection.
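As a concrete illustration of the NUMA check above, each PCI device's NUMA node can also be read directly from sysfs. This is a hedged sketch assuming the standard Linux layout `/sys/bus/pci/devices/<addr>/numa_node`; on a single-socket machine the reported value may be `0` or `-1`.

```shell
# Hedged sketch: list the NUMA node of every PCI device via sysfs,
# assuming the standard /sys/bus/pci/devices/<addr>/numa_node layout.
for f in /sys/bus/pci/devices/*/numa_node; do
    [ -e "$f" ] || continue                 # no PCI devices visible (e.g. in a container)
    addr=$(basename "$(dirname "$f")")      # PCI address, e.g. 0000:03:00.0
    echo "$addr numa_node=$(cat "$f")"
done
echo "scan complete"
```

Match the traffic ports' NUMA nodes against the CPU topology from `lscpu` or `lstopo` when assigning cores.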
.Sample order for recommended low-end Cisco UCSC-C220-M3S with 4x10Gb ports
[options="header",cols="1,1",width="70%"]
|=================
| Component | Quantity
| UCSC-C220-M3S | 1
| UCS-CPU-E5-2650 | 2
| UCS-MR-1X041RY-A | 8
| A03-D500GC3 | 1
| N2XX-AIPCI01 | 2
| UCSC-PSU-650W | 1
| SFS-250V-10A-IS | 1
| UCSC-CMA1 | 1
| UCSC-HS-C220M3 | 2
| N20-BBLKD | 7
| UCSC-PSU-BLKP | 1
| UCSC-RAIL1 | 1
|=================
// should table above say "low-end Cisco UCS C220 M3S" instead of "low-end USCS-C220-M3S"?
NOTE: Purchase the 10Gb/sec SFP+ modules separately. Cisco SFP+ modules work with TRex (DPDK driver), but not with the plain Linux driver.
=== Installing OS
==== Supported versions
Supported Linux versions:
* Fedora 20-23, 64-bit kernel (not 32-bit)
* Ubuntu 14.04.1 LTS, 64-bit kernel (not 32-bit)
* Ubuntu 16.xx LTS, 64-bit kernel (not 32-bit) -- not fully supported
* CentOS/RedHat 7.2, 64-bit kernel (not 32-bit) -- the only working option for ConnectX-4
NOTE: Additional OS versions may be supported by compiling the necessary drivers.
To check whether a kernel is 64-bit, verify that the output of the following command is `x86_64`.
[source,bash]
----
$uname -m
x86_64
----
==== Download Linux
ISO images for supported Linux releases can be downloaded from:
.Supported Linux ISO image links
[options="header",cols="1^,2^",width="50%"]
|======================================
| Distribution | SHA256 Checksum
| link:http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-DVD.iso[Fedora 20]
| link:http://archives.fedoraproject.org/pub/archive/fedora/linux/releases/20/Fedora/x86_64/iso/Fedora-20-x86_64-CHECKSUM[Fedora 20 CHECKSUM]
| link:http://fedora-mirror01.rbc.ru/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-DVD-x86_64-21.iso[Fedora 21]
| link:http://fedora-mirror01.rbc.ru/pub/fedora/linux/releases/21/Server/x86_64/iso/Fedora-Server-21-x86_64-CHECKSUM[Fedora 21 CHECKSUM]
| link:http://old-releases.ubuntu.com/releases/14.04.1/ubuntu-14.04-desktop-amd64.iso[Ubuntu 14.04.1]
| http://old-releases.ubuntu.com/releases/14.04.1/SHA256SUMS[Ubuntu 14.04* CHECKSUMs]
| link:http://releases.ubuntu.com/16.04.1/ubuntu-16.04.1-server-amd64.iso[Ubuntu 16.04.1]
| http://releases.ubuntu.com/16.04.1/SHA256SUMS[Ubuntu 16.04* CHECKSUMs]
|======================================
For Fedora downloads...
* Select a mirror close to your location: +
https://admin.fedoraproject.org/mirrormanager/mirrors/Fedora +
Choose: "Fedora Linux http" -> releases -> <version number> -> Server -> x86_64 -> iso -> Fedora-Server-DVD-x86_64-<version number>.iso
* Verify the checksum of the downloaded file matches the linked checksum values with the `sha256sum` command. Example:
[source,bash]
----
$sha256sum Fedora-18-x86_64-DVD.iso
91c5f0aca391acf76a047e284144f90d66d3d5f5dcd26b01f368a43236832c03 #<1>
----
<1> Should be equal to the link:https://en.wikipedia.org/wiki/SHA-2[SHA-256] values described in the linked checksum files.
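The comparison can also be scripted. The following is a minimal sketch in which the file and the expected hash are placeholders for illustration; in practice the expected value is copied from the linked CHECKSUM file.

```shell
# Hedged sketch: compare a file's SHA-256 against an expected value.
# The file and the expected hash below are placeholders for illustration.
printf 'demo iso contents\n' > /tmp/demo.iso            # stand-in for the real ISO
expected=$(sha256sum /tmp/demo.iso | awk '{print $1}')  # normally copied from the CHECKSUM file
actual=$(sha256sum /tmp/demo.iso | awk '{print $1}')
if [ "$expected" = "$actual" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH"
fi
```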
==== Install Linux
Ask your lab admin to install Linux using CIMC, assign an IP address, and set the DNS. Request the sudo or root password so that you can ping and SSH.
xref:fedora21_example[Example of installing Fedora 21 Server]
[NOTE]
=====================================
* To use TRex, you need sudo privileges or the root password on the machine.
* Upgrading the Linux kernel using `yum upgrade` requires rebuilding the TRex drivers.
* In Ubuntu 16, the auto-updater is enabled by default. It is advised to turn it off, since a kernel update requires recompiling the DPDK .ko file. +
Command to remove it: +
> sudo apt-get remove unattended-upgrades
=====================================
==== Verify Intel NIC installation
Use `lspci` to verify the NIC installation.
Example 4x 10Gb/sec TRex configuration (see output below):
* I350 management port
* 4x Intel Ethernet Converged Network Adapter model x520-D2 (82599 chipset)
[source,bash]
----
$[root@trex]lspci | grep Ethernet
01:00.0 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) #<1>
01:00.1 Ethernet controller: Intel Corporation I350 Gigabit Network Connection (rev 01) #<2>
03:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01) #<3>
03:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:00.0 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
82:00.1 Ethernet controller: Intel Corporation 82599EB 10-Gigabit SFI/SFP+ Network Connection (rev 01)
----
<1> Management port
<2> CIMC port
<3> 10Gb/sec traffic ports (Intel 82599EB)
=== Obtaining the TRex package
Connect using `ssh` to the TRex machine and execute the commands described below.
NOTE: Prerequisite: *$WEB_URL* is *{web_server_url}* or *{local_web_server_url}* (Cisco internal)
Latest release:
[source,bash]
----
$mkdir trex
$cd trex
$wget --no-cache $WEB_URL/release/latest
$tar -xzvf latest
----
Bleeding edge version:
[source,bash]
----
$wget --no-cache $WEB_URL/release/be_latest
----
To obtain a specific version, do the following:
[source,bash]
----
$wget --no-cache $WEB_URL/release/vX.XX.tar.gz #<1>
----
<1> X.XX = Version number
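The release URL can be assembled from a version variable following the convention above; the version number below is a placeholder, and `$WEB_URL` is the server described in the prerequisite note.

```shell
# Hedged sketch: build the download URL for a specific TRex release,
# following the $WEB_URL/release/vX.XX.tar.gz convention shown above.
WEB_URL="http://trex-tgn.cisco.com/trex"   # or the Cisco-internal mirror
VER="2.18"                                 # placeholder version number
URL="$WEB_URL/release/v$VER.tar.gz"
echo "$URL"
# wget --no-cache "$URL" && tar -xzvf "v$VER.tar.gz"
```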
== First time Running
=== Configuring for loopback
Before connecting TRex to your DUT, it is strongly advised to verify that TRex and the NICs work correctly in loopback. +
To get best performance, it is advised to loop back interfaces on the same NUMA node (controlled by the same physical processor). If you do not know how to check this, you can ignore this advice for now. +
[NOTE]
=====================================================================
If you are using a 10Gb/sec NIC based on the Intel X520-D2, and you loop back ports on the same NIC using SFP+, the link might not sync, and you will fail to get link up. +
We checked many types of SFP+ (Intel/Cisco/SR/LR) and all of them worked for us. +
If you still encounter link issues, you can either loop back interfaces from different NICs, or use a link:http://www.fiberopticshare.com/tag/cisco-10g-twinax[Cisco twinax copper cable].
=====================================================================
.Loopback example
image:images/loopback_example.png[title="Loopback example"]
==== Identify the ports
[source,bash]
----
$>sudo ./dpdk_setup_ports.py -s
Network devices using DPDK-compatible driver
============================================
Network devices using kernel driver
===================================
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<1>
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
0000:02:00.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio *Active* #<2>
Other network devices
=====================
<none>
----
<1> If you have not run any DPDK application yet, you will see the list of interfaces bound to the kernel, or not bound at all.
<2> The interface marked as 'active' is the one used by your ssh connection. *Never* put it in the TRex config file.
Choose the ports to use and follow the instructions in the next section to create a configuration file.
==== Creating minimum configuration file
Default configuration file name is: `/etc/trex_cfg.yaml`.
You can copy a basic configuration file from the cfg folder:
[source,bash]
----
$cp cfg/simple_cfg.yaml /etc/trex_cfg.yaml
----
Then, edit the configuration file, filling in your interface and IP address details.
Example:
[source,yaml]
----
- port_limit    : 2
  version       : 2
  # List of interfaces. Change to suit your setup. Use ./dpdk_setup_ports.py -s to see available options.
  interfaces    : ["03:00.0", "03:00.1"] #<1>
  port_info     :  # Port IPs. Change to suit your needs. In case of loopback, you can leave as is.
      - ip         : 1.1.1.1
        default_gw : 2.2.2.2
      - ip         : 2.2.2.2
        default_gw : 1.1.1.1
----
<1> You need to edit this line to match the interfaces you are using.
Note that all NICs in use must be of the same type; you cannot mix different NIC types in one config file. For more info, see link:http://trex-tgn.cisco.com/youtrack/issue/trex-201[trex-201].
You can find xref:trex_config[here] full list of configuration file options.
=== Script for creating config file
To help you create a basic configuration file that suits your needs, there is a script that can automate this process.
The script gets you started; you can then edit the file and add advanced options from xref:trex_config[here]
if needed. +
There are two ways to run the script: interactively (the script prompts you for parameters), or by providing all parameters
using command line options.
==== Interactive mode
[source,bash]
----
sudo ./dpdk_setup_ports.py -i
----
You will see a list of available interfaces with their related information. +
Just follow the instructions to get a basic config file.
==== Specifying input arguments using command line options
First, run this command to see the list of all interfaces and their related information:
[source,bash]
----
sudo ./dpdk_setup_ports.py -t
----
* In case of *Loopback* and/or only *L1-L2 Switches* on the way, you do not need to provide IPs or destination MACs. +
The script will assume the following interface connections: 0↔1, 2↔3 etc. +
Just run:
[source,bash]
----
sudo ./dpdk_setup_ports.py -c <TRex interface 0> <TRex interface 1> ...
----
* In case of *Router* (or other next hop device, such as *L3 Switch*), you should specify the TRex IPs and default gateways, or
MACs of the router as described below.
.Additional arguments to creating script (dpdk_setup_ports.py -c)
[options="header",cols="2,5,3",width="100%"]
|=================
| Arg | Description | Example
| -c | Create a configuration file by specified interfaces (PCI address or Linux names: eth1 etc.) | -c 03:00.1 eth1 eth4 84:00.0
| --dump | Dump created config to screen. |
| -o | Output the config to this file. | -o /etc/trex_cfg.yaml
| --dest-macs | Destination MACs to be used for each interface. Specify this option if you want a MAC based config instead of an IP based one. Must not be set together with --ip and --def-gw. | --dest-macs 11:11:11:11:11:11 22:22:22:22:22:22
| --ip | List of IPs to use for each interface. If neither this option nor --dest-macs is specified, the script assumes loopback connections (0↔1, 2↔3 etc.) | --ip 1.2.3.4 5.6.7.8
|--def-gw | List of default gateways to use for each interface. If --ip is given, you must provide --def-gw as well | --def-gw 3.4.5.6 7.8.9.10
| --ci | Cores include: White list of cores to use. Make sure there is enough for each NUMA. | --ci 0 2 4 5 6
| --ce | Cores exclude: Black list of cores to exclude. Make sure there will be enough for each NUMA. | --ce 10 11 12
| --no-ht | No HyperThreading: Use only one thread of each Core in created config yaml. |
| --prefix | Advanced option: prefix to be used in TRex config in case of parallel instances. | --prefix first_instance
| --zmq-pub-port | Advanced option: ZMQ Publisher port to be used in TRex config in case of parallel instances. | --zmq-pub-port 4000
| --zmq-rpc-port | Advanced option: ZMQ RPC port to be used in TRex config in case of parallel instances. | --zmq-rpc-port
| --ignore-numa | Advanced option: Ignore NUMAs for config creation. Use this option only if you have to, as it might reduce performance. For example, if you have pair of interfaces at different NUMAs |
|=================
=== Configuring ESXi for running TRex
To get best performance, it is advised to run TRex on bare metal hardware, without any kind of VM.
Bandwidth on a VM might be limited, and IPv6 might not be fully supported.
Having said that, there are sometimes benefits to running on a VM. +
These include: +
* Virtual NICs can be used to bridge between TRex and NICs not supported by TRex. +
* You already have a VM installed and do not require high performance. +
1. Click the host machine, enter Configuration -> Networking.
a. One of the NICs should be connected to the main vSwitch network to get an "outside" connection, for the TRex client and ssh: +
image:images/vSwitch_main.png[title="vSwitch_main"]
b. Other NICs, used for TRex traffic, should be in a separate vSwitch: +
image:images/vSwitch_loopback.png[title="vSwitch_loopback"]
2. Right-click guest machine -> Edit settings -> Ensure the NICs are set to their networks: +
image:images/vSwitch_networks.png[title="vSwitch_networks"]
[NOTE]
=====================================================================
Before version 2.10, the following command did not function as expected:
[subs="quotes"]
....
sudo ./t-rex-64 -f cap2/dns.yaml *--lm 1 --lo* -l 1000 -d 100
....
The vSwitch did not "know" where to route the packet. This was solved in version 2.10, when TRex started to support ARP.
=====================================================================
* Pass-through allows using the host machine's NICs directly inside the VM. It has no limitations except those of the NIC/hardware itself. The only difference from a bare-metal OS is occasional latency spikes (~10ms). Pass-through settings cannot be saved to OVA.
1. Click on the host machine. Enter Configuration -> Advanced settings -> Edit. Mark the desired NICs. Reboot the ESXi to apply. +
image:images/passthrough_marking.png[title="passthrough_marking"]
2. Right click on guest machine. Edit settings -> Add -> *PCI device* -> Choose the NICs one by one. +
image:images/passthrough_adding.png[title="passthrough_adding"]
=== Configuring for running with router (or other L3 device) as DUT
You can follow link:trex_config_guide.html[this] presentation for an example of how to configure router as DUT.
=== Running TRex
When all is set, use the following command to start basic TRex run for 10 seconds
(it will use the default config file name /etc/trex_cfg.yaml):
[source,bash]
----
$sudo ./t-rex-64 -f cap2/dns.yaml -c 4 -m 1 -d 10 -l 1000
----
If successful, the output will be similar to the following:
[source,python]
----
$ sudo ./t-rex-64 -f cap2/dns.yaml -d 10 -l 1000
Starting TRex 2.09 please wait ...
zmq publisher at: tcp://*:4500
number of ports found : 4
port : 0
------------
link : link : Link Up - speed 10000 Mbps - full-duplex <1>
promiscuous : 0
port : 1
------------
link : link : Link Up - speed 10000 Mbps - full-duplex
promiscuous : 0
port : 2
------------
link : link : Link Up - speed 10000 Mbps - full-duplex
promiscuous : 0
port : 3
------------
link : link : Link Up - speed 10000 Mbps - full-duplex
promiscuous : 0
-Per port stats table
ports | 0 | 1 | 2 | 3
-------------------------------------------------------------------------------------
opackets | 1003 | 1003 | 1002 | 1002
obytes | 66213 | 66229 | 66132 | 66132
ipackets | 1003 | 1003 | 1002 | 1002
ibytes | 66225 | 66209 | 66132 | 66132
ierrors | 0 | 0 | 0 | 0
oerrors | 0 | 0 | 0 | 0
Tx Bw | 217.09 Kbps | 217.14 Kbps | 216.83 Kbps | 216.83 Kbps
-Global stats enabled
Cpu Utilization : 0.0 % <2> 29.7 Gb/core <3>
Platform_factor : 1.0
Total-Tx : 867.89 Kbps <4>
Total-Rx : 867.86 Kbps <5>
Total-PPS : 1.64 Kpps
Total-CPS : 0.50 cps
Expected-PPS : 2.00 pps <6>
Expected-CPS : 1.00 cps <7>
Expected-BPS : 1.36 Kbps <8>
Active-flows : 0 <9> Clients : 510 Socket-util : 0.0000 %
Open-flows : 1 <10> Servers : 254 Socket : 1 Socket/Clients : 0.0
drop-rate : 0.00 bps <11>
current time : 5.3 sec
test duration : 94.7 sec
-Latency stats enabled
Cpu Utilization : 0.2 % <12>
if| tx_ok , rx_ok , rx ,error, average , max , Jitter , max window
| , , check, , latency(usec),latency (usec) ,(usec) ,
--------------------------------------------------------------------------------------------------
0 | 1002, 1002, 0, 0, 51 , 69, 0 | 0 69 67 <13>
1 | 1002, 1002, 0, 0, 53 , 196, 0 | 0 196 53
2 | 1002, 1002, 0, 0, 54 , 71, 0 | 0 71 69
3 | 1002, 1002, 0, 0, 53 , 193, 0 | 0 193 52
----
<1> Link must be up for TRex to work.
<2> Average CPU utilization of the transmitter threads. For best results it should be lower than 80%.
<3> Gb/sec generated per core of DP. Higher is better.
<4> Total Tx must be the same as Rx at the end of the run
<5> Total Rx must be the same as Tx at the end of the run
<6> Expected number of packets per second (calculated without latency packets).
<7> Expected number of connections per second (calculated without latency packets).
<8> Expected number of bits per second (calculated without latency packets).
<9> Number of TRex active "flows". Could be different from the number of router flows, due to aging issues. Usually the TRex number of active flows is much lower than that of the router, because the router ages flows more slowly.
<10> Total number of TRex flows opened since startup (including active ones, and ones already closed).
<11> Drop rate.
<12> Rx and latency thread CPU utilization.
<13> Tx_ok on port 0 should equal Rx_ok on port 1, and vice versa.
More statistics information:
*socket*:: Same as the active flows.
*Socket/Clients*:: Average of active flows per client, calculated as active_flows/#clients.
*Socket-util*:: Estimation of the number of L4 ports (sockets) used per client IP, calculated as (100 * active_flows / #clients) / 64K. Utilization of more than 50% means that TRex is generating too many flows per single client; more clients must be added in the generator config.
*Max window*:: Momentary maximum latency over a time window of 500 msec. Several numbers are shown per port.
The newest number (last 500 msec) is on the right, the oldest on the left. This can help identify spikes of high latency that clear after some time. Maximum latency is the total maximum over the entire test duration. To best understand this,
run TRex with the latency option (-l) and watch the results with this section in mind.
*Platform_factor*:: There are cases in which we duplicate the traffic using splitter/switch and we would like all numbers displayed by TRex to be multiplied by this factor, so that TRex counters will match the DUT counters.
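The Socket-util estimate above can be checked with a quick calculation. The numbers below are illustrative, not taken from a real run:

```shell
# Socket-util estimate: (100 * active_flows / #clients) / 64K.
# 163840 flows and 510 clients are illustrative figures only.
active_flows=163840
clients=510
awk -v af="$active_flows" -v c="$clients" \
    'BEGIN { printf "Socket-util: %.4f %%\n", (100 * af / c) / 65536 }'
```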
WARNING: If you don't see rx packets, revisit your MAC address configuration.
include::trex_book_basic.asciidoc[]
== Advanced features
=== VLAN (dot1q) support
If you want a VLAN tag to be added to all traffic generated by TRex, you can achieve that by adding the ``vlan'' keyword in each
port section in the platform config file, as described xref:trex_config[here]. +
You can specify different VLAN tag for each port, or even use VLAN only on some of the ports. +
One useful application of this can be in a lab setup where you have one TRex and many DUTs, and you want to test different
DUT on each run, without changing cable connections. You can put each DUT on a VLAN of its own, and use different TRex
platform config files with different VLANs on each run.
=== Utilizing maximum port bandwidth in case of asymmetric traffic profile
anchor:trex_load_bal[]
[NOTE]
If you want simple VLAN support, this is probably *not* the feature you want. This is used for load balancing.
If you want VLAN support, please look at ``vlan'' field xref:trex_config[here].
The VLAN Trunk TRex feature attempts to solve the router port bandwidth limitation when the traffic profile is asymmetric. Example: Asymmetric SFR profile.
This feature converts asymmetric traffic to symmetric, from the port perspective, using router sub-interfaces.
This requires TRex to send the traffic on two VLANs, as described below.
.YAML format - This goes into traffic yaml file
[source,python]
----
vlan : { enable : 1 , vlan0 : 100 , vlan1 : 200 }
----
.Example
[source,python]
----
- duration : 0.1
vlan : { enable : 1 , vlan0 : 100 , vlan1 : 200 } <1>
----
<1> Enable load balance feature, vlan0==100 , vlan1==200
For a full file example please look in TRex source at scripts/cap2/ipv4_load_balance.yaml
*Problem definition:*::
Scenario: TRex with two ports and an SFR traffic profile.
.Without VLAN/sub interfaces, all client emulated traffic is sent on port 0, and all server emulated traffic (HTTP response for example) on port 1.
[source,python]
----
TRex port 0 ( client) <-> [ DUT ] <-> TRex port 1 ( server)
----
Without VLAN support the traffic is asymmetric: 10% of the traffic is sent from port 0 (client side), 90% from port 1 (server side). Port 1 is the bottleneck (10Gb/sec limit).
.With VLAN/sub interfaces
[source,python]
----
TRex port 0 ( client VLAN0) <-> | DUT | <-> TRex port 1 ( server-VLAN0)
TRex port 0 ( server VLAN1) <-> | DUT | <-> TRex port 1 ( client-VLAN1)
----
In this case, traffic on VLAN0 is sent as before, while for traffic on VLAN1 the order is reversed (client traffic is sent on port 1 and server traffic on port 0).
TRex divides the flows evenly between the VLANs. This results in an equal amount of traffic on each port.
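One way to picture the even split is a simple parity scheme. This is an illustration of the balancing idea only, not a description of TRex internals:

```shell
# Illustration only (not TRex internals): even flow ids stay in the
# normal direction on vlan0, odd flow ids are reversed onto vlan1,
# so each port carries about half client and half server traffic.
vlan_for_flow() {
    if [ $(( $1 % 2 )) -eq 0 ]; then echo "vlan0 (100)"; else echo "vlan1 (200)"; fi
}
vlan_for_flow 0
vlan_for_flow 1
```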
*Router configuration:*::
[source,python]
----
!
interface TenGigabitEthernet1/0/0 <1>
mac-address 0000.0001.0000
mtu 4000
no ip address
load-interval 30
!
interface TenGigabitEthernet1/0/0.100
encapsulation dot1Q 100 <2>
ip address 11.77.11.1 255.255.255.0
ip nbar protocol-discovery
ip policy route-map vlan_100_p1_to_p2 <3>
!
interface TenGigabitEthernet1/0/0.200
encapsulation dot1Q 200 <4>
ip address 11.88.11.1 255.255.255.0
ip nbar protocol-discovery
ip policy route-map vlan_200_p1_to_p2 <5>
!
interface TenGigabitEthernet1/1/0
mac-address 0000.0001.0000
mtu 4000
no ip address
load-interval 30
!
interface TenGigabitEthernet1/1/0.100
encapsulation dot1Q 100
ip address 22.77.11.1 255.255.255.0
ip nbar protocol-discovery
ip policy route-map vlan_100_p2_to_p1
!
interface TenGigabitEthernet1/1/0.200
encapsulation dot1Q 200
ip address 22.88.11.1 255.255.255.0
ip nbar protocol-discovery
ip policy route-map vlan_200_p2_to_p1
!
arp 11.77.11.12 0000.0001.0000 ARPA <6>
arp 22.77.11.12 0000.0001.0000 ARPA
route-map vlan_100_p1_to_p2 permit 10 <7>
set ip next-hop 22.77.11.12
!
route-map vlan_100_p2_to_p1 permit 10
set ip next-hop 11.77.11.12
!
route-map vlan_200_p1_to_p2 permit 10
set ip next-hop 22.88.11.12
!
route-map vlan_200_p2_to_p1 permit 10
set ip next-hop 11.88.11.12
!
----
<1> Main interface must not have IP address.
<2> Enable VLAN1
<3> PBR configuration
<4> Enable VLAN2
<5> PBR configuration
<6> TRex destination port MAC address
<7> PBR configuration rules
=== Static source MAC address setting
With this feature, TRex replaces the source MAC address with the client IP address.
Note: This feature was requested by the Cisco ISG group.
*YAML:*::
[source,python]
----
mac_override_by_ip : true
----
.Example
[source,python]
----
- duration : 0.1
..
mac_override_by_ip : true <1>
----
<1> In this case, the client side MAC address looks like this:
SRC_MAC = IPV4(IP) + 00:00
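The rule above can be sketched as follows. The helper name is ours, not a TRex API; each octet of the client IPv4 address becomes an octet of the source MAC, padded with 00:00:

```shell
# Sketch of SRC_MAC = IPV4(IP) + 00:00 (helper name is ours):
ip_to_src_mac() {
    # word splitting of the unquoted substitution is intentional:
    # it turns "a.b.c.d" into four printf arguments
    printf '%02x:%02x:%02x:%02x:00:00\n' $(echo "$1" | tr '.' ' ')
}
ip_to_src_mac 16.0.0.1   # client 16.0.0.1 -> source MAC 10:00:00:01:00:00
```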
=== IPv6 support
Support for IPv6 includes:
1. Support for pcap files containing IPv6 packets
2. Ability to generate IPv6 traffic from pcap files containing IPv4 packets
The following command line option enables this feature: `--ipv6`
The keywords (`src_ipv6` and `dst_ipv6`) specify the most significant 96 bits of the IPv6 address - for example:
[source,python]
----
src_ipv6 : [0xFE80,0x0232,0x1002,0x0051,0x0000,0x0000]
dst_ipv6 : [0x2001,0x0DB8,0x0003,0x0004,0x0000,0x0000]
----
The IPv6 address is formed by placing what would typically be the IPv4
address into the least significant 32 bits and copying the value provided
in the src_ipv6/dst_ipv6 keywords into the most significant 96 bits.
If src_ipv6 and dst_ipv6 are not specified, the default
is to form IPv4-compatible addresses (most significant 96 bits are zero).
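As a sketch of how the final address is formed (the helper name is ours, for illustration): the IPv4 octets fill the least significant 32 bits below the 96-bit prefix:

```shell
# Sketch (helper name is ours): append an IPv4 address as the least
# significant 32 bits below the 96-bit src_ipv6/dst_ipv6 prefix.
ipv4_to_ipv6() {
    prefix=$1; ipv4=$2
    # unquoted substitution splits "a.b.c.d" into four printf arguments
    printf '%s:%02X%02X:%02X%02X\n' "$prefix" $(echo "$ipv4" | tr '.' ' ')
}
ipv4_to_ipv6 "FE80:0232:1002:0051:0000:0000" 16.0.0.1
```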
There is support for all plugins.
*Example:*::
[source,bash]
----
$sudo ./t-rex-64 -f cap2l/sfr_delay_10_1g.yaml -c 4 -p -l 100 -d 100000 -m 30 --ipv6
----
*Limitations:*::
* TRex cannot generate both IPv4 and IPv6 traffic.
* The `--ipv6` switch must be specified even when using pcap file containing only IPv6 packets.
*Router configuration:*::
[source,python]
----
interface TenGigabitEthernet1/0/0
mac-address 0000.0001.0000
mtu 4000
ip address 11.11.11.11 255.255.255.0
ip policy route-map p1_to_p2
load-interval 30
ipv6 enable ==> IPv6
ipv6 address 2001:DB8:1111:2222::1/64 <1>
ipv6 policy route-map ipv6_p1_to_p2 <2>
!
ipv6 unicast-routing <3>
ipv6 neighbor 3001::2 TenGigabitEthernet0/1/0 0000.0002.0002 <4>
ipv6 neighbor 2001::2 TenGigabitEthernet0/0/0 0000.0003.0002
route-map ipv6_p1_to_p2 permit 10 <5>
set ipv6 next-hop 2001::2
!
route-map ipv6_p2_to_p1 permit 10
set ipv6 next-hop 3001::2
!
asr1k(config)#ipv6 route 4000::/64 2001::2
asr1k(config)#ipv6 route 5000::/64 3001::2
----
<1> Enable IPv6
<2> Add pbr
<3> Enable IPv6 routing
<4> MAC address setting. Should be TRex MAC.
<5> PBR configuraion
=== Client clustering configuration
TRex supports testing complex topologies, with more than one DUT, using a feature called "client clustering".
This feature allows specifying the distribution of clients TRex emulates.
Let's look at the following topology:
.Topology Example
image:images/topology.png[title="Client Clustering",width=850]
We have two clusters of DUTs.
Using config file, you can partition TRex emulated clients to groups, and define
how they will be spread between the DUT clusters.
Group configuration includes:
* IP start range.
* IP end range.
* Initiator side configuration. - These are the parameters affecting packets sent from client side.
* Responder side configuration. - These are the parameters affecting packets sent from server side.
[NOTE]
It is important to understand that this is *complementary* to the client generator
configured per profile - it only defines how the clients are spread between clusters.
Let's look at an example.
We have a profile defining client generator.
[source,bash]
----
$cat cap2/dns.yaml
- duration : 10.0
generator :
distribution : "seq"
clients_start : "16.0.0.1"
clients_end : "16.0.0.255"
servers_start : "48.0.0.1"
servers_end : "48.0.0.255"
dual_port_mask : "1.0.0.0"
cap_info :
- name: cap2/dns.pcap
cps : 1.0
ipg : 10000
rtt : 10000
w : 1
----
We want to create two clusters with 4 and 3 devices respectively.
We also want to send *80%* of the traffic to the upper cluster and *20%* to the lower cluster.
We can specify to which DUT the packet will be sent by MAC address or IP. We will present a MAC
based example, and then see how to change to be IP based.
We will create the following cluster configuration file.
[source,bash]
----
#
# Client configuration example file
# The file must contain the following fields
#
# 'vlan' - if the entire configuration uses VLAN,
# each client group must include vlan
# configuration
#
# 'groups' - each client group must contain range of IPs
# and initiator and responder section
# 'count' represents the number of different DUTs
# in the group.
#
# 'true' means each group must contain VLAN configuration. 'false' means no VLAN config allowed.
vlan: true
groups:
- ip_start : 16.0.0.1
ip_end : 16.0.0.204
initiator :
vlan : 100
dst_mac : "00:00:00:01:00:00"
responder :
vlan : 200
dst_mac : "00:00:00:02:00:00"
count : 4
- ip_start : 16.0.0.205
ip_end : 16.0.0.255
initiator :
vlan : 101
dst_mac : "00:00:01:00:00:00"
responder:
vlan : 201
dst_mac : "00:00:02:00:00:00"
count : 3
----
The above configuration divides the generator range of 255 clients into two clusters. The IP ranges
of all groups in the client config file, taken together, must cover the entire range of client IPs
from the traffic profile file.
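A quick sanity check of the 80/20 split implied by the two ranges above (204 and 51 addresses out of 255):

```shell
# Verify the intended 80/20 client split of the two groups:
# group1 = 16.0.0.1-16.0.0.204 (204 clients), group2 = the remaining 51.
awk 'BEGIN { total = 255; group1 = 204; group2 = total - group1
             printf "group1: %.0f%%, group2: %.0f%%\n", 100*group1/total, 100*group2/total }'
```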
MACs are allocated incrementally, with a wrap around after ``count'' addresses.
For example:
*Initiator side: (packets with source in 16.x.x.x net)*
* 16.0.0.1 -> 48.x.x.x - dst_mac: 00:00:00:01:00:00 vlan: 100
* 16.0.0.2 -> 48.x.x.x - dst_mac: 00:00:00:01:00:01 vlan: 100
* 16.0.0.3 -> 48.x.x.x - dst_mac: 00:00:00:01:00:02 vlan: 100
* 16.0.0.4 -> 48.x.x.x - dst_mac: 00:00:00:01:00:03 vlan: 100
* 16.0.0.5 -> 48.x.x.x - dst_mac: 00:00:00:01:00:00 vlan: 100
* 16.0.0.6 -> 48.x.x.x - dst_mac: 00:00:00:01:00:01 vlan: 100
*responder side: (packets with source in 48.x.x.x net)*
* 48.x.x.x -> 16.0.0.1 - dst_mac(from responder) : "00:00:00:02:00:00" , vlan:200
* 48.x.x.x -> 16.0.0.2 - dst_mac(from responder) : "00:00:00:02:00:01" , vlan:200
and so on. +
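The wrap-around allocation above can be sketched as follows (the helper name is ours, not a TRex API): the i-th client in the group, counting from 0, gets the base MAC plus (i mod count):

```shell
# Sketch (not a TRex API): the last MAC octet cycles through the
# "count" DUTs in the group, so clients are spread round-robin.
mac_for_client() {
    idx=$1; count=$2
    printf '00:00:00:01:00:%02x\n' $(( idx % count ))
}
mac_for_client 0 4   # 16.0.0.1 -> 00:00:00:01:00:00
mac_for_client 4 4   # 16.0.0.5 wraps back -> 00:00:00:01:00:00
```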
+
This means that the MAC addresses of the DUTs must be changed to be sequential. Another option is to
specify an IP address, using ``next_hop'', instead of ``dst_mac''. +
For example, config file first group will look like:
[source,bash]
----
- ip_start : 16.0.0.1
ip_end : 16.0.0.204
initiator :
vlan : 100
next_hop : 1.1.1.1
src_ip : 1.1.1.100
responder :
vlan : 200
next_hop : 2.2.2.1
src_ip : 2.2.2.100
count : 4
----
In this case, TRex will try to resolve, using ARP requests, the addresses
1.1.1.1, 1.1.1.2, 1.1.1.3, 1.1.1.4 (and the range 2.2.2.1-2.2.2.4). If not all IPs are resolved,
TRex exits with an error message. ``src_ip'' is used for sending gratuitous ARP, and
for filling the relevant fields in the ARP requests. If no ``src_ip'' is given, TRex looks for the source
IP in the relevant port section in the platform config file (/etc/trex_cfg.yaml). If none is found, TRex
exits with an error message. +
If client config file is given, the ``dest_mac'' and ``default_gw'' parameters from the platform config
file are ignored.
Now, streams will look like: +
*Initiator side: (packets with source in 16.x.x.x net)*
* 16.0.0.1 -> 48.x.x.x - dst_mac: MAC of 1.1.1.1 vlan: 100
* 16.0.0.2 -> 48.x.x.x - dst_mac: MAC of 1.1.1.2 vlan: 100
* 16.0.0.3 -> 48.x.x.x - dst_mac: MAC of 1.1.1.3 vlan: 100
* 16.0.0.4 -> 48.x.x.x - dst_mac: MAC of 1.1.1.4 vlan: 100
* 16.0.0.5 -> 48.x.x.x - dst_mac: MAC of 1.1.1.1 vlan: 100
* 16.0.0.6 -> 48.x.x.x - dst_mac: MAC of 1.1.1.2 vlan: 100
*responder side: (packets with source in 48.x.x.x net)*
* 48.x.x.x -> 16.0.0.1 - dst_mac: MAC of 2.2.2.1 , vlan:200
* 48.x.x.x -> 16.0.0.2 - dst_mac: MAC of 2.2.2.2 , vlan:200
[NOTE]
It is important to understand that the IP to MAC coupling (with either MAC based or IP based config)
is done at the beginning and never changes. For example, in the MAC based case, packets
with source IP 16.0.0.2 will always have VLAN 100 and dst MAC 00:00:00:01:00:01, and
packets with destination IP 16.0.0.2 will always have VLAN 200 and dst MAC 00:00:00:02:00:01.
This way, you can predict exactly which packet (and how many packets) will go to each DUT.
*Usage:*
[source,bash]
----
sudo ./t-rex-64 -f cap2/dns.yaml --client_cfg my_cfg.yaml
----
=== NAT support
TRex can learn dynamic NAT/PAT translation. To enable this feature add `--learn-mode <mode>` to the command line.
To learn the NAT translation, TRex must embed, in the first packet of each flow, information describing the flow
the packet belongs to. This can be done in different ways, depending on the chosen <mode>.
*mode 1:*::
In case of TCP flow, flow info is embedded in the ACK of the first TCP SYN. +
In case of UDP flow, flow info is embedded in the IP identification field of the first packet in the flow. +
This mode was developed for testing NAT with firewalls (which usually do not work with mode 2).
In this mode, TRex also learns and compensates for TCP sequence number randomization that might be done by the DUT.
TRex can learn and compensate for seq num randomization in both directions of the connection.
*mode 2:*::
Flow info is added in a special IPv4 option header (8 bytes long 0x10 id). The option is added only to the first packet in the flow.
This mode does not work with DUTs that drop packets with IP options (for example, Cisco ASA firewall).
*mode 3:*::
This is like mode 1, except that TRex does not learn the seq num randomization in the server->client direction.
This mode can give much better connections-per-second performance than mode 1 (still, for all existing firewalls, the mode 1 cps rate is more than enough).
==== Examples
*simple HTTP traffic*
[source,bash]
----
$sudo ./t-rex-64 -f cap2/http_simple.yaml -c 4 -l 1000 -d 100000 -m 30 --learn-mode 1
----
*SFR traffic without bundling/ALG support*
[source,bash]
----
$sudo ./t-rex-64 -f avl/sfr_delay_10_1g_no_bundling.yaml -c 4 -l 1000 -d 100000 -m 10 --learn-mode 2
----
*NAT terminal counters:*::
[source,python]
----
-Global stats enabled
Cpu Utilization : 0.6 % 33.4 Gb/core
Platform_factor : 1.0
Total-Tx : 3.77 Gbps NAT time out : 917 <1> (0 in wait for syn+ack) <5>
Total-Rx : 3.77 Gbps NAT aged flow id: 0 <2>
Total-PPS : 505.72 Kpps Total NAT active: 163 <3> (12 waiting for syn) <6>
Total-CPS : 13.43 Kcps Total NAT opened: 82677 <4>
----
<1> Number of connections for which TRex had to send the next packet in the flow, but had not yet learned the NAT translation. Should be 0. Usually, a value different from 0 is seen if the DUT drops the flow (probably because it cannot handle the number of connections).
<2> Number of flows for which, by the time the translation info arrived, the flow had already aged out. A non-zero value here should be very rare. Can occur only when there is huge latency in the DUT input/output queue.
<3> Number of flows for which we sent the first packet, but did not learn the NAT translation yet. Value seen depends on the connection per second rate and round trip time.
<4> Total number of translations over the lifetime of the TRex instance. May be different from the total number of flows if template is uni-directional (and consequently does not need translation).
<5> Out of the timed out flows, how many were timed out while waiting to learn the TCP seq num randomization of the server->client from the SYN+ACK packet (Seen only in --learn-mode 1)
<6> Out of the active NAT sessions, how many are waiting to learn the client->server translation from the SYN packet (others are waiting for SYN+ACK from server) (Seen only in --learn-mode 1)
*Configuration for Cisco ASR1000 Series:*::
This feature was tested with the following configuration and the sfr_delay_10_1g_no_bundling.yaml traffic profile.
The client address range is 16.0.0.1 to 16.0.0.255.
[source,python]
----
interface TenGigabitEthernet1/0/0 <1>
mac-address 0000.0001.0000
mtu 4000
ip address 11.11.11.11 255.255.255.0
ip policy route-map p1_to_p2
ip nat inside <2>
load-interval 30
!
interface TenGigabitEthernet1/1/0
mac-address 0000.0001.0000
mtu 4000
ip address 11.11.11.11 255.255.255.0
ip policy route-map p1_to_p2
ip nat outside <3>
load-interval 30
ip nat pool my 200.0.0.0 200.0.0.255 netmask 255.255.255.0 <4>
ip nat inside source list 7 pool my overload
access-list 7 permit 16.0.0.0 0.0.0.255 <5>
ip nat inside source list 8 pool my overload <6>
access-list 8 permit 17.0.0.0 0.0.0.255
----
<1> Must be connected to TRex Client port (router inside port)
<2> NAT inside
<3> NAT outside
<4> Pool of outside address with overload
<5> Match TRex YAML client range
<6> In case of dual port TRex
*Limitations:*::
. The IPv6-IPv6 NAT feature does not exist on routers, so this feature can work only with IPv4.
. Does not support NAT64.
. Bundling/plugin is not fully supported. Consequently, sfr_delay_10.yaml does not work. Use sfr_delay_10_no_bundling.yaml instead.
[NOTE]
=====================================================================
* `--learn-verify` is a TRex debug mechanism for testing the TRex learn mechanism.
* Run it when the DUT is configured without NAT. It verifies that inside_ip==outside_ip and inside_port==outside_port.
=====================================================================
=== Flow order/latency verification
In normal mode (without this feature enabled), received traffic is not checked by software. Hardware (Intel NIC) checking for dropped packets occurs at the end of the test. The only exception is the latency/jitter packets.
This is one reason why, with TRex, you *cannot* test features that terminate traffic (for example, a TCP proxy).
To enable this feature, add `--rx-check <sample>` to the command line options, where <sample> is the sample rate.
The number of flows that will be sent to the software for verification is 1/sample_rate. For 40Gb/sec traffic you can use a sample rate of 1/128. Watch the Rx CPU% utilization.
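As a back-of-the-envelope check of the software load (illustrative numbers): with about 13,430 new flows per second, comparable to the 13.43 Kcps in the NAT example earlier, and a 1/128 sample rate, roughly 105 flows per second reach the Rx software thread:

```shell
# Flows per second handed to the Rx software thread = cps * sample rate.
# 13430 cps is an illustrative figure, not a measurement.
awk -v cps=13430 -v sample=128 \
    'BEGIN { printf "%.1f flows/sec checked in software\n", cps / sample }'
```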
[NOTE]
============
This feature changes the TTL of the sampled flows to 255 and expects to receive packets with TTL 254 or 255 (one routing hop). If you have more than one hop in your setup, use `--hops` to change it to a higher value. More than one hop is possible if there are several routers between the TRex client side and the TRex server side.
============
This feature ensures that:
* Packets get out of DUT in order (from each flow perspective).
* There are no packet drops (no need to wait for the end of the test). Without this flag, you must wait for the end of the test to identify packet drops, because there is always a difference between Tx and Rx due to RTT.
.Full example
[source,bash]
----
$sudo ./t-rex-64 -f avl/sfr_delay_10_1g.yaml -c 4 -p -l 100 -d 100000 -m 30 --rx-check 128
----
[source,python]
----
Cpu Utilization : 0.1 % <1>
if| tx_ok , rx_ok , rx ,error, average , max , Jitter , max window
| , , check, , latency(usec),latency (usec) ,(usec) ,
--------------------------------------------------------------------------------
0 | 1002, 1002, 2501, 0, 61 , 70, 3 | 60
1 | 1002, 1002, 2012, 0, 56 , 63, 2 | 50
2 | 1002, 1002, 2322, 0, 66 , 74, 5 | 68
3 | 1002, 1002, 1727, 0, 58 , 68, 2 | 52
Rx Check stats enabled <2>
-------------------------------------------------------------------------------------------
rx check: avg/max/jitter latency, 94 , 744, 49 | 252 287 309 <3>
active flows: <6> 10, fif: <5> 308, drop: 0, errors: 0 <4>
-------------------------------------------------------------------------------------------
----
<1> CPU% of the Rx thread. If it is too high, *increase* the sample rate.
<2> Rx Check section. For more detailed info, press 'r' during the test or at the end of the test.
<3> Average latency, max latency, and jitter of the template flows, in microseconds. These are usually *higher* than for the latency check packets, because the feature does more processing per sampled packet.
<4> Drop counter and error counter should be zero. If not, press 'r' to see the full report, or view the report at the end of the test.
<5> fif - First in flow. Number of new flows handled by the Rx thread.
<6> active flows - Number of active flows handled by the Rx thread.
.Press R to Display Full Report
[source,python]
----
m_total_rx : 2
m_lookup : 2
m_found : 1
m_fif : 1
m_add : 1
m_remove : 1
m_active : 0
<1>
0 0 0 0 1041 0 0 0 0 0 0 0 0 min_delta : 10 usec
cnt : 2
high_cnt : 2
max_d_time : 1041 usec
sliding_average : 1 usec <2>
precent : 100.0 %
histogram
-----------
h[1000] : 2
tempate_id_ 0 , errors: 0, jitter: 61 <3>
tempate_id_ 1 , errors: 0, jitter: 0
tempate_id_ 2 , errors: 0, jitter: 0
tempate_id_ 3 , errors: 0, jitter: 0
tempate_id_ 4 , errors: 0, jitter: 0
tempate_id_ 5 , errors: 0, jitter: 0
tempate_id_ 6 , errors: 0, jitter: 0
tempate_id_ 7 , errors: 0, jitter: 0
tempate_id_ 8 , errors: 0, jitter: 0
tempate_id_ 9 , errors: 0, jitter: 0
tempate_id_10 , errors: 0, jitter: 0
tempate_id_11 , errors: 0, jitter: 0
tempate_id_12 , errors: 0, jitter: 0
tempate_id_13 , errors: 0, jitter: 0
tempate_id_14 , errors: 0, jitter: 0
tempate_id_15 , errors: 0, jitter: 0
ager :
m_st_alloc : 1
m_st_free : 0
m_st_start : 2
m_st_stop : 1
m_st_handle : 0
----
<1> Errors, if any, are shown here
<2> Low-pass filter (sliding average) of the latency events
<3> Per-template error info
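The `sliding_average` field above is produced by a low-pass filter over latency samples. TRex's exact filter constant is internal; a generic exponentially weighted moving average of the same flavor looks like this (the `alpha` value is an assumption for illustration):

```python
def sliding_average(samples, alpha=0.1):
    """Exponentially weighted moving average (a simple low-pass filter).

    alpha is illustrative; TRex's actual filter constant is internal.
    """
    avg = float(samples[0])
    for s in samples[1:]:
        avg += alpha * (s - avg)  # move a fraction alpha toward each new sample
    return avg

print(sliding_average([100, 100, 100]))  # steady input stays at 100.0
```

A small alpha smooths out one-off latency spikes, which is why a single large `max_d_time` barely moves the reported sliding average.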
// IGNORE: this line added to help rendition. Without this line, the "Notes and Limitations" section below does not appear.
*Notes and Limitations:*::
** To be able to identify the received packets, TRex does the following:
*** Changes the TTL to 0xFF and expects to receive 0xFF (loopback) or 0xFE (one routing hop). (Use `--hops` to configure this value.)
*** Adds 24 bytes of metadata as an IPv4/IPv6 option header.
// clarify "ipv4/ipv6 option header" above
== Reference
=== Traffic YAML (parameter of -f option)
==== Global Traffic YAML section
[source,python]
----
- duration : 10.0 <1>
generator : <2>
distribution : "seq"
clients_start : "16.0.0.1"
clients_end : "16.0.0.255"
servers_start : "48.0.0.1"
servers_end : "48.0.0.255"
clients_per_gb : 201
min_clients : 101
dual_port_mask : "1.0.0.0"
tcp_aging : 1
udp_aging : 1
cap_ipg : true <3>
cap_ipg_min : 30 <4>
cap_override_ipg : 200 <5>
vlan : { enable : 1 , vlan0 : 100 , vlan1 : 200 } <6>
mac_override_by_ip : true <7>
----
<1> Test duration (seconds). Can be overridden using the `-d` option.
<2> See full explanation on generator section link:trex_manual.html#_clients_servers_ip_allocation_scheme[here].
<3> true (default) indicates that the IPG is taken from the cap file (also taking into account cap_ipg_min and cap_override_ipg, if they exist). false indicates that the IPG is taken from the per-template section.
<4> Minimum IPG in microseconds. Together with the next option: (if (pkt_ipg < cap_ipg_min) { pkt_ipg = cap_override_ipg })
<5> Override value (microseconds), as described in the note above.
<6> Enable load balance feature. See xref:trex_load_bal[trex load balance section] for info.
<7> Enable MAC address replacement by client IP.
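The cap_ipg_min/cap_override_ipg interaction described in <4> and <5> can be sketched as follows (illustrative only; the real logic lives inside the TRex packet generator):

```python
def effective_ipg(pkt_ipg, cap_ipg_min, cap_override_ipg):
    """IPG applied when cap_ipg is true: packets spaced closer than
    cap_ipg_min are re-spaced to cap_override_ipg (all values in usec)."""
    if pkt_ipg < cap_ipg_min:
        return cap_override_ipg
    return pkt_ipg

print(effective_ipg(10, 30, 200))  # below the minimum -> overridden to 200
print(effective_ipg(50, 30, 200))  # large enough -> kept as-is
```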
==== Timer Wheel section configuration
(from v2.13)
see xref:timer_w[Timer Wheel section]
==== Per template section
// clarify "per template"
[source,python]
----
- name: cap2/dns.pcap <1>
cps : 10.0 <2>
ipg : 10000 <3>
rtt : 10000 <4>
w : 1 <5>
server_addr : "48.0.0.7" <6>
one_app_server : true <7>
----
<1> The name of the template pcap file. Can be relative path from the t-rex-64 image directory, or an absolute path. The pcap file should include only one flow. (Exception: in case of plug-ins).
<2> Connections per second. This is the value used when specifying -m 1 from the command line (giving -m x will multiply this value by x).
<3> If the global section of the YAML file includes `cap_ipg : false`, this line sets the inter-packet gap in microseconds.
<4> Should be set to the same value as ipg (microseconds).
<5> Default value: w=1. This indicates to the IP generator how to generate the flows. If w=2, two flows from the same template will be generated in a burst (useful for HTTP, which has bursts of flows).
<6> If `one_app_server` is set to true, then all templates will use the same server.
<7> If the same server address is required, set this value to true.
=== Configuration YAML (parameter of --cfg option)
anchor:trex_config[]
The configuration file, in YAML format, configures TRex behavior, including:
- IP address or MAC address for each port (source and destination).
- Masked interfaces, to ensure that TRex does not try to use the management ports as traffic ports.
- Changing the zmq/telnet TCP port.
You specify which config file to use by adding --cfg <file name> to the command line arguments. +
If --cfg is not given, the default `/etc/trex_cfg.yaml` is used. +
Configuration file examples can be found in the `$TREX_ROOT/scripts/cfg` folder.
==== Basic Configurations
[source,python]
----
- port_limit : 2 #mandatory <1>
version : 2 #mandatory <2>
interfaces : ["03:00.0", "03:00.1"] #mandatory <3>
#enable_zmq_pub : true #optional <4>
#zmq_pub_port : 4500 #optional <5>
#prefix : setup1 #optional <6>
#limit_memory : 1024 #optional <7>
c : 4 #optional <8>
port_bandwidth_gb : 10 #optional <9>
port_info : # set the per-port MAC addresses - mandatory
- default_gw : 1.1.1.1 # port 0 <10>
dest_mac : '00:00:00:01:00:00' # Either default_gw or dest_mac is mandatory <10>
src_mac : '00:00:00:02:00:00' # optional <11>
ip : 2.2.2.2 # optional <12>
vlan : 15 # optional <13>
- dest_mac : '00:00:00:03:00:00' # port 1
src_mac : '00:00:00:04:00:00'
- dest_mac : '00:00:00:05:00:00' # port 2
src_mac : '00:00:00:06:00:00'
- dest_mac : [0x0,0x0,0x0,0x7,0x0,0x01] # port 3 <14>
src_mac : [0x0,0x0,0x0,0x8,0x0,0x02] # <14>
----
<1> Number of ports. Should be equal to the number of interfaces listed in 3. - mandatory
<2> Must be set to 2. - mandatory
<3> List of interfaces to use. Run `sudo ./dpdk_setup_ports.py --show` to see the list you can choose from. - mandatory
<4> Enable the ZMQ publisher for stats data. Default: true.
<5> ZMQ port number. The default value is good. If running two TRex instances on the same machine, each should be given a distinct number. Otherwise, this line can be removed.
<6> If running two TRex instances on the same machine, each should be given a distinct name. Otherwise, this line can be removed. (Passed to DPDK as the --file-prefix argument.)
<7> Limit the amount of packet memory used. (Passed to DPDK as the -m argument.)
<8> Number of threads (cores) TRex will use per interface pair. (Can be overridden by the -c command line option.)
<9> The bandwidth of each interface in Gbps. In this example we have 10Gbps interfaces. For a VM, use 1. Used to tune the amount of memory allocated by TRex.
<10> TRex needs to know the destination MAC address to use on each port. You can specify this in one of two ways: +
Specify dest_mac directly. +
Specify default_gw (since version 2.10). In this case (only if no dest_mac is given), TRex will issue an ARP request to this IP and use
the result as the destination MAC. If no dest_mac is given and no ARP response is received, TRex will exit.
<11> Source MAC to use when sending packets from this interface. If not given (since version 2.10), the MAC address of the port will be used.
<12> If given (since version 2.10), TRex will issue a gratuitous ARP for the ip + src MAC pair on the appropriate port. In stateful mode, a
gratuitous ARP for each ip will be sent every 120 seconds. (This can be changed using the --arp-refresh-period argument.)
<13> If given (since version 2.18), all traffic on the port will be sent with this VLAN tag.
<14> Old MAC address format. New format is supported since version v2.09.
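The two dest_mac notations in the example above describe the same kind of address; a small helper (not part of TRex) shows the equivalence between the old list format and the quoted-string format:

```python
def mac_list_to_str(mac_bytes):
    """Convert the old list MAC format to the quoted-string format."""
    return ':'.join('%02x' % b for b in mac_bytes)

# port 3 dest_mac from the example above
print(mac_list_to_str([0x0, 0x0, 0x0, 0x7, 0x0, 0x01]))  # 00:00:00:07:00:01
```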
[NOTE]
=========================================================================================
If you use a version earlier than 2.10, or choose to omit the ``ip''
and use a MAC-based configuration, be aware that TRex will not send any
gratuitous ARP and will not answer ARP requests. In this case, you must configure static
ARP entries pointing to the TRex port on your DUT. For an example config, see
xref:trex_config[here].
=========================================================================================
To find out which interfaces (NIC ports) can be used, perform the following:
[source,bash]
----
$>sudo ./dpdk_setup_ports.py --show
Network devices using DPDK-compatible driver
============================================
Network devices using kernel driver
===================================
0000:02:00.0 '82545EM Gigabit Ethernet Controller' if=eth2 drv=e1000 unused=igb_uio *Active* #<1>
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb #<2>
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
0000:13:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
0000:13:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv= unused=ixgb
Other network devices
=====================
<none>
----
<1> We see that 02:00.0 is active (our management port).
<2> All other NIC ports (03:00.0, 03:00.1, 13:00.0, 13:00.1) can be used.
The minimum configuration file is:
[source,bash]
----
- port_limit : 4
version : 2
interfaces : ["03:00.0","03:00.1","13:00.1","13:00.0"]
----
==== Memory section configuration
The memory section is optional. It is used when there is a need to tune the amount of memory used by the TRex packet manager.
The default values (from the TRex source code) are usually good for most users. Unless you have unusual needs, you can
omit this section.
[source,python]
----
- port_limit : 2
version : 2
interfaces : ["03:00.0","03:00.1"]
memory : <1>
mbuf_64 : 16380 <2>
mbuf_128 : 8190
mbuf_256 : 8190
mbuf_512 : 8190
mbuf_1024 : 8190
mbuf_2048 : 4096
traffic_mbuf_64 : 16380 <3>
traffic_mbuf_128 : 8190
traffic_mbuf_256 : 8190
traffic_mbuf_512 : 8190
traffic_mbuf_1024 : 8190
traffic_mbuf_2048 : 4096
dp_flows : 1048576 <4>
global_flows : 10240 <5>
----
<1> Memory section header
<2> Number of memory buffers allocated for packets in transit, per port pair. Numbers are specified per packet size.
<3> Number of memory buffers allocated for holding the part of the packet that remains unchanged per template.
Increase these numbers only if you have a very large number of templates.
<4> Number of TRex flow objects allocated (for best performance they are allocated upfront, not dynamically).
If you expect more concurrent flows than the default (1048576), enlarge this.
<5> Number of objects TRex allocates for holding NAT ``in transit'' connections. In stateful mode, TRex learns the NAT
translation by looking at the address changes done by the DUT to the first packet of each flow. So this is the
number of flows for which TRex has sent the first packet but has not yet learned the translation. Again, the default
(10240) should be good. Increase only if you use NAT and see issues.
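As a rough sanity check when tuning a memory section, you can estimate the packet-buffer footprint by multiplying counts by buffer sizes. This ignores DPDK per-mbuf headroom and metadata, so treat it as a lower bound:

```python
def approx_mbuf_bytes(mbuf_counts):
    """Lower-bound estimate of mbuf pool memory: sum of count * buffer size.

    Real usage is higher, since per-mbuf headroom and metadata are ignored.
    """
    return sum(size * count for size, count in mbuf_counts.items())

# counts from the example section above
defaults = {64: 16380, 128: 8190, 256: 8190, 512: 8190, 1024: 8190, 2048: 4096}
print(approx_mbuf_bytes(defaults) / (1024 * 1024), "MB")
```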
==== Platform section configuration
The platform section is optional. It is used to tune performance by allocating the cores on the right NUMA node.
To support multiple instances, a configuration file now has the following structure:
[source,python]
----
- version : 2
interfaces : ["03:00.0","03:00.1"]
port_limit : 2
....
platform : <1>
master_thread_id : 0 <2>
latency_thread_id : 5 <3>
dual_if : <4>
- socket : 0 <5>
threads : [1,2,3,4] <6>
----
<1> Platform section header.
<2> Hardware thread_id for control thread.
<3> Hardware thread_id for RX thread.
<4> ``dual_if'' section defines info for interface pairs (according to the order in the ``interfaces'' list).
Each section, starting with ``- socket'', defines info for a different interface pair.
<5> The NUMA node from which memory will be allocated for use by the interface pair.
<6> Hardware threads to be used for sending packets for the interface pair. Threads are pinned to cores, so specifying threads
actually determines the hardware cores.
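To decide which socket to put in each dual_if entry, you can read the NUMA node of a port directly from sysfs (Linux-specific; `dpdk_setup_ports.py --show` reports similar information). This helper is an illustration, not part of TRex:

```python
def pci_numa_node(pci_addr):
    """Return the NUMA node of a PCI device (e.g. '0000:03:00.0')
    via sysfs, or -1 if the device or attribute is not available."""
    path = "/sys/bus/pci/devices/%s/numa_node" % pci_addr
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return -1

print(pci_numa_node("0000:03:00.0"))  # PCI address is an example from this manual
```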
*Real example:* anchor:numa-example[]
We connected two Intel XL710 NICs close to each other on the motherboard. They shared the same NUMA node:
image:images/same_numa.png[title="2_NICSs_same_NUMA"]
CPU utilization was very high (~100%); with c=2 and c=4 the results were the same.
Then, we moved the cards to different NUMAs:
image:images/different_numa.png[title="2_NICSs_different_NUMAs"]
We added configuration to the /etc/trex_cfg.yaml:
[source,python]
----
platform :
        master_thread_id : 0
        latency_thread_id : 8
        dual_if :
          - socket : 0
            threads : [1, 2, 3, 4, 5, 6, 7]
          - socket : 1
            threads : [9, 10, 11, 12, 13, 14, 15]
----
This gave the best results: with *\~98 Gb/s* TX bandwidth and c=7, CPU utilization dropped to *~21%*! (40% with c=4)
==== Timer Wheel section configuration
anchor:timer_w[]
The flow scheduler uses a timer wheel to schedule flows. To tune it for a large number of flows, it is possible to change the default values.
This is an advanced configuration; do not use it unless you know what you are doing. It can be configured in the trex_cfg file and in the TRex traffic profile.
[source,python]
----
tw :
buckets : 1024 <1>
levels : 3 <2>
bucket_time_usec : 20.0 <3>
----
<1> The number of buckets in each level. A higher number improves performance but reduces the maximum number of levels.
<2> The number of levels.
<3> Bucket time in usec. A higher number creates more bursts.
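In the usual hierarchical timer-wheel design, each extra level multiplies the schedulable horizon by the bucket count. The sketch below shows that relationship for the parameters above; the exact bound TRex enforces may differ:

```python
def tw_horizon_usec(buckets, levels, bucket_time_usec):
    """Approximate furthest future time that can be scheduled:
    level 0 covers buckets * bucket_time, and each additional
    level multiplies the horizon by the bucket count."""
    return bucket_time_usec * (buckets ** levels)

# values from the example above: 1024 buckets, 3 levels, 20 usec per bucket
print(tw_horizon_usec(1024, 3, 20.0) / 1e6, "seconds")
```

This is why raising `buckets` while lowering `levels` can keep roughly the same horizon with fewer cascade operations.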
=== Command line options
anchor:cml-line[]
*--allow-coredump*::
Allow creation of core dump.
*--arp-refresh-period <num>*::
Period in seconds between sending of gratuitous ARP for our addresses. A value of 0 means ``never send''.
*-c <num>*::
Number of hardware threads to use per interface pair. Use at least 4 for TRex at 40Gbps. +
TRex uses 2 threads for internal needs; the rest of the threads can be used for traffic. The maximum value here is the number of free threads
divided by the number of interface pairs. +
For virtual NICs on a VM, we always use one thread per interface pair.
*--cfg <file name>*::
TRex configuration file to use. See relevant manual section for all config file options.
*--checksum-offload*::
Enable IP, TCP and UDP tx checksum offloading, using DPDK. This requires all used interfaces to support this.
*--client_cfg <file>*::
YAML file describing clients configuration. Look link:trex_manual.html#_client_clustering_configuration[here] for details.
*-d <num>*::
Duration of the test in seconds.
*-e*::
Same as `-p`, but changes the src/dst IP according to the port. Using this, you will get all the packets of the
same flow from the same port, and with the same src/dst IP. +
It does not work well with NBAR, which expects all client IPs to come from the same direction.
*-f <yaml file>*::
Specify traffic YAML configuration file to use. Mandatory option for stateful mode.
*--hops <num>*::
Provide number of hops in the setup (default is one hop). Relevant only if the Rx check is enabled.
Look link:trex_manual.html#_flow_order_latency_verification[here] for details.
*--iom <mode>*::
I/O mode. Possible values: 0 (silent), 1 (normal), 2 (short).
*--ipv6*::
Convert templates to IPv6 mode.
*-k <num>*::
Run ``warm up'' traffic for num seconds before starting the test. This is needed if TRex is connected to a switch running
spanning tree. You want the switch to see traffic from all relevant source MAC addresses before starting to send real
data. The traffic sent is the same as that used for the latency test (-l option). +
Current limitation (as of TRex version 1.82): does not work properly on a VM.
*-l <rate>*::
In parallel to the test, run a latency check, sending packets at rate packets/sec from each interface.
*--learn-mode <mode>*::
Learn the dynamic NAT translation. Look link:trex_manual.html#_nat_support[here] for details.
*--learn-verify*::
Used for testing the NAT learning mechanism. Do the learning as if DUT is doing NAT, but verify that packets
are not actually changed.
*--limit-ports <port num>*::
Limit the number of ports used. Overrides the ``port_limit'' from config file.
*--lm <hex bit mask>*::
Mask specifying which ports will send traffic. For example, 0x1 - Only port 0 will send. 0x4 - only port 2 will send.
This can be used to verify port connectivity. You can send packets from one port, and look at counters on the DUT.
*--lo*::
Latency only - Send only latency packets. Do not send packets from the templates/pcap files.
*-m <num>*::
Rate multiplier. TRex will multiply the CPS rate of each template by num.
*--nc*::
If set, TRex will terminate exactly at the end of the specified duration.
This provides faster, more accurate TRex termination.
By default (without this option), TRex waits for all flows to terminate gracefully. In case of a very long flow, termination might be prolonged.
*--no-flow-control-change*::
Prevents TRex from changing flow control. By default (without this option), TRex disables flow control at startup for all cards, except for the Intel XL710 40G card.
*--no-key*:: Daemon mode, don't get input from keyboard.
*--no-watchdog*:: Disable watchdog.
*-p*::
Send all packets of the same flow from the same direction. For each flow, TRex will randomly choose between the client port and
the server port, and send all the packets from that port. src/dst IPs keep their values as if packets were sent from two ports;
this means the same port carries packets from client to server and from server to client. +
If you are using this with a router, you cannot rely on routing rules to pass traffic to TRex; you must configure policy-based
routes to pass all traffic from one DUT port to the other. +
*-pm <num>*::
Platform factor. If the setup includes a splitter, you can multiply all statistics displayed by TRex by this factor, so that they match the DUT counters.
*-pubd*::
Disable ZMQ monitor's publishers.
*--rx-check <sample rate>*::
Enable Rx check module. Using this, each thread randomly samples 1/sample_rate of the flows and checks packet order, latency, and additional statistics for the sampled flows.
Note: This feature works on the RX thread.
*-v <verbosity level>*::
Show debug info. A value of 1 shows debug info on startup. A value of 3 shows debug info during the run in some cases. Might slow down operation.
*--vlan*:: Relevant only for stateless mode with Intel 82599 10G NIC.
When configuring flow stat and latency per-stream rules, assume all streams use VLAN.
*-w <num seconds>*::
Wait additional time between NICs initialization and sending traffic. Can be useful if DUT needs extra setup time. Default is 1 second.
*--active-flows*::
An experimental switch to scale the number of active flows up or down.
It is not accurate, due to the quantization of the flow scheduler, and in some cases does not work.
Example: --active-flows 500000 will set the ballpark number of active flows to ~0.5M.
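As a worked example of the `-c` sizing rule listed above (TRex reserves 2 threads for the master and latency/Rx threads; the free threads are divided between interface pairs):

```python
def max_c(total_hw_threads, interface_pairs, reserved=2):
    """Upper bound for -c: free hardware threads divided by interface pairs.

    'reserved' models the master + latency threads TRex keeps for itself.
    """
    return (total_hw_threads - reserved) // interface_pairs

# e.g. a 16-thread machine with 2 interface pairs can use up to -c 7,
# matching the NUMA example earlier (threads 1-7 and 9-15, master 0, latency 8)
print(max_c(16, 2))
```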
ifndef::backend-docbook[]
endif::backend-docbook[]
== Appendix
=== Simulator
The TRex simulator is a Linux application (no DPDK needed) that can run on any Linux machine (it can also run on the TRex machine itself).
You can use it to create an output pcap file from an input traffic YAML.
==== Simulator
[source,bash]
----
$./bp-sim-64-debug -f avl/sfr_delay_10_1g.yaml -v 1
-- loading cap file avl/delay_10_http_get_0.pcap
-- loading cap file avl/delay_10_http_post_0.pcap
-- loading cap file avl/delay_10_https_0.pcap
-- loading cap file avl/delay_10_http_browsing_0.pcap
-- loading cap file avl/delay_10_exchange_0.pcap
-- loading cap file avl/delay_10_mail_pop_0.pcap
-- loading cap file avl/delay_10_mail_pop_1.pcap
-- loading cap file avl/delay_10_mail_pop_2.pcap
-- loading cap file avl/delay_10_oracle_0.pcap
-- loading cap file avl/delay_10_rtp_160k_full.pcap
-- loading cap file avl/delay_10_rtp_250k_full.pcap
-- loading cap file avl/delay_10_smtp_0.pcap
-- loading cap file avl/delay_10_smtp_1.pcap
-- loading cap file avl/delay_10_smtp_2.pcap
-- loading cap file avl/delay_10_video_call_0.pcap
-- loading cap file avl/delay_10_sip_video_call_full.pcap
-- loading cap file avl/delay_10_citrix_0.pcap
-- loading cap file avl/delay_10_dns_0.pcap
id,name , tps, cps,f-pkts,f-bytes, duration, Mb/sec, MB/sec, c-flows, PPS,total-Mbytes-duration,errors,flows #<2>
00, avl/delay_10_http_get_0.pcap ,404.52,404.52, 44 , 37830 , 0.17 , 122.42 , 15.30 , 67 , 17799 , 2 , 0 , 1
01, avl/delay_10_http_post_0.pcap ,404.52,404.52, 54 , 48468 , 0.21 , 156.85 , 19.61 , 85 , 21844 , 2 , 0 , 1
02, avl/delay_10_https_0.pcap ,130.87,130.87, 96 , 91619 , 0.22 , 95.92 , 11.99 , 29 , 12564 , 1 , 0 , 1
03, avl/delay_10_http_browsing_0.pcap ,709.89,709.89, 37 , 34425 , 0.13 , 195.50 , 24.44 , 94 , 26266 , 2 , 0 , 1
04, avl/delay_10_exchange_0.pcap ,253.81,253.81, 43 , 9848 , 1.57 , 20.00 , 2.50 , 400 , 10914 , 0 , 0 , 1
05, avl/delay_10_mail_pop_0.pcap ,4.76,4.76, 20 , 5603 , 0.17 , 0.21 , 0.03 , 1 , 95 , 0 , 0 , 1
06, avl/delay_10_mail_pop_1.pcap ,4.76,4.76, 114 , 101517 , 0.25 , 3.86 , 0.48 , 1 , 543 , 0 , 0 , 1
07, avl/delay_10_mail_pop_2.pcap ,4.76,4.76, 30 , 15630 , 0.19 , 0.60 , 0.07 , 1 , 143 , 0 , 0 , 1
08, avl/delay_10_oracle_0.pcap ,79.32,79.32, 302 , 56131 , 6.86 , 35.62 , 4.45 , 544 , 23954 , 0 , 0 , 1
09, avl/delay_10_rtp_160k_full.pcap ,2.78,8.33, 1354 , 1232757 , 61.24 , 27.38 , 3.42 , 170 , 3759 , 0 , 0 , 3
10, avl/delay_10_rtp_250k_full.pcap ,1.98,5.95, 2069 , 1922000 , 61.38 , 30.48 , 3.81 , 122 , 4101 , 0 , 0 , 3
11, avl/delay_10_smtp_0.pcap ,7.34,7.34, 22 , 5618 , 0.19 , 0.33 , 0.04 , 1 , 161 , 0 , 0 , 1
12, avl/delay_10_smtp_1.pcap ,7.34,7.34, 35 , 18344 , 0.21 , 1.08 , 0.13 , 2 , 257 , 0 , 0 , 1
13, avl/delay_10_smtp_2.pcap ,7.34,7.34, 110 , 96544 , 0.27 , 5.67 , 0.71 , 2 , 807 , 0 , 0 , 1
14, avl/delay_10_video_call_0.pcap ,11.90,11.90, 2325 , 2532577 , 36.56 , 241.05 , 30.13 , 435 , 27662 , 3 , 0 , 1
15, avl/delay_10_sip_video_call_full.pcap ,29.35,58.69, 1651 , 120315 , 24.56 , 28.25 , 3.53 , 721 , 48452 , 0 , 0 , 2
16, avl/delay_10_citrix_0.pcap ,43.62,43.62, 272 , 84553 , 6.23 , 29.51 , 3.69 , 272 , 11866 , 0 , 0 , 1
17, avl/delay_10_dns_0.pcap ,1975.02,1975.02, 2 , 162 , 0.01 , 2.56 , 0.32 , 22 , 3950 , 0 , 0 , 1
00, sum ,4083.86,93928.84, 8580 , 6413941 , 0.00 , 997.28 , 124.66 , 2966 , 215136 , 12 , 0 , 23
Memory usage
size_64 : 1687
size_128 : 222
size_256 : 798
size_512 : 1028
size_1024 : 86
size_2048 : 4086
Total : 8.89 Mbytes 159% util #<1>
----
<1> The memory usage of the templates
<2> CSV statistics for all the templates
=== Firmware update for XL710/X710
anchor:xl710-firmware[]
To upgrade the firmware, follow these steps:
==== Download the driver
* Download the i40e driver from link:https://downloadcenter.intel.com/download/24411/Network-Adapter-Driver-for-PCI-E-40-Gigabit-Network-Connections-under-Linux-[here]
* Build the kernel module
[source,bash]
----
$tar -xvzf i40e-1.3.47
$cd i40e-1.3.47/src
$make
$sudo insmod i40e.ko
----
==== Bind the NIC to Linux
In this stage, we bind the NIC to Linux (take it back from DPDK).
[source,bash]
----
$sudo ./dpdk_nic_bind.py --status # show the ports
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 'Device 1583' drv=igb_uio unused= #<1>
0000:02:00.1 'Device 1583' drv=igb_uio unused= #<2>
0000:87:00.0 'Device 1583' drv=igb_uio unused=
0000:87:00.1 'Device 1583' drv=igb_uio unused=
$sudo dpdk_nic_bind.py -u 02:00.0 02:00.1 #<3>
$sudo dpdk_nic_bind.py -b i40e 02:00.0 02:00.1 #<4>
$ethtool -i p1p2 #<5>
driver: i40e
version: 1.3.47
firmware-version: 4.24 0x800013fc 0.0.0 #<6>
bus-info: 0000:02:00.1
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
$ethtool -S p1p2
$lspci -s 02:00.0 -vvv #<7>
----
<1> XL710 port that needs to be unbound from DPDK
<2> XL710 port that needs to be unbound from DPDK
<3> Unbind from DPDK using this command
<4> Bind to the Linux i40e driver
<5> Show the firmware version through the Linux driver
<6> Firmware version
<7> More info
==== Upgrade
Download NVMUpdatePackage.zip from the Intel site link:http://downloadcenter.intel.com/download/24769/NVM-Update-Utility-for-Intel-Ethernet-Converged-Network-Adapter-XL710-X710-Series[here].
It includes the utility `nvmupdate64e`.
Run this:
[source,bash]
----
$sudo ./nvmupdate64e
----
You might need to power cycle and run this command a few times to get the latest firmware.
==== QSFP+ support for XL710
see link:https://www.google.co.il/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0ahUKEwjJhPSH3b3LAhUp7nIKHSkACUYQFggaMAA&url=http%3A%2F%2Fwww.intel.co.id%2Fcontent%2Fdam%2Fwww%2Fpublic%2Fus%2Fen%2Fdocuments%2Frelease-notes%2Fxl710-ethernet-controller-feature-matrix.pdf&usg=AFQjCNFhwozfz-XuKGMOy9_MJDbetw15Og&sig2=ce7YU9F9Et6xf6KvqSFBxg&bvm=bv.116636494,d.bGs[QSFP+ support] for QSFP+ support and Firmware requirement for XL710
=== TRex with ASA 5585
When running TRex against the ASA 5585, note the following:
* The ASA cannot forward IPv4 options, so in case of NAT you need to use --learn-mode 1 (or 3). In this mode, bidirectional UDP flows are not supported.
--learn-mode 1 supports TCP sequence number randomization on both sides of the connection (client to server and server to client). For this to work, TRex must learn
the translation of packets from both sides, so this mode reduces the number of connections per second TRex can generate (the number is still high enough to test
any existing firewall). If you need a higher CPS rate, you can use --learn-mode 3. This mode handles sequence number randomization on the client->server side only.
* Latency should be tested using ICMP with `--l-pkt-mode 2`.
==== ASA 5585 sample configuration
[source,bash]
----
ciscoasa# show running-config
: Saved
:
: Serial Number: JAD194801KX
: Hardware: ASA5585-SSP-10, 6144 MB RAM, CPU Xeon 5500 series 2000 MHz, 1 CPU (4 cores)
:
ASA Version 9.5(2)
!
hostname ciscoasa
enable password 8Ry2YjIyt7RRXU24 encrypted
passwd 2KFQnbNIdI.2KYOU encrypted
names
!
interface Management0/0
management-only
nameif management
security-level 100
ip address 10.56.216.106 255.255.255.0
!
interface TenGigabitEthernet0/8
nameif inside
security-level 100
ip address 15.0.0.1 255.255.255.0
!
interface TenGigabitEthernet0/9
nameif outside
security-level 0
ip address 40.0.0.1 255.255.255.0
!
boot system disk0:/asa952-smp-k8.bin
ftp mode passive
pager lines 24
logging asdm informational
mtu management 1500
mtu inside 9000
mtu outside 9000
no failover
no monitor-interface service-module
icmp unreachable rate-limit 1 burst-size 1
no asdm history enable
arp outside 40.0.0.2 90e2.baae.87d1
arp inside 15.0.0.2 90e2.baae.87d0
arp timeout 14400
no arp permit-nonconnected
route management 0.0.0.0 0.0.0.0 10.56.216.1 1
route inside 16.0.0.0 255.0.0.0 15.0.0.2 1
route outside 48.0.0.0 255.0.0.0 40.0.0.2 1
timeout xlate 3:00:00
timeout pat-xlate 0:00:30
timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 sctp 0:02:00 icmp 0:00:02
timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00
timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00
timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute
timeout tcp-proxy-reassembly 0:01:00
timeout floating-conn 0:00:00
user-identity default-domain LOCAL
http server enable
http 192.168.1.0 255.255.255.0 management
no snmp-server location
no snmp-server contact
crypto ipsec security-association pmtu-aging infinite
crypto ca trustpool policy
telnet 0.0.0.0 0.0.0.0 management
telnet timeout 5
ssh stricthostkeycheck
ssh timeout 5
ssh key-exchange group dh-group1-sha1
console timeout 0
!
tls-proxy maximum-session 1000
!
threat-detection basic-threat
threat-detection statistics access-list
no threat-detection statistics tcp-intercept
dynamic-access-policy-record DfltAccessPolicy
!
class-map icmp-class
match default-inspection-traffic
class-map inspection_default
match default-inspection-traffic
!
!
policy-map type inspect dns preset_dns_map
parameters
message-length maximum client auto
message-length maximum 512
policy-map icmp_policy
class icmp-class
inspect icmp
policy-map global_policy
class inspection_default
inspect dns preset_dns_map
inspect ftp
inspect h323 h225
inspect h323 ras
inspect rsh
inspect rtsp
inspect esmtp
inspect sqlnet
inspect skinny
inspect sunrpc
inspect xdmcp
inspect sip
inspect netbios
inspect tftp
inspect ip-options
!
service-policy global_policy global
service-policy icmp_policy interface outside
prompt hostname context
!
jumbo-frame reservation
!
no call-home reporting anonymous
: end
ciscoasa#
----
==== TRex commands example
With these commands, the configuration is:
1. NAT learn mode (TCP-ACK).
2. Delay of 1 second at startup (-k 1). This was added because the ASA drops the first packets.
3. Latency is configured to ICMP reply mode (--l-pkt-mode 2).
*Simple HTTP:*::
[source,bash]
----
$sudo ./t-rex-64 -f cap2/http_simple.yaml -d 1000 -l 1000 --l-pkt-mode 2 -m 1000 --learn-mode 1 -k 1
----
This profile is more realistic enterprise traffic. (We removed the bidirectional UDP traffic templates from the SFR file because, as described above, they are not supported in this mode.)
*Enterprise profile:*::
[source,bash]
----
$sudo ./t-rex-64 -f avl/sfr_delay_10_1g_asa_nat.yaml -d 1000 -l 1000 --l-pkt-mode 2 -m 4 --learn-mode 1 -k 1
----
The TRex output:
[source,bash]
----
-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 106347896 | 118369678
obytes | 33508291818 | 118433748567
ipackets | 118378757 | 106338782
ibytes | 118434305375 | 33507698915
ierrors | 0 | 0
oerrors | 0 | 0
Tx Bw | 656.26 Mbps | 2.27 Gbps
-Global stats enabled
Cpu Utilization : 18.4 % 31.7 Gb/core
Platform_factor : 1.0
Total-Tx : 2.92 Gbps NAT time out : 0 #<1> (0 in wait for syn+ack) #<1>
Total-Rx : 2.92 Gbps NAT aged flow id: 0 #<1>
Total-PPS : 542.29 Kpps Total NAT active: 163 (12 waiting for syn)
Total-CPS : 8.30 Kcps Nat_learn_errors: 0
Expected-PPS : 539.85 Kpps
Expected-CPS : 8.29 Kcps
Expected-BPS : 2.90 Gbps
Active-flows : 7860 Clients : 255 Socket-util : 0.0489 %
Open-flows : 3481234 Servers : 5375 Socket : 7860 Socket/Clients : 30.8
drop-rate : 0.00 bps #<1>
current time : 425.1 sec
test duration : 574.9 sec
-Latency stats enabled
Cpu Utilization : 0.3 %
if| tx_ok , rx_ok , rx ,error, average , max , Jitter , max window
| , , check, , latency(usec),latency (usec) ,(usec) ,
----------------------------------------------------------------------------------------------------------------
0 | 420510, 420495, 0, 1, 58 , 1555, 14 | 240 257 258 258 219 930 732 896 830 472 190 207 729
1 | 420496, 420509, 0, 1, 51 , 1551, 13 | 234 253 257 258 214 926 727 893 826 468 187 204 724
----
<1> These counters should be zero
anchor:fedora21_example[]
=== Fedora 21 Server installation
Download the .iso file from the link above, and boot with it using a hypervisor or the CIMC console. +
Choose Troubleshooting -> Install in basic graphics mode.
* In packages selection, choose:
** C Development Tools and Libraries
** Development Tools
** System Tools
* Set Ethernet configuration if needed
* Use default hard-drive partitions, reclaim space if needed
* After installation, edit file /etc/selinux/config +
set: +
SELINUX=disabled
* Run: +
systemctl disable firewalld
* Edit file /etc/yum.repos.d/fedora-updates.repo +
set everywhere: +
enabled=0
* Reboot
=== Configure Linux host as network emulator
There are many Linux tutorials on the web, so this is not a full tutorial; it only highlights some key points. The commands
were checked on an Ubuntu system.
For this example:
1. TRex Client side network is 16.0.0.x
2. TRex Server side network is 48.0.0.x
3. Linux Client side network eth0 is configured with IPv4 as 172.168.0.1
4. Linux Server side network eth1 is configured with IPv4 as 10.0.0.1
[source,bash]
----
TRex-0 (16.0.0.1->48.0.0.1 ) <-->
( 172.168.0.1/255.255.0.0)-eth0 [linux] -( 10.0.0.1/255.255.0.0)-eth1
<--> TRex-1 (16.0.0.1<-48.0.0.1)
----
==== Enable forwarding
One time (will be discarded after reboot): +
[source,bash]
----
echo 1 > /proc/sys/net/ipv4/ip_forward
----
To make this permanent, add the following line to the file /etc/sysctl.conf: +
----
net.ipv4.ip_forward=1
----
==== Add static routes
The example below is for the default TRex networks, 48.0.0.0 and 16.0.0.0.
Route all traffic destined to 48.0.0.0 via the gateway 10.0.0.100:
[source,bash]
----
route add -net 48.0.0.0 netmask 255.255.0.0 gw 10.0.0.100
----
Route all traffic destined to 16.0.0.0 via the gateway 172.168.0.100:
[source,bash]
----
route add -net 16.0.0.0 netmask 255.255.0.0 gw 172.168.0.100
----
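The 255.255.0.0 netmasks above correspond to /16 prefixes; on newer distributions without the legacy `route` tool the equivalent would be `ip route add 48.0.0.0/16 via 10.0.0.100`. A tiny helper (hypothetical, just for sanity-checking your routes) converts a dotted mask to a prefix length:

```shell
# Count the set bits of a dotted-quad netmask to get the CIDR prefix length.
mask_to_prefix() {
    local IFS=. octet n=0
    for octet in $1; do
        while [ "$octet" -gt 0 ]; do
            n=$(( n + (octet & 1) ))
            octet=$(( octet >> 1 ))
        done
    done
    echo "$n"
}

echo "48.0.0.0/$(mask_to_prefix 255.255.0.0)"    # 48.0.0.0/16
echo "16.0.0.0/$(mask_to_prefix 255.255.0.0)"    # 16.0.0.0/16
```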
If you use stateless mode and decide to add a route in only one direction, remember to disable the reverse path check. +
For example, to disable it on all interfaces:
[source,bash]
----
for i in /proc/sys/net/ipv4/conf/*/rp_filter ; do
echo 0 > $i
done
----
Alternatively, you can edit /etc/network/interfaces and add something like the following for both ports connected to TRex.
This takes effect only after restarting networking (rebooting the machine is an alternative).
----
auto eth1
iface eth1 inet static
address 16.0.0.100
netmask 255.0.0.0
network 16.0.0.0
broadcast 16.255.255.255
... same for 48.0.0.0
----
==== Add static ARP entries
[source,bash]
----
sudo arp -s 10.0.0.100 <Second TRex port MAC>
sudo arp -s 172.168.0.100 <First TRex port MAC>
----
=== Configure Linux to use VF on Intel X710 and 82599 NICs
TRex supports paravirtualized interfaces such as VMXNET3/virtio/E1000; however, when connected to a vSwitch, the vSwitch limits the performance. VPP or OVS-DPDK can improve performance but require more software resources to handle the rate.
SR-IOV can accelerate performance and reduce CPU usage and latency by utilizing the NIC hardware switch capability (the switching is done in hardware).
TRex version 2.15 now includes SR-IOV support for XL710 and X710.
The following diagram compares between vSwitch and SR-IOV.
image:images/sr_iov_vswitch.png[title="vSwitch_main",width=850]
One use case that shows the performance gain achievable with SR-IOV is creating a pool of TRex VMs that tests a pool of virtual DUTs (e.g. ASAv, CSR etc.).
When using the newly supported SR-IOV, compute, storage and networking resources can be controlled dynamically (e.g. by using OpenStack).
image:images/sr_iov_trex.png[title="vSwitch_main",width=850]
The above diagram is an example of one server with two NICs. TRex VMs can be allocated on one NIC while the DUTs are allocated on the other.
Following are some links we used and lessons we learned while setting up an environment for testing TRex with VF interfaces (using SR-IOV).
This is by no means a full tutorial of VF usage, and different Linux distributions might need slightly different handling.
==== Links and resources
link:http://www.intel.com/content/dam/www/public/us/en/documents/technology-briefs/xl710-sr-iov-config-guide-gbe-linux-brief.pdf[This]
is a good tutorial by Intel of SR-IOV and how to configure. +
link:http://dpdk.org/doc/guides/nics/intel_vf.html[This] is a tutorial from DPDK documentation. +
==== Linux configuration
First, verify BIOS support for the feature. You can consult link:http://kpanic.de/node/8[this link] for directions. +
Second, make sure you have the correct kernel options. +
We added the following options to the kernel boot command in Grub: ``iommu=pt intel_iommu=on pci_pt_e820_access=on''. This
was needed on Fedora and Ubuntu. On CentOS, adding these options was not needed. +
To load the kernel module with the correct VF parameters after reboot, add the line ``options i40e max_vfs=1,1'' to a file in ``/etc/modprobe.d/''. +
On CentOS, we also needed to add the following file (example for x710): +
[source,bash]
----
cat /etc/sysconfig/modules/i40e.modules
#!/bin/sh
rmmod i40e >/dev/null 2>&1
exec /sbin/modprobe i40e >/dev/null 2>&1
----
==== x710 specific instructions
For x710 (i40e driver), we needed to download the latest kernel driver. On all distributions we used, the existing driver was not new enough. +
To make the system use your newly compiled driver with the correct parameters: +
Copy the .ko file to /lib/modules/<kernel version as seen by uname -r>/kernel/drivers/net/ethernet/intel/i40e/i40e.ko +
==== 82599 specific instructions
To make VF interfaces work correctly, we had to increase the MTU on the related PF interfaces. +
For example, if you run with max_vfs=1,1 (one VF per PF), you will have something like this:
[source,bash]
----
sudo ./dpdk_nic_bind.py -s
Network devices using DPDK-compatible driver
============================================
0000:03:10.0 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=
0000:03:10.1 '82599 Ethernet Controller Virtual Function' drv=igb_uio unused=
Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection' if=eth0 drv=igb unused=igb_uio *Active*
0000:03:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth2 drv=ixgbe unused=igb_uio
0000:03:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=eth3 drv=ixgbe unused=igb_uio
----
In order to work with 0000:03:10.0 and 0000:03:10.1, you will have to run the following +
[source,bash]
----
sudo ifconfig eth3 up mtu 9000
sudo ifconfig eth2 up mtu 9000
----
TRex stateful performance::
Using the following command, running on an x710 card with the VF driver, we can see that TRex reaches 30Gbps using only one core. We can also see that the average latency is around 20 usec, essentially the same value we get on loopback ports with the x710 physical function without VF.
[source,python]
----
$sudo ./t-rex-64 -f cap2/http_simple.yaml -m 40000 -l 100 -c 1 -p
-Per port stats table
ports | 0 | 1
-----------------------------------------------------------------------------------------
opackets | 106573954 | 107433792
obytes | 99570878833 | 100374254956
ipackets | 107413075 | 106594490
ibytes | 100354899813 | 99590070585
ierrors | 1038 | 1027
oerrors | 0 | 0
Tx Bw | 15.33 Gbps | 15.45 Gbps
-Global stats enabled
Cpu Utilization : 91.5 % 67.3 Gb/core
Platform_factor : 1.0
Total-Tx : 30.79 Gbps
Total-Rx : 30.79 Gbps
Total-PPS : 4.12 Mpps
Total-CPS : 111.32 Kcps
Expected-PPS : 4.11 Mpps
Expected-CPS : 111.04 Kcps
Expected-BPS : 30.71 Gbps
Active-flows : 14651 Clients : 255 Socket-util : 0.0912 %
Open-flows : 5795073 Servers : 65535 Socket : 14652 Socket/Clients : 57.5
drop-rate : 0.00 bps
current time : 53.9 sec
test duration : 3546.1 sec
-Latency stats enabled
Cpu Utilization : 23.4 %
if| tx_ok , rx_ok , rx check ,error, latency (usec) , Jitter
| , , , , average , max , (usec)
-------------------------------------------------------------------------------
0 | 5233, 5233, 0, 0, 19 , 580, 5 | 37 37 37 4
1 | 5233, 5233, 0, 0, 22 , 577, 5 | 38 40 39 3
----
TRex stateless performance::
[source,python]
----
$sudo ./t-rex-64 -i -c 1
trex>portattr
Port Status
port | 0 | 1
-------------------------------------------------------------
driver | net_i40e_vf | net_i40e_vf
description | XL710/X710 Virtual | XL710/X710 Virtual
With the console command:
start -f stl/imix.py -m 8mpps --force --port 0
we can see that we reach 8M packets per second, which in this case is around 24.28 Gbit/second.
Global Statistics
connection : localhost, Port 4501 total_tx_L2 : 24.28 Gb/sec
version : v2.15 total_tx_L1 : 25.55 Gb/sec
cpu_util. : 80.6% @ 1 cores (1 per port) total_rx : 24.28 Gb/sec
rx_cpu_util. : 66.8% total_pps : 7.99 Mpkt/sec
async_util. : 0.18% / 1.84 KB/sec drop_rate : 0.00 b/sec
queue_full : 3,467 pkts
Port Statistics
port | 0 | 1 | total
----------------------------------------------------------------------
owner | ibarnea | ibarnea |
link | UP | UP |
state | TRANSMITTING | IDLE |
speed | 40 Gb/s | 40 Gb/s |
CPU util. | 80.6% | 0.0% |
-- | | |
Tx bps L2 | 24.28 Gbps | 0.00 bps | 24.28 Gbps
Tx bps L1 | 25.55 Gbps | 0 bps | 25.55 Gbps
Tx pps | 7.99 Mpps | 0.00 pps | 7.99 Mpps
Line Util. | 63.89 % | 0.00 % |
--- | | |
Rx bps | 0.00 bps | 24.28 Gbps | 24.28 Gbps
Rx pps | 0.00 pps | 7.99 Mpps | 7.99 Mpps
---- | | |
opackets | 658532501 | 0 | 658532501
ipackets | 0 | 658612569 | 658612569
obytes | 250039721918 | 0 | 250039721918
ibytes | 0 | 250070124150 | 250070124150
tx-bytes | 250.04 GB | 0 B | 250.04 GB
rx-bytes | 0 B | 250.07 GB | 250.07 GB
tx-pkts | 658.53 Mpkts | 0 pkts | 658.53 Mpkts
rx-pkts | 0 pkts | 658.61 Mpkts | 658.61 Mpkts
----- | | |
oerrors | 0 | 0 | 0
ierrors | 0 | 15,539 | 15,539
----
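The gap between total_tx_L2 (24.28 Gb/sec) and total_tx_L1 (25.55 Gb/sec) in the output above is simply the per-packet wire overhead: Ethernet adds 20 bytes (preamble plus inter-frame gap) per packet on the wire.

```shell
# L1 rate = L2 rate + 20 bytes (preamble + IFG) * 8 bits per packet
l1_bps() {
    echo $(( $1 + $2 * 160 ))
}

l1_bps 24280000000 7990000    # -> 25558400000, i.e. ~25.56 Gb/sec
```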
==== Performance
See the performance tests we did link:trex_vm_bench.html[here]
=== Mellanox ConnectX-4 support
anchor:connectx_support[]
The Mellanox ConnectX-4 adapter family supports 100/56/40/25/10 Gb/s Ethernet speeds.
Its DPDK support is a bit different from Intel DPDK support; more information can be found link:http://dpdk.org/doc/guides/nics/mlx5.html[here].
Intel NICs do not require additional kernel drivers (except for igb_uio, which is already supported in most distributions). ConnectX-4 works on top of the Infiniband API (verbs) and requires special kernel modules/user space libraries.
This means that the OFED package must be installed in order to work with this NIC.
Installing the full OFED package is the simplest way to make it work (installing only part of the package might work too, but did not for us).
The advantage of this model is that you can control the NIC using standard Linux tools (ethtool and ifconfig will work).
The disadvantage is the OFED dependency.
==== Installation
==== Install Linux
We tested the following distro with TRex and OFED. Others might work too.
* CentOS 7.2
The following distros were tested and did *not* work for us:
* Fedora 21 (3.17.4-301.fc21.x86_64)
* Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-25-generic x86_64) -- crash when RSS was enabled link:https://trex-tgn.cisco.com/youtrack/issue/trex-294[MLX RSS issue]
==== Install OFED
Information was taken from link:http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers[Install OFED]
* Download 3.4-2/4.0 OFED tar for your distro
[IMPORTANT]
=====================================
The version must be *MLNX_OFED_LINUX-3.4-2* or higher (4.0.x)
=====================================
[IMPORTANT]
=====================================
Make sure you have an internet connection without firewalls for HTTPS/HTTP - required by yum/apt-get
=====================================
.Verify md5
[source,bash]
----
$md5sum MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.2-x86_64.tgz
58b9fb369d7c62cedbc855661a89a9fd MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.2-x86_64.tgz
----
.Open the tar
[source,bash]
----
$tar -xvzf MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.2-x86_64.tgz
$cd MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.2-x86_64
----
.Run Install script
[source,bash]
----
$sudo ./mlnxofedinstall
Log: /tmp/ofed.build.log
Logs dir: /tmp/MLNX_OFED_LINUX.10406.logs
Below is the list of MLNX_OFED_LINUX packages that you have chosen
(some may have been added by the installer due to package dependencies):
ofed-scripts
mlnx-ofed-kernel-utils
mlnx-ofed-kernel-dkms
iser-dkms
srp-dkms
mlnx-sdp-dkms
mlnx-rds-dkms
mlnx-nfsrdma-dkms
libibverbs1
ibverbs-utils
libibverbs-dev
libibverbs1-dbg
libmlx4-1
libmlx4-dev
libmlx4-1-dbg
libmlx5-1
libmlx5-dev
libmlx5-1-dbg
libibumad
libibumad-static
libibumad-devel
ibacm
ibacm-dev
librdmacm1
librdmacm-utils
librdmacm-dev
mstflint
ibdump
libibmad
libibmad-static
libibmad-devel
libopensm
opensm
opensm-doc
libopensm-devel
infiniband-diags
infiniband-diags-compat
mft
kernel-mft-dkms
libibcm1
libibcm-dev
perftest
ibutils2
libibdm1
ibutils
cc-mgr
ar-mgr
dump-pr
ibsim
ibsim-doc
knem-dkms
mxm
fca
sharp
hcoll
openmpi
mpitests
knem
rds-tools
libdapl2
dapl2-utils
libdapl-dev
srptools
mlnx-ethtool
libsdp1
libsdp-dev
sdpnetstat
This program will install the MLNX_OFED_LINUX package on your machine.
Note that all other Mellanox, OEM, OFED, or Distribution IB packages will be removed.
Do you want to continue?[y/N]:y
Checking SW Requirements...
One or more required packages for installing MLNX_OFED_LINUX are missing.
Attempting to install the following missing packages:
autotools-dev tcl debhelper dkms tk8.4 libgfortran3 graphviz chrpath automake dpatch flex bison autoconf quilt m4 tcl8.4 libltdl-dev pkg-config pytho
bxml2 tk swig gfortran libnl1
..
Removing old packages...
Installing new packages
Installing ofed-scripts-3.4...
Installing mlnx-ofed-kernel-utils-3.4...
Installing mlnx-ofed-kernel-dkms-3.4...
Removing old packages...
Installing new packages
Installing ofed-scripts-3.4...
Installing mlnx-ofed-kernel-utils-3.4...
Installing mlnx-ofed-kernel-dkms-3.4...
Installing iser-dkms-1.8.1...
Installing srp-dkms-1.6.1...
Installing mlnx-sdp-dkms-3.4...
Installing mlnx-rds-dkms-3.4...
Installing mlnx-nfsrdma-dkms-3.4...
Installing libibverbs1-1.2.1mlnx1...
Installing ibverbs-utils-1.2.1mlnx1...
Installing libibverbs-dev-1.2.1mlnx1...
Installing libibverbs1-dbg-1.2.1mlnx1...
Installing libmlx4-1-1.2.1mlnx1...
Installing libmlx4-dev-1.2.1mlnx1...
Installing libmlx4-1-dbg-1.2.1mlnx1...
Installing libmlx5-1-1.2.1mlnx1...
Installing libmlx5-dev-1.2.1mlnx1...
Installing libmlx5-1-dbg-1.2.1mlnx1...
Installing libibumad-1.3.10.2.MLNX20150406.966500d...
Installing libibumad-static-1.3.10.2.MLNX20150406.966500d...
Installing libibumad-devel-1.3.10.2.MLNX20150406.966500d...
Installing ibacm-1.2.1mlnx1...
Installing ibacm-dev-1.2.1mlnx1...
Installing librdmacm1-1.1.0mlnx...
Installing librdmacm-utils-1.1.0mlnx...
Installing librdmacm-dev-1.1.0mlnx...
Installing mstflint-4.5.0...
Installing ibdump-4.0.0...
Installing libibmad-1.3.12.MLNX20160814.4f078cc...
Installing libibmad-static-1.3.12.MLNX20160814.4f078cc...
Installing libibmad-devel-1.3.12.MLNX20160814.4f078cc...
Installing libopensm-4.8.0.MLNX20160906.32a95b6...
Installing opensm-4.8.0.MLNX20160906.32a95b6...
Installing opensm-doc-4.8.0.MLNX20160906.32a95b6...
Installing libopensm-devel-4.8.0.MLNX20160906.32a95b6...
Installing infiniband-diags-1.6.6.MLNX20160814.999c7b2...
Installing infiniband-diags-compat-1.6.6.MLNX20160814.999c7b2...
Installing mft-4.5.0...
Installing kernel-mft-dkms-4.5.0...
Installing libibcm1-1.0.5mlnx2...
Installing libibcm-dev-1.0.5mlnx2...
Installing perftest-3.0...
Installing ibutils2-2.1.1...
Installing libibdm1-1.5.7.1...
Installing ibutils-1.5.7.1...
Installing cc-mgr-1.0...
Installing ar-mgr-1.0...
Installing dump-pr-1.0...
Installing ibsim-0.6...
Installing ibsim-doc-0.6...
Installing knem-dkms-1.1.2.90mlnx1...
Installing mxm-3.5.220c57f...
Installing fca-2.5.2431...
Installing sharp-1.1.1.MLNX20160915.8763a35...
Installing hcoll-3.6.1228...
Installing openmpi-1.10.5a1...
Installing mpitests-3.2.18...
Installing knem-1.1.2.90mlnx1...
Installing rds-tools-2.0.7...
Installing libdapl2-2.1.9mlnx...
Installing dapl2-utils-2.1.9mlnx...
Installing libdapl-dev-2.1.9mlnx...
Installing srptools-1.0.3...
Installing mlnx-ethtool-4.2...
Installing libsdp1-1.1.108...
Installing libsdp-dev-1.1.108...
Installing sdpnetstat-1.60...
Selecting previously unselected package mlnx-fw-updater.
(Reading database ... 70592 files and directories currently installed.)
Preparing to unpack .../mlnx-fw-updater_3.4-1.0.0.0_amd64.deb ...
Unpacking mlnx-fw-updater (3.4-1.0.0.0) ...
Setting up mlnx-fw-updater (3.4-1.0.0.0) ...
Added RUN_FW_UPDATER_ONBOOT=no to /etc/infiniband/openib.conf
Attempting to perform Firmware update...
Querying Mellanox devices firmware ...
Device #1:
Device Type: ConnectX4
Part Number: MCX416A-CCA_Ax
Description: ConnectX-4 EN network interface card; 100GbE dual-port QSFP28; PCIe3.0 x16; ROHS R6
PSID: MT_2150110033
PCI Device Name: 03:00.0
Base GUID: 248a07030014fc60
Base MAC: 0000248a0714fc60
Versions: Current Available
FW 12.16.1006 12.17.1010
PXE 3.4.0812 3.4.0903
Status: Update required
Found 1 device(s) requiring firmware update...
Device #1: Updating FW ... Done
Restart needed for updates to take effect.
Log File: /tmp/MLNX_OFED_LINUX.16084.logs/fw_update.log
Please reboot your system for the changes to take effect.
Device (03:00.0):
03:00.0 Ethernet controller: Mellanox Technologies MT27620 Family
Link Width: x16
PCI Link Speed: 8GT/s
Device (03:00.1):
03:00.1 Ethernet controller: Mellanox Technologies MT27620 Family
Link Width: x16
PCI Link Speed: 8GT/s
Installation passed successfully
To load the new driver, run:
/etc/init.d/openibd restart
----
.Reboot
[source,bash]
----
$sudo reboot
----
.After reboot
[source,bash]
----
$ibv_devinfo
hca_id: mlx5_1
transport: InfiniBand (0)
fw_ver: 12.17.1010 << 12.17.00
node_guid: 248a:0703:0014:fc61
sys_image_guid: 248a:0703:0014:fc60
vendor_id: 0x02c9
vendor_part_id: 4115
hw_ver: 0x0
board_id: MT_2150110033
phys_port_cnt: 1
Device ports:
port: 1
state: PORT_DOWN (1)
max_mtu: 4096 (5)
active_mtu: 1024 (3)
sm_lid: 0
port_lid: 0
port_lmc: 0x00
link_layer: Ethernet
hca_id: mlx5_0
transport: InfiniBand (0)
fw_ver: 12.17.1010
node_guid: 248a:0703:0014:fc60
sys_image_guid: 248a:0703:0014:fc60
vendor_id: 0x02c9
vendor_part_id: 4115
hw_ver: 0x0
board_id: MT_2150110033
phys_port_cnt: 1
Device ports:
port: 1
state: PORT_DOWN (1)
max_mtu: 4096 (5)
active_mtu: 1024 (3)
sm_lid: 0
port_lid: 0
port_lmc: 0x00
link_layer: Ethernet
----
.ibdev2netdev
[source,bash]
-----
$ibdev2netdev
mlx5_0 port 1 ==> eth6 (Down)
mlx5_1 port 1 ==> eth7 (Down)
-----
.find the ports
[source,bash]
-----
$sudo ./dpdk_setup_ports.py -t
+----+------+---------++---------------------------------------------
| ID | NUMA | PCI || Name | Driver |
+====+======+=========++===============================+===========+=
| 0 | 0 | 06:00.0 || VIC Ethernet NIC | enic |
+----+------+---------++-------------------------------+-----------+-
| 1 | 0 | 07:00.0 || VIC Ethernet NIC | enic |
+----+------+---------++-------------------------------+-----------+-
| 2 | 0 | 0a:00.0 || 82599ES 10-Gigabit SFI/SFP+ Ne| ixgbe |
+----+------+---------++-------------------------------+-----------+-
| 3 | 0 | 0a:00.1 || 82599ES 10-Gigabit SFI/SFP+ Ne| ixgbe |
+----+------+---------++-------------------------------+-----------+-
| 4 | 0 | 0d:00.0 || Device 15d0 | |
+----+------+---------++-------------------------------+-----------+-
| 5 | 0 | 10:00.0 || I350 Gigabit Network Connectio| igb |
+----+------+---------++-------------------------------+-----------+-
| 6 | 0 | 10:00.1 || I350 Gigabit Network Connectio| igb |
+----+------+---------++-------------------------------+-----------+-
| 7 | 1 | 85:00.0 || 82599ES 10-Gigabit SFI/SFP+ Ne| ixgbe |
+----+------+---------++-------------------------------+-----------+-
| 8 | 1 | 85:00.1 || 82599ES 10-Gigabit SFI/SFP+ Ne| ixgbe |
+----+------+---------++-------------------------------+-----------+-
| 9 | 1 | 87:00.0 || MT27700 Family [ConnectX-4] | mlx5_core | #<1>
+----+------+---------++-------------------------------+-----------+-
| 10 | 1 | 87:00.1 || MT27700 Family [ConnectX-4] | mlx5_core | #<2>
+----+------+---------++---------------------------------------------
-----
<1> ConnectX-4 port 0
<2> ConnectX-4 port 1
.Config file example
[source,bash]
-----
### Config file generated by dpdk_setup_ports.py ###
- port_limit: 2
version: 2
interfaces: ['87:00.0', '87:00.1']
port_info:
- ip: 1.1.1.1
default_gw: 2.2.2.2
- ip: 2.2.2.2
default_gw: 1.1.1.1
platform:
master_thread_id: 0
latency_thread_id: 1
dual_if:
- socket: 1
threads: [8,9,10,11,12,13,14,15,24,25,26,27,28,29,30,31]
-----
==== TRex specific implementation details
TRex uses flow director filter to steer specific packets to specific queues.
To support that, we set the IPv4 TOS / IPv6 TC LSB to *1* for packets we want handled by software (other packets will be dropped). Latency packets therefore have this bit turned on (this is true for all NIC types, not only ConnectX-4).
This means that if the DUT clears this bit for some reason (changes the TOS LSB to 0, e.g. from 0x3 to 0x2), some TRex features (latency measurement, for example) will not work properly.
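For illustration (a minimal sketch, not TRex code), the check boils down to the least-significant bit of the TOS byte, which is why a DUT rewriting 0x3 to 0x2 breaks latency measurement:

```shell
# Succeeds when the IPv4 TOS / IPv6 TC LSB is set (packet steered to software).
tos_lsb_set() {
    [ $(( $1 & 1 )) -eq 1 ]
}

tos_lsb_set 0x3 && echo "0x3: latency packet reaches software"
tos_lsb_set 0x2 || echo "0x2: LSB cleared by DUT, packet dropped"
```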
==== Which NIC to buy?
A NIC with two ports works better from a performance perspective, so it is better to have the MCX456A-ECAT (dual 100gb ports) and *not* the MCX455A-ECAT (single 100gb port).
==== Limitation/Issues
* Stateless mode ``per stream statistics'' feature is handled in software (No hardware support like in X710 card).
* link:https://trex-tgn.cisco.com/youtrack/issue/trex-261[Latency issue]
* link:https://trex-tgn.cisco.com/youtrack/issue/trex-262[Stateful RX out of order]
* link:https://trex-tgn.cisco.com/youtrack/issue/trex-273[Fedora 21 & OFED 3.4.1]
==== Performance Cycles/Packet ConnectX-4 vs Intel XL710
For TRex version v2.11, these are the comparison results between XL710 and ConnectX-4 for various scenarios.
.Stateless MPPS/Core [Preliminary]
image:images/xl710_vs_mlx5_64b.png[title="Stateless 64B"]
.Stateless Gb/Core [Preliminary]
image:images/xl710_vs_mlx5_var_size.png[title="Stateless variable size packet"]
*Comments*::
1. MLX5 can reach ~50MPPS while XL710 is limited to 35MPPS. (With a potential future fix it will be ~65MPPS.)
2. For Stateless/Stateful 256B profiles, ConnectX-4 uses half the CPU cycles per packet. ConnectX-4 probably handles chained mbufs (scatter gather) better.
3. In the average stateful scenario, ConnectX-4 is the same as XL710.
4. For Stateless 64B/IMIX profiles, ConnectX-4 uses 50-90% more CPU cycles per packet (actually even more, due to the TRex scheduler overhead) - meaning that in the worst case you will need x2 CPU for the same total MPPS.
[NOTE]
=====================================
There is a task to automate the production of these reports.
=====================================
==== Troubleshooting
* Before running TRex, make sure the commands `ibv_devinfo` and `ibdev2netdev` show the NICs.
* `ifconfig` should work too; you need to be able to ping from those ports.
* Run the TRex server in verbose mode, for example `$sudo ./t-rex-64 -i -v 7`.
==== Limitations/Issues
* The order of the mlx5 PCI addresses in /etc/trex_cfg.yaml should match the order reported by the `./dpdk_setup_ports.py` tool (see link:https://groups.google.com/forum/#!searchin/trex-tgn/unable$20to$20run$20rx-check$20with$204$20port%7Csort:relevance/trex-tgn/DsORbw3AbaU/IT-KLcZbDgAJ[issue_thread] and link:https://trex-tgn.cisco.com/youtrack/issue/trex-295[trex-295]); otherwise the error below is reported.
.Will work
[source,bash]
----
- version : 2
interfaces : ["03:00.0","03:00.1"]
port_limit : 2
----
.Will not work
[source,bash]
----
- version : 2
interfaces : ["03:00.1","03:00.0"]
port_limit : 2
----
.The error
[source,bash]
----
PMD: net_mlx5: 0x7ff2dcfcb2c0: flow director mode 0 not supported
EAL: Error - exiting with code: 1
Cause: rte_eth_dev_filter_ctrl: err=-22, port=2
----
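The ordering constraint above can be sanity-checked with a few lines (a sketch; plain lexical sorting matches PCI order for ports of the same NIC):

```shell
# Succeeds only when the interface list is already in ascending PCI order.
pci_ordered() {
    printf '%s\n' "$@" | sort -c 2>/dev/null
}

pci_ordered "03:00.0" "03:00.1" && echo "will work"
pci_ordered "03:00.1" "03:00.0" || echo "will not work"
```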
==== Build with native OFED
In some cases there is a need to build dpdk-mlx5 with a different OFED (not just 4.0; maybe newer).
To do so, run this on the native machine:
[source,bash]
----
[csi-trex-07]> ./b configure
Setting top to : /auto/srg-sce-swinfra-usr/emb/users/hhaim/work/depot/asr1k/emb/private/hhaim/bp_sim_git/trex-core
Setting out to : /auto/srg-sce-swinfra-usr/emb/users/hhaim/work/depot/asr1k/emb/private/hhaim/bp_sim_git/trex-core/linux_dpdk/build_dpdk
Checking for program 'g++, c++' : /bin/g++
Checking for program 'ar' : /bin/ar
Checking for program 'gcc, cc' : /bin/gcc
Checking for program 'ar' : /bin/ar
Checking for program 'ldd' : /bin/ldd
Checking for library z : yes
Checking for OFED : Found needed version 4.0 #1
Checking for library ibverbs : yes
'configure' finished successfully (1.826s)
----
<1> make sure the OFED version was identified
[source,python]
----
index fba7540..a55fe6b 100755
--- a/linux_dpdk/ws_main.py
+++ b/linux_dpdk/ws_main.py
@@ -143,8 +143,11 @@ def missing_pkg_msg(fedora, ubuntu):
def check_ofed(ctx):
ctx.start_msg('Checking for OFED')
ofed_info='/usr/bin/ofed_info'
- ofed_ver= '-3.4-'
- ofed_ver_show= 'v3.4'
+
+ ofed_ver_re = re.compile('.*[-](\d)[.](\d)[-].*')
+
+ ofed_ver= 40 <1>
+ ofed_ver_show= '4.0'
--- a/scripts/dpdk_setup_ports.py
+++ b/scripts/dpdk_setup_ports.py
@@ -366,8 +366,8 @@ Other network devices
ofed_ver_re = re.compile('.*[-](\d)[.](\d)[-].*')
- ofed_ver= 34
- ofed_ver_show= '3.4-1'
+ ofed_ver= 40 <2>
+ ofed_ver_show= '4.0'
----
<1> change to new version
<2> change to new version
=== Cisco VIC support
anchor:ciscovic_support[]
* Supported from TRex version v2.12
* Only 1300 series Cisco adapter supported
* Must have VIC firmware version 2.0(13) for UCS C-series servers. It will be GA in February 2017.
* Must have VIC firmware version 3.1(2) for blade servers (which supports more filtering capabilities).
* The feature can be enabled via Cisco CIMC or UCSM with the 'advanced filters' radio button. When enabled, these additional flow director modes are available:
RTE_ETH_FLOW_NONFRAG_IPV4_OTHER
RTE_ETH_FLOW_NONFRAG_IPV4_SCTP
RTE_ETH_FLOW_NONFRAG_IPV6_UDP
RTE_ETH_FLOW_NONFRAG_IPV6_TCP
RTE_ETH_FLOW_NONFRAG_IPV6_SCTP
RTE_ETH_FLOW_NONFRAG_IPV6_OTHER
==== vNIC Configuration Parameters
*Number of Queues*::
The maximum numbers of receive queues (RQs), work queues (WQs) and completion queues (CQs) are configurable on a per-vNIC basis through the Cisco UCS Manager (CIMC or UCSM).
These values should be configured as follows:
* The number of WQs should be greater than or equal to the number of threads (-c value) plus 1
* The number of RQs should be greater than 5
* The number of CQs should be set to WQs + RQs
* Unless there is a lack of resources due to creating many vNICs, it is recommended that the WQ and RQ sizes be set to the *maximum*.
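The sizing rules above can be captured in a small check (a sketch; `threads` is the TRex `-c` value):

```shell
# WQ >= threads + 1, RQ > 5, CQ == WQ + RQ
vnic_queues_ok() {
    local threads=$1 wq=$2 rq=$3 cq=$4
    [ "$wq" -ge $(( threads + 1 )) ] && [ "$rq" -gt 5 ] && [ "$cq" -eq $(( wq + rq )) ]
}

vnic_queues_ok 1 2 6 8 && echo "vNIC config ok"
vnic_queues_ok 1 2 2 4 || echo "rq too small"
```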
*Advanced filters*::
Advanced filters should be enabled.
*MTU*::
Set the MTU to the maximum, 9000-9190 (depends on the FW version).
More information can be found in the link:http://www.dpdk.org/doc/guides/nics/enic.html?highlight=enic[enic DPDK] guide.
image:images/UCS-B-adapter_policy_1.jpg[title="vic configuration",align="center",width=800]
image:images/UCS-B-adapter_policy_2.jpg[title="vic configuration",align="center",width=800]
If the vNIC is not configured correctly, this error will be seen:
.VIC error in case of wrong RQ/WQ
[source,bash]
----
Starting TRex v2.15 please wait ...
no client generator pool configured, using default pool
no server generator pool configured, using default pool
zmq publisher at: tcp://*:4500
Number of ports found: 2
set driver name rte_enic_pmd
EAL: Error - exiting with code: 1
Cause: Cannot configure device: err=-22, port=0 #<1>
----
<1> There are not enough queues.
Running it in verbose mode (`-v 7`):
[source,bash]
----
$sudo ./t-rex-64 -f cap2/dns.yaml -c 1 -m 1 -d 10 -l 1000 -v 7
----
will give more info:
[source,bash]
----
EAL: probe driver: 1137:43 rte_enic_pmd
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: vNIC MAC addr 00:25:b5:99:00:4c wq/rq 256/512 mtu 1500, max mtu:9190
PMD: rte_enic_pmd: vNIC csum tx/rx yes/yes rss yes intr mode any type min
PMD: rte_enic_pmd: vNIC resources avail: wq 2 rq 2 cq 4 intr 6 #<1>
EAL: PCI device 0000:0f:00.0 on NUMA socket 0
EAL: probe driver: 1137:43 rte_enic_pmd
PMD: rte_enic_pmd: Advanced Filters available
PMD: rte_enic_pmd: vNIC MAC addr 00:25:b5:99:00:5c wq/rq 256/512 mtu 1500, max
----
<1> rq is 2, which means 1 input queue - fewer than the minimum required by TRex (rq should be at least 5)
==== Limitations/Issues
* Stateless mode ``per stream statistics'' feature is handled in software (No hardware support like in X710 card).
* link:https://trex-tgn.cisco.com/youtrack/issue/trex-272[QSFP+ issue]
* VLAN 0 Priority Tagging
If a vNIC is configured in TRUNK mode by the UCS manager, the adapter will priority tag egress packets according to 802.1Q if they were not already VLAN tagged by software. If the adapter is connected to a properly configured switch, there will be no unexpected behavior.
In test setups where an Ethernet port of a Cisco adapter in TRUNK mode is connected point-to-point to another adapter port or connected though a router instead of a switch, all ingress packets will be VLAN tagged. TRex can work with that see more link:http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/ucs-c-series-rack-servers/117637-technote-UCS-00.html[upstream VIC]
Upstream the VIC always tags packets with an 802.1p header.In downstream it is possible to remove the tag (not supported by TRex yet)
=== More active flows
From version v2.13 there is a new Stateful scheduler that works better in the case of many concurrent/active flows.
For EMIX, 70% better performance was observed.
In this tutorial there are 14 DP cores & up to 8M flows.
There is a special config file to enlarge the number of flows. This tutorial presents the difference in performance between the old scheduler and the new one.
==== Setup details
[cols="1,5"]
|=================
| Server: | UCSC-C240-M4SX
| CPU: | 2 x Intel(R) Xeon(R) CPU E5-2667 v3 @ 3.20GHz
| RAM: | 65536 @ 2133 MHz
| NICs: | 2 x Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)
| QSFP: | Cisco QSFP-H40G-AOC1M
| OS: | Fedora 18
| Switch: | Cisco Nexus 3172 Chassis, System version: 6.0(2)U5(2).
| TRex: | v2.13/v2.12 using 7 cores per dual interface.
|=================
==== Traffic profile
.cap2/cur_flow_single.yaml
[source,python]
----
- duration : 0.1
generator :
distribution : "seq"
clients_start : "16.0.0.1"
clients_end : "16.0.0.255"
servers_start : "48.0.0.1"
servers_end : "48.0.255.255"
clients_per_gb : 201
min_clients : 101
dual_port_mask : "1.0.0.0"
cap_info :
- name: cap2/udp_10_pkts.pcap <1>
cps : 100
ipg : 200
rtt : 200
w : 1
----
<1> One directional UDP flow with 10 packets of 64B
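Back-of-envelope, the packet rate this profile generates is cps x packets-per-flow x the -m multiplier (a rough sketch that ignores ramp-up and duration effects; with the -m 30000 used in this tutorial, the single 100 cps / 10-packet template yields 30 Mpps in total):

```shell
# expected pps = cps * packets per flow * -m multiplier
expected_pps() {
    echo $(( $1 * $2 * $3 ))
}

expected_pps 100 10 30000    # -> 30000000 (30 Mpps)
```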
==== Config file command
./cfg/trex_08_5mflows.yaml
[source,python]
----
- port_limit: 4
version: 2
interfaces: ['05:00.0', '05:00.1', '84:00.0', '84:00.1']
port_info:
- ip: 1.1.1.1
default_gw: 2.2.2.2
- ip: 3.3.3.3
default_gw: 4.4.4.4
- ip: 4.4.4.4
default_gw: 3.3.3.3
- ip: 2.2.2.2
default_gw: 1.1.1.1
platform:
master_thread_id: 0
latency_thread_id: 15
dual_if:
- socket: 0
threads: [1,2,3,4,5,6,7]
- socket: 1
threads: [8,9,10,11,12,13,14]
memory :
dp_flows : 1048576 <1>
----
<1> add memory section with more flows
==== Traffic command
.command
[source,bash]
----
$sudo ./t-rex-64 -f cap2/cur_flow_single.yaml -m 30000 -c 7 -d 40 -l 1000 --active-flows 5000000 -p --cfg cfg/trex_08_5mflows.yaml
----
The number of active flows can be changed using the `--active-flows` CLI argument. In this example it is set to 5M flows.
==== Script to get performance per active number of flows
[source,python]
----
import argparse
import csv
import math

from trex_stf_lib.trex_client import CTRexClient  # import path may vary by TRex version

def minimal_stateful_test(server,csv_file,a_active_flows):
trex_client = CTRexClient(server) <1>
trex_client.start_trex( <2>
c = 7,
m = 30000,
f = 'cap2/cur_flow_single.yaml',
d = 30,
l = 1000,
p=True,
cfg = "cfg/trex_08_5mflows.yaml",
active_flows=a_active_flows,
nc=True
)
result = trex_client.sample_to_run_finish() <3>
active_flows=result.get_value_list('trex-global.data.m_active_flows')
cpu_utl=result.get_value_list('trex-global.data.m_cpu_util')
pps=result.get_value_list('trex-global.data.m_tx_pps')
queue_full=result.get_value_list('trex-global.data.m_total_queue_full')
    if queue_full[-1]>10000:
        print("WARNING QUEUE WAS FULL");
    tuple=(active_flows[-5],cpu_utl[-5],pps[-5],queue_full[-1]) <4>
    file_writer = csv.writer(csv_file)
    file_writer.writerow(tuple);
if __name__ == '__main__':
test_file = open('tw_2_layers.csv', 'wb');
parser = argparse.ArgumentParser(description="Example for TRex Stateful, assuming server daemon is running.")
parser.add_argument('-s', '--server',
dest='server',
help='Remote trex address',
default='127.0.0.1',
type = str)
args = parser.parse_args()
max_flows=8000000;
min_flows=100;
active_flow=min_flows;
num_point=10
factor=math.exp(math.log(max_flows/min_flows,math.e)/num_point);
for i in range(num_point+1):
print("=====================",i,math.floor(active_flow))
minimal_stateful_test(args.server,test_file,math.floor(active_flow))
active_flow=active_flow*factor
test_file.close();
----
<1> connect
<2> Start with different active_flows
<3> wait for the results
<4> get the results and save to csv file
This script iterates from 100 up to 8M active flows and saves the results to a CSV file.
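The loop's geometric spacing can be sketched in isolation; `geometric_points` below is a hypothetical helper (not part of the TRex API) that reproduces the `factor` computation, yielding `num_point + 1` values from `min_flows` up to `max_flows` where each step multiplies by the same constant:

```python
import math

def geometric_points(min_flows, max_flows, num_point):
    # Constant ratio so that min_flows * factor**num_point == max_flows
    factor = math.exp(math.log(max_flows / min_flows) / num_point)
    points = []
    value = float(min_flows)
    for _ in range(num_point + 1):
        points.append(math.floor(value))
        value *= factor
    return points

points = geometric_points(100, 8000000, 10)
# points[0] == 100, and the last point lands on ~8000000
```

Geometric spacing gives the same number of measurement points per decade, which is why the charts below are readable on a log-scale x-axis.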
==== The results v2.12 vs v2.14
.MPPS/core
image:images/tw1_0.png[title="results",align="center"]
.MPPS/core
image:images/tw0_0_chart.png[title="results",align="center",width=800]
* TW0 - v2.14 default configuration
* PQ - v2.12 default configuration
* To run the same script on v2.12 (which does not support the `active_flows` directive) a patch was introduced.
*Observation*::
* TW performs better (up to 250%) in the case of 25-100K active flows
* TW scales better with the number of active flows
==== Tuning
Let's add another mode, called *TW1*, in which the scheduler is tuned to have more buckets (at the cost of more memory).
.TW1 cap2/cur_flow_single_tw_8.yaml
[source,yaml]
----
- duration : 0.1
generator :
distribution : "seq"
clients_start : "16.0.0.1"
clients_end : "16.0.0.255"
servers_start : "48.0.0.1"
servers_end : "48.0.255.255"
clients_per_gb : 201
min_clients : 101
dual_port_mask : "1.0.0.0"
tw :
buckets : 16384 <1>
levels : 2 <2>
bucket_time_usec : 20.0
cap_info :
- name: cap2/udp_10_pkts.pcap
cps : 100
ipg : 200
rtt : 200
w : 1
----
<1> more buckets
<2> less levels
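As a rough sketch of the buckets/levels trade-off: in a hierarchical timer wheel where each bucket of level `k` covers the full span of the level below it, the total schedulable horizon grows as `buckets^levels * bucket_time`. The model below is illustrative only; the actual TRex scheduler internals may differ:

```python
def wheel_span_usec(buckets, levels, bucket_time_usec):
    # Level k covers buckets**(k+1) * bucket_time_usec microseconds,
    # so the top level bounds the total schedulable horizon.
    return (buckets ** levels) * bucket_time_usec

# TW1 configuration from the YAML above: 16K buckets, 2 levels, 20 usec ticks
tw1 = wheel_span_usec(16384, 2, 20.0)  # 5368709120.0 usec, roughly 89 minutes
```

More buckets per level mean more flows are scheduled directly at level 0 (cheap), while fewer levels are needed to cover the same time horizon.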
In *TW2* mode we have the same template duplicated: one with a short IPG and another with a long IPG.
10% of the new flows will have the long IPG.
.TW2 cap2/cur_flow.yaml
[source,yaml]
----
- duration : 0.1
generator :
distribution : "seq"
clients_start : "16.0.0.1"
clients_end : "16.0.0.255"
servers_start : "48.0.0.1"
servers_end : "48.0.255.255"
clients_per_gb : 201
min_clients : 101
dual_port_mask : "1.0.0.0"
tcp_aging : 0
udp_aging : 0
mac : [0x0,0x0,0x0,0x1,0x0,0x00]
#cap_ipg : true
cap_info :
- name: cap2/udp_10_pkts.pcap
cps : 10
ipg : 100000
rtt : 100000
w : 1
- name: cap2/udp_10_pkts.pcap
cps : 90
ipg : 2
rtt : 2
w : 1
----
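Why the 10% of long-IPG flows still matter: by Little's law, the number of concurrently active flows is the arrival rate (CPS) times the flow duration, so the slow template dominates the active-flow count even at a tenth of the CPS. A back-of-the-envelope sketch, assuming flow duration is approximately `(packets - 1) * IPG` and that `udp_10_pkts.pcap` carries 10 packets per flow:

```python
def estimated_active_flows(cps, ipg_usec, num_pkts):
    # Assumed flow lifetime: (num_pkts - 1) inter-packet gaps
    duration_sec = (num_pkts - 1) * ipg_usec / 1e6
    # Little's law: L = lambda * W
    return cps * duration_sec

long_ipg = estimated_active_flows(cps=10, ipg_usec=100000, num_pkts=10)  # 9.0
short_ipg = estimated_active_flows(cps=90, ipg_usec=2, num_pkts=10)      # 0.00162
```

Per these assumptions, essentially all concurrently active flows come from the slow template, so TW2 exercises the timer wheel's long-timeout path while the fast template carries most of the packet rate.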
==== Full results
* PQ - v2.12 default configuration
* TW0 - v2.14 default configuration
* TW1 - v2.14 more buckets 16K
* TW2 - v2.14 two templates
.MPPS/core Comparison
image:images/tw1.png[title="results",align="center",width=800]
.MPPS/core
image:images/tw1_tbl.png[title="results",align="center"]
.Factor relative to v2.12 results
image:images/tw2.png[title="results",align="center",width=800]
.Extrapolation Total GbE per UCS with average packet size of 600B
image:images/tw3.png[title="results",align="center",width=800]
*Observation*::
* TW2 (two templates) has almost no performance impact
* TW1 (more buckets) improves the performance up to a point
* TW in general is better than PQ
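For post-processing, the CSV written by the script above can be read back; `load_results` is a hypothetical helper, and the column order is assumed to match the tuple the script writes (active flows, CPU utilization, PPS, queue-full counter):

```python
import csv

def load_results(path):
    """Read rows of (active_flows, cpu_util, pps, queue_full) from the CSV."""
    rows = []
    with open(path) as f:
        for row in csv.reader(f):
            if row:  # skip blank lines
                active, cpu, pps, qfull = (float(x) for x in row)
                rows.append({'active_flows': active, 'cpu_util': cpu,
                             'pps': pps, 'queue_full': qfull})
    return rows

# Example: print MPPS per measured point
# for r in load_results('tw_2_layers.csv'):
#     print(r['active_flows'], r['pps'] / 1e6)
```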