Flowgrind - TCP traffic generator
=================================

[![Build Status](https://travis-ci.org/flowgrind/flowgrind.svg?branch=next)](https://travis-ci.org/flowgrind/flowgrind)
[![Coverity Scan Build Status](https://scan.coverity.com/projects/1663/badge.svg)](https://scan.coverity.com/projects/1663)

Flowgrind is an advanced TCP traffic generator for testing and benchmarking **Linux**, **FreeBSD**, and **Mac OS X** TCP/IP stacks. In contrast to similar tools like iperf or netperf, it features a distributed architecture in which throughput and other metrics are measured between arbitrary flowgrind server processes.

* Website: [flowgrind.github.io](https://flowgrind.github.io)
* Issues: [GitHub Issues](https://github.com/flowgrind/flowgrind/issues)
* API documentation: [Doxygen](https://flowgrind.github.io/doxygen/index.html)


What Can It Do?
===============

Besides **goodput** (throughput), flowgrind measures the application-layer **interarrival time** (IAT) and **round-trip time** (RTT), the **blockcount**, and the number of network **transactions/s**. Unlike most cross-platform testing tools, flowgrind can output transport-layer information that is usually internal to the TCP/IP stack. On Linux and FreeBSD, for example, this includes, among others, the kernel's estimation of the end-to-end RTT, the size of the TCP congestion window (CWND), and the slow-start threshold (SSTHRESH).

Flowgrind has a distributed architecture. It is split into two components: the *flowgrind daemon* and the *flowgrind controller*. Using the controller, flows can be set up between any two systems running the flowgrind daemon (third-party tests). At regular intervals during the test, the controller collects and displays the measured results from the daemons. It can run multiple flows at once with the same or different settings and schedule each one individually. Test and control connections can optionally be diverted to different interfaces.

The generated traffic is either a bulk transfer, a rate-limited flow, or a sophisticated request/response test. Flowgrind uses libpcap to automatically dump traffic for qualitative analysis.
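As an illustration, a rate-limited variant of such a test might look like the sketch below. The `-T` (flow duration) and `-R` (write rate) options are taken from flowgrind's man page; the host names and values are placeholders, and the command is only executed if `flowgrind` is actually installed.

```shell
# Sketch, not a definitive invocation: one flow from host1 to host2,
# 30 seconds long, with the sender's write rate limited to 10 Mbit/s.
# -T s=30 (duration) and -R s=10M (rate) follow flowgrind(1); the host
# names are placeholders for machines running flowgrindd.
FLOW_ARGS="-H s=host1,d=host2 -T s=30 -R s=10M"
echo "flowgrind $FLOW_ARGS"

# Only run the test if flowgrind is available on this machine; ignore
# errors so the sketch is safe to paste even without reachable hosts:
if command -v flowgrind >/dev/null 2>&1; then
    flowgrind $FLOW_ARGS || true
fi
```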


Building flowgrind
==================

Flowgrind builds cleanly on *Linux*, *FreeBSD*, and *Mac OS X*; support for other operating systems is currently not planned. Flowgrind expects `libxmlrpc-c` and OSSP `uuid` to be available. Additionally, `libgsl` and `libpcap` should be installed for the optional advanced traffic generation and automatic dump support.
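On Debian or Ubuntu, the dependencies are typically available as packages. The package names below are assumptions based on common Debian naming and may differ between releases and distributions; check your package index before installing.

```shell
# Assumed Debian/Ubuntu package names -- verify with `apt-cache search`:
#   libxmlrpc-core-c3-dev  XML-RPC library for C/C++
#   libossp-uuid-dev       OSSP uuid
#   libgsl-dev             GNU Scientific Library (optional, traffic generation)
#   libpcap-dev            packet capture (optional, automatic dumps)
PKGS="libxmlrpc-core-c3-dev libossp-uuid-dev libgsl-dev libpcap-dev"
echo "apt-get install $PKGS"   # run the printed command as root
```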

Flowgrind is built using GNU autotools on all supported platforms. You can build it using the following commands:

	# cd flowgrind
	# autoreconf -i
	# ./configure
	# make

For more information, see [INSTALL.md](INSTALL.md).


Instructions to run a test
==========================

1. Start `flowgrindd` on all machines that should be an endpoint of a flow.
2. Execute `flowgrind` on some machine (not necessarily one of the endpoints) with the host names of the endpoints passed via the `-H` option.

Assume we have four machines, host0, host1, host2, and host3, with flowgrind installed on all of them. We want to measure flows from host1 to host2 and from host1 to host3 in parallel, controlled from host0. First, we start `flowgrindd` on host1 through host3. On host0 we execute:

	# flowgrind -n 2 -F 0 -H s=host1,d=host2 -F 1 -H s=host1,d=host3

To keep control traffic from influencing the test connection, flowgrind allows the RPC control connection to be set up over a different interface. A typical scenario is testing a WiFi connection while running the control traffic over a wired connection.

Assume two machines running `flowgrindd`, each with two network adapters, one wired and one wireless. We run `flowgrind` on a machine that is connected by wire to the test machines. The first machine has the addresses 10.0.0.1 and 192.168.0.1, the other has the addresses 10.0.0.2 and 192.168.0.2. So our host argument will be:

	# flowgrind -H s=192.168.0.1/10.0.0.1,d=192.168.0.2/10.0.0.2

In words: test from 192.168.0.1 to 192.168.0.2 on the nodes identified by 10.0.0.1 and 10.0.0.2, respectively.


See also
========
There are other popular TCP measurement tools you might look into, especially if you are mainly interested in fast unidirectional bulk-transfer performance.
 * [Iperf3](https://github.com/esnet/iperf) - a fresh reimplementation of the original iperf
 * [Netperf](http://www.netperf.org/netperf/) - a network performance benchmark that also supports Unix domain sockets