TCP throughput bulk data transfer tests were also executed between Geneva and Chicago. To allow comparison with the previously obtained Netherlight results, tests with two host pairs were performed, as well as tests with five host pairs. All tests were executed bi-directionally between Geneva and Chicago. For the two host pair tests the streams ran between w05gva-ge-0 <-> w05chi-ge-1 and w06gva-ge-0 <-> w06chi-ge-1. The streams for the five host pair tests were formed correspondingly, except that the hosts w04gva and w04chi did not participate because host w04chi was unavailable.
In both setups the TCP throughput was measured as a function of the total number of streams and the sum of the TCP window sizes over all streams. The Iperf tool was used as traffic generator because it can run multiple streams in a light-weight fashion using the pthread library. For each single measurement a test time of 60 s was used, which is sufficient to cope with the combination of TCP slow start and the long round-trip time of the link (about 100 ms).
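As a rough sketch of the sizing behind these tests: on a path of about 1 Gbit/s with a 100 ms round-trip time, the total TCP window (summed over all streams) must cover the bandwidth-delay product to fill the pipe. The Iperf invocation below is hypothetical; the host name and option values are illustrative, not taken from the test logs (`-P` starts parallel streams, one pthread each, `-w` sets the window, `-t 60` matches the 60 s test time).

```shell
# Bandwidth-delay product: rate * RTT, in bytes.
RATE_BITS=1000000000   # ~1 Gbit/s path
RTT_MS=100             # ~100 ms round-trip time
BDP_BYTES=$(( RATE_BITS / 8 * RTT_MS / 1000 ))
echo "required total TCP window: ${BDP_BYTES} bytes"   # 12500000 bytes (~12.5 MB)

# Hypothetical Iperf invocation (exact options used in the tests may differ):
#   server side:  iperf -s -w 4M
#   client side:  iperf -c w05chi-ge-1 -P 4 -w 4M -t 60
# With 4 streams of 4 MB windows the summed window exceeds the BDP above.
```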
With two host pairs the following results were obtained. The first figure displays the total TCP throughput, summed over all streams, as a function of the total TCP window size (also summed over all streams) and the number of streams for the direction Geneva -> Chicago; the second figure shows the equivalent results for the reverse direction. The third figure presents the average TCP throughput per stream as a function of the TCP window size per stream for the direction Geneva -> Chicago, with the results for each number of streams represented by a separate plot trace; the fourth figure shows these results for the reverse direction.
The following figures present the equivalent results with five host pairs: again the total throughput as a function of the total window size and the number of streams for the direction Geneva -> Chicago and for the reverse direction, followed by the average throughput per stream for each number of streams as a function of the window size per stream, for the direction Geneva -> Chicago and for the reverse direction.
Figure: Total TCP throughput, summed over all streams, as a function of the total TCP window size, also summed over all streams, and the number of streams for the direction Geneva -> Chicago. Two host pairs were used in the tests.
Figure: Total TCP throughput, summed over all streams, as a function of the total TCP window size, also summed over all streams, and the number of streams for the direction Chicago -> Geneva. Two host pairs were used in the tests.
Figure: Average TCP throughput per stream as a function of the TCP window size per stream for the direction Geneva -> Chicago. The results for each number of streams are represented by a separate plot trace. The tests were run with two host pairs.
Figure: Average TCP throughput per stream as a function of the TCP window size per stream for the direction Chicago -> Geneva. The results for each number of streams are represented by a separate plot trace. The tests were run with two host pairs.
Figure: Total TCP throughput, summed over all streams, as a function of the total TCP window size, also summed over all streams, and the number of streams for the direction Geneva -> Chicago. Five host pairs were used in the tests.
Figure: Total TCP throughput, summed over all streams, as a function of the total TCP window size, also summed over all streams, and the number of streams for the direction Chicago -> Geneva. Five host pairs were used in the tests.
Figure: Average TCP throughput per stream as a function of the TCP window size per stream for the direction Geneva -> Chicago. The results for each number of streams are represented by a separate plot trace. The tests were run with five host pairs.
Figure: Average TCP throughput per stream as a function of the TCP window size per stream for the direction Chicago -> Geneva. The results for each number of streams are represented by a separate plot trace. The tests were run with five host pairs.
From these results the following conclusions can be drawn:
UDP bulk data transfer tests were also executed between Geneva and Chicago in both directions, using the same two host pairs as for the TCP tests. In addition, three host pair tests were executed, in which streams between the hosts w02gva-ge-0 and w02chi-ge-1 were added to the streams used in the two host pair tests. Here too the Iperf application was used to generate the UDP traffic.
The shaped UDP bandwidth sent per stream was varied up to and including 1000 Mbit/s in steps of 12 Mbit/s. At each source host one to four streams were started to the corresponding destination host. Again the pthread library was used to start the multiple streams. In these tests the number of packets lost was measured as a function of the total shaped UDP bandwidth and the number of streams for both directions Geneva <-> Chicago.
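A sweep of this kind can be sketched with Iperf's UDP mode, where `-u` selects UDP and `-b` shapes the send rate. The actual Iperf command line is commented out as a hypothetical example (host name and stream count are illustrative); the loop structure and step count follow the parameters stated above.

```shell
# Sweep the shaped UDP bandwidth per stream from 12 up to 1000 Mbit/s
# in 12 Mbit/s steps, as in the tests described above.
# Server side would run: iperf -s -u
for bw in $(seq 12 12 1000); do
  : # iperf -c w05chi-ge-1 -u -b ${bw}M -P 4 -t 60   # hypothetical options
done

# Number of rate steps in the sweep:
echo "steps: $(( 1000 / 12 ))"   # steps: 83
```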
From these tests the following results were obtained:
Additional tests using either the pthread library or, alternatively, multiple processes to start the multiple streams showed that:
Out-of-order packets were only observed when the pthread library had been used. This implies that the out-of-order packets mentioned before were probably not induced by the network.
With multiple processes instead of the pthread library, no out-of-order packets or bandwidth reporting errors were found. Until now this had also been our experience with the Iperf tool.
Using multiple processes instead of the pthread library, no out-of-order packets or bandwidth reporting errors were found. The reason is probably that each stream process is completely handled by one processor, but this is no guarantee that these errors will never occur.
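The multiple-process alternative described above can be sketched as follows: instead of one Iperf invocation with `-P` (one pthread per stream), one independent process is launched per stream and then waited for. The Iperf line is a commented-out hypothetical; a no-op background job stands in for it so the launch/wait structure itself is runnable.

```shell
# One independent process per stream, so each stream can be handled
# entirely by one processor, rather than threads sharing a process.
STREAMS=4
pids=""
for i in $(seq 1 "$STREAMS"); do
  : # iperf -c w05chi-ge-1 -w 4M -t 60 &   # hypothetical per-stream command
  sleep 0 &                                # stand-in background job
  pids="$pids $!"
done

# Wait for all stream processes to finish before reading out results.
for p in $pids; do wait "$p"; done
echo "started ${STREAMS} independent stream processes"
```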
From the UDP bulk data transfer tests executed at the DataTAG link and the Netherlight Lambda the following can be concluded:
Out-of-order packets and bandwidth reporting errors only occurred when the pthread library had been used; these were probably host effects.