[Ndn-interest] Maximum Throughput in NDN

Davide Pesavento davidepesa at gmail.com
Mon May 20 14:37:05 PDT 2019


On Mon, May 20, 2019 at 5:17 PM Junxiao Shi
<shijunxiao at email.arizona.edu> wrote:
>
> Dear folks
>
> I have some recent numbers for ndncatchunks from April 2019. These were obtained using NFD 0.6.5 and ndn-tools 0.6.3 on Ubuntu 16.04. Hardware is 2x Xeon E5-2640 and 32GB memory.

The experiments should be repeated with ndn-tools 0.6.4; ndncatchunks
uses a new CUBIC-like congestion control in that version.
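
For reference, this is roughly the window growth rule a CUBIC-like
scheme follows (per RFC 8312); a minimal C++ sketch only, not
necessarily matching the exact constants or implementation details of
ndncatchunks 0.6.4:

    // Minimal sketch of CUBIC window growth (RFC 8312). Illustrative only;
    // ndncatchunks 0.6.4 may use different constants and details.
    #include <algorithm>
    #include <cmath>

    constexpr double BETA = 0.7;     // multiplicative decrease factor
    constexpr double CUBIC_C = 0.4;  // CUBIC scaling constant

    struct CubicState {
      double wMax = 0.0;          // cwnd (in segments) just before the last decrease
      double lastDecrease = 0.0;  // time of the last decrease (seconds)
    };

    // Congestion window t = (now - lastDecrease) seconds after the last decrease.
    double cubicWindow(const CubicState& s, double now)
    {
      double t = now - s.lastDecrease;
      double k = std::cbrt(s.wMax * (1 - BETA) / CUBIC_C);
      return CUBIC_C * std::pow(t - k, 3) + s.wMax;
    }

    // On loss (or a congestion mark): remember wMax and back off multiplicatively.
    double onCongestionEvent(CubicState& s, double cwnd, double now)
    {
      s.wMax = cwnd;
      s.lastDecrease = now;
      return std::max(cwnd * BETA, 1.0);
    }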

> The experiment procedure is:
>
> Start NFD.
> Start six producers simultaneously, wait for initialization.
> Start six consumers simultaneously, wait for download completion; Data come from producers.
> Start six consumers simultaneously, wait for download completion; Data may come from CS.
> Three trials for each setup. Restart all programs between trials.
>
> Compared setups:
>
> NFD with small CS: 65536 entries, insufficient to store all chunks
> NFD with large CS: 262144 entries, more than enough to store all chunks
>
> Raw numbers are read from ndncatchunks directly. Reported numbers are average and stdev over 18 flows (3 flows per trial).

As I already privately told Junxiao some time ago, I believe these
numbers are rather inconclusive, but they do raise some eyebrows and
should motivate further investigation into some peculiar (if not
pathological) NFD behaviors that need to be fixed or improved. On the
other hand, some results may need to be dismissed due to incorrect
testing methodology.

Btw, what do you mean by "3 flows per trial"? If you start 6 prod/cons
pairs, won't you get 6 concurrent flows?

>
> Goodput (Mbps)
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 63.88    | 0.27       | 61.27  | 2.52
> NFD-large | 65.68    | 1.55       | 85.17  | 1.91

So the CS is only beneficial if you get a large percentage of hits...?
(The NFD-large/CS case has 100% hit ratio)

>
> Total number of lost/retransmitted segments
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 361.61   | 75.00      | 164.00 | 114.51
> NFD-large | 226.28   | 164.70     | 0.00   | 0.00

The stddev is huge on these numbers, which means either you need more
trials or something weird is going on.

>
> Window decreases
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 1.94     | 0.24       | 1.56   | 0.51
> NFD-large | 1.61     | 0.50       | 0.00   | 0.00
>
> RTT avg (ms)
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 164.87   | 22.42      | 153.04 | 23.14
> NFD-large | 174.48   | 18.36      | 8.19   | 0.11

With the exception of NFD-large/CS (where IIUC no packets should be
forwarded to the producers), the RTT looks quite bad, considering the
link delay is effectively zero. My guess is that there is too much
buffering/queueing somewhere.
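
A rough back-of-the-envelope check: at ~65 Mbps goodput per flow and a
~165 ms average RTT, each flow keeps about 65 Mbit/s * 0.165 s ≈ 10.7 Mbit
≈ 1.3 MB of data in flight. On a path with essentially zero propagation
delay, that is consistent with most of the measured RTT being queueing
and processing time inside NFD or the local transports rather than the
wire.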

> On Mon, Feb 4, 2019 at 1:35 PM Chengyu Fan <chengy.fan at gmail.com> wrote:
>>
>>>
>>> The ndncatchunks file transfer tool achieved a 15~25 Mbps goodput, using its TCP Vegas congestion control scheme.
>>
>> @Junxiao Shi , could you tell us more about the test, such as the topology, the machine specification, nfd version, and CS is used or not?
>
> This is a report I found online. I do not know details. The related GitHub repositories were last updated in Dec 2017.
>
>>
>> My recent test, using ndncatchunks running on NFD 0.6.4 (CS size was set to 0) on my local laptop, achieved around 400Mbps.
>
> Zero CS capacity doesn't make sense to me. With CS enabled, I got comparable numbers: 383~394Mbps total for six simultaneous flows.

> If Data come from CS instead of producers, it's even faster.

This is not true in general. In fact, it has been proven false more
than once in the past.

Davide

