[Ndn-interest] Maximum Throughput in NDN

Davide Pesavento davidepesa at gmail.com
Tue May 21 04:43:25 PDT 2019


On Tue, May 21, 2019 at 12:45 AM Junxiao Shi
<shijunxiao at email.arizona.edu> wrote:
>
> Hi Davide & Klaus
>
>> > I have some recent numbers for ndncatchunks from April 2019. These were using NFD 0.6.5 and ndn-tools 0.6.3 on Ubuntu 16.04. Hardware is 2x Xeon E5-2640 and 32GB memory.
>>
>> The experiments should be repeated with ndn-tools 0.6.4; ndncatchunks uses a new CUBIC-like congestion control in that version.
>
>
> Yes I'll do that soon.
>
>> > Raw numbers are read from ndncatchunks directly. Reported numbers are average and stdev over 18 flows (6 flows per trial).
>>
>> As I already privately told Junxiao some time ago, I believe these numbers are rather inconclusive, but they do raise some eyebrows and should motivate further investigations into some peculiar (if not pathological) behaviors of NFD that should be fixed or improved. On the other hand, some results may need to be dismissed due to incorrect testing methodology.
>
>
> Can you elaborate on what is "incorrect testing methodology"?

I'm not sure; it could be a lot of things. I'm not saying the
methodology is incorrect, just that further investigation is needed
before drawing any conclusions, and that some numbers may not be
caused by NFD bugs or missing features but rather by artifacts of the
specific testing environment or experiment setup.

>
>>
>> Btw, what do you mean "3 flows per trial"? If you start 6 prod/cons pairs, won't you get 6 concurrent flows?
>
>
> It's a typo. It should be "6 flows per trial".
>
>> > Goodput (Mbps)
>> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
>> > ----------|----------|------------|--------|-----------
>> > NFD-small | 63.88    | 0.27       | 61.27  | 2.52
>> > NFD-large | 65.68    | 1.55       | 85.17  | 1.91
>>
>> So the CS is only beneficial if you get a large percentage of hits...?
>> (The NFD-large/CS case has 100% hit ratio)
>
>
> Looking at the PROD avg column, NFD-small is slower than NFD-large because the forwarding thread needs to spend time evicting entries.
> The NFD-small CS avg column possibly reflects a 0% hit ratio, because packets at the front of the files have already been evicted by the time the second retrievals start.

Yes. So are you agreeing with me?
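The eviction effect described above can be sketched with a toy LRU model (an assumption for illustration; NFD's actual CS replacement policy is more involved, and the `LruCs` class and segment counts here are hypothetical). With sequential retrieval, a cache smaller than the file thrashes: by the time the second retrieval starts, the front segments are gone, and each new miss evicts the segment that would have been needed next.

```python
from collections import OrderedDict

class LruCs:
    """Toy LRU content store (hypothetical model, not NFD's actual CS)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = self.misses = 0

    def get(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)  # refresh recency on hit
            self.hits += 1
            return True
        self.misses += 1
        return False

    def insert(self, name):
        self.entries[name] = True
        while len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used

# File of 1000 segments, CS holds only 600 (an "NFD-small" analogue).
cs = LruCs(capacity=600)
segments = [f"/file/seg={i}" for i in range(1000)]
for s in segments:          # first retrieval populates the CS
    if not cs.get(s):
        cs.insert(s)
cs.hits = cs.misses = 0
for s in segments:          # second retrieval: front segments already evicted
    if not cs.get(s):
        cs.insert(s)
print(cs.hits / (cs.hits + cs.misses))  # 0.0
```

This is the classic sequential-scan worst case for LRU: the second pass gets a 0% hit ratio even though 60% of the file fits in the cache.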

>
>> > Total number of lost/retransmitted segments
>> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
>> > ----------|----------|------------|--------|-----------
>> > NFD-small | 361.61   | 75.00      | 164.00 | 114.51
>> > NFD-large | 226.28   | 164.70     | 0.00   | 0.00
>>
>> The stddev is huge on these numbers, which means either you need more trials or something weird is going on.
>
>
> Yeah, they are unstable. I don't have an automated experiment runner yet, so I could only do three manual trials.
>
>> > RTT avg (ms)
>> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
>> > ----------|----------|------------|--------|-----------
>> > NFD-small | 164.87   | 22.42      | 153.04 | 23.14
>> > NFD-large | 174.48   | 18.36      | 8.19   | 0.11
>>
>> With the exception of NFD-large/CS (where IIUC no packets should be forwarded to the producers), the RTT looks quite bad, considering the link delay is effectively zero. My guess is that there is too much buffering/queueing somewhere.
>
>
> The kernel has finite buffering, Boost has no buffering, and StreamTransport has infinite buffering. #4499 strikes again.

That's possible, but again, more investigation is needed.
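A back-of-envelope calculation shows how unbounded send buffering alone could explain RTTs of this magnitude over a zero-delay link (the queue size below is an illustrative assumption, not a measured value): a packet queued behind B bytes draining at rate R waits roughly B/R before it even reaches the wire.

```python
# If a transport queues `buffered_bytes` ahead of a new packet and the
# queue drains at `drain_rate_mbps`, the packet waits ~B/R in the queue.
def queueing_delay_ms(buffered_bytes, drain_rate_mbps):
    return buffered_bytes * 8 / (drain_rate_mbps * 1e6) * 1e3

# ~65 Mbps goodput (as in the PROD column) with ~1.3 MB queued
# (hypothetical backlog) already yields RTTs in the observed range:
print(queueing_delay_ms(1_300_000, 65))  # 160.0 ms
```

So a standing queue on the order of a megabyte in an unbounded StreamTransport buffer would be enough to produce ~160 ms of self-inflicted delay, consistent with the measured averages.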

>
>> > If Data come from CS instead of producers, it's even faster.
>>
>> This is not true in general. In fact, it has been proven false more than once in the past.
>
>
> Because lookup is too slow?

Yes. With current NFD, this can easily happen if you have a slow CPU,
the CS hit ratio is low, and the upstream link delay is low (and link
bandwidth is not the bottleneck).

Davide
