[Ndn-interest] Maximum Throughput in NDN

Junxiao Shi shijunxiao at email.arizona.edu
Mon May 20 08:16:54 PDT 2019


Dear folks

I have some recent numbers for ndncatchunks from April 2019, obtained with
NFD 0.6.5 and ndn-tools 0.6.3 on Ubuntu 16.04. Hardware is 2x Xeon
E5-2640 CPUs and 32GB of memory.
The experiment procedure is:

   1. Start NFD.
   2. Start six producers simultaneously, wait for initialization.
   3. Start six consumers simultaneously, wait for download completion;
   Data come from producers.
   4. Start six consumers simultaneously, wait for download completion;
   Data may come from CS.
   5. Three trials for each setup. Restart all programs between trials.

Compared setups:

   - NFD with small CS: 65536 entries, insufficient to store all chunks
   - NFD with large CS: 262144 entries, more than enough to store all chunks
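The two setups differ only in CS capacity. Assuming the stock nfd.conf
shipped with NFD 0.6.5 (the exact configuration file used in the
experiment was not shown), this is controlled by the cs_max_packets key
in the tables section:

```
tables
{
  ; 65536 for the NFD-small setup, 262144 for NFD-large
  cs_max_packets 65536
}
```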

Raw numbers are read directly from ndncatchunks output. Reported numbers
are the average and stdev over 18 flows (6 flows per trial across 3
trials). PROD columns refer to step 3 (Data from producers); CS columns
refer to step 4 (Data possibly from CS).
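As a sketch of the reporting method, the per-column average and stdev can
be computed like this. The per-flow values below are made-up placeholders
for illustration; the actual 18 per-flow measurements were not published
in this post.

```python
from statistics import mean, stdev

# Hypothetical per-flow goodput samples in Mbps (placeholders only).
flows = [63.5, 64.1, 63.9, 64.2, 63.6, 64.0]

avg = round(mean(flows), 2)  # "avg" column
sd = round(stdev(flows), 2)  # "stdev" column (sample stdev, n-1 divisor)
print(avg, sd)
```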

*Goodput (Mbps)*
SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
----------|----------|------------|--------|-----------
NFD-small | 63.88    | 0.27       | 61.27  | 2.52
NFD-large | 65.68    | 1.55       | 85.17  | 1.91

*Total number of lost/retransmitted segments*
SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
----------|----------|------------|--------|-----------
NFD-small | 361.61   | 75.00      | 164.00 | 114.51
NFD-large | 226.28   | 164.70     | 0.00   | 0.00

*Window decreases*
SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
----------|----------|------------|--------|-----------
NFD-small | 1.94     | 0.24       | 1.56   | 0.51
NFD-large | 1.61     | 0.50       | 0.00   | 0.00

*RTT avg (ms)*
SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
----------|----------|------------|--------|-----------
NFD-small | 164.87   | 22.42      | 153.04 | 23.14
NFD-large | 174.48   | 18.36      | 8.19   | 0.11

Yours, Junxiao

On Mon, Feb 4, 2019 at 1:35 PM Chengyu Fan <chengy.fan at gmail.com> wrote:

>
>
>>
>> The ndncatchunks file transfer tool achieved
>> <https://imsure.github.io/ndn-vegas-report.pdf> a 15~25Mbps goodput,
>> using the report author's TCP Vegas congestion control scheme.
>>
>
> @Junxiao Shi <shijunxiao at email.arizona.edu> , could you tell us more
> about the test, such as the topology, the machine
> specification, nfd version, and CS is used or not?
>

This is a report I found online. I do not know details. The related GitHub
repositories were last updated in Dec 2017.


> My recent test, using ndncatchunks on NFD 0.6.4 (CS size was set to 0)
> on my local laptop, achieved around 400Mbps.
>

Zero CS capacity doesn't make sense to me. With CS enabled, I got
comparable numbers: 383~394Mbps total for six simultaneous flows. If Data
come from CS instead of producers, it's even faster.


> NFD's forwarding performance was benchmarked
>> <https://redmine.named-data.net/issues/3564#note-24> in 2016 at 6.8K
>> Interest-Data exchanges per second (tested with 10-byte segments; with
>> 1300-byte segments that rate corresponds to a goodput of about
>> 70Mbps); no newer benchmark reports are available.
>>
>
> The NFD benchmark was done for version 0.4.2.
> I remember there were some improvements after that benchmark, so the
> results should now be better than 6.8K Interest-Data exchanges per
> second.
>

Maybe, but there are no published benchmark results. I would say the
ndn-traffic-generator benchmarks are less relevant than ndncatchunks
benchmarks, so there's no need to redo them.
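For reference, the conversion quoted earlier from the 2016 benchmark is
simple arithmetic: 6.8K Interest-Data exchanges per second, with an
assumed 1300-byte Data payload per exchange, works out to roughly 70Mbps
of goodput.

```python
# Convert an Interest-Data exchange rate into goodput (payload bits/s).
rate = 6800            # exchanges per second (2016 NFD benchmark figure)
payload_bytes = 1300   # assumed Data payload size per exchange
goodput_mbps = rate * payload_bytes * 8 / 1e6
print(goodput_mbps)    # 70.72
```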


