[Ndn-interest] Maximum Throughput in NDN

Klaus Schneider klaus at cs.arizona.edu
Mon May 20 11:21:13 PDT 2019


Some comments on this evaluation:

1. What's the file size? Since there are fewer than 2 window decreases on 
average, I'd suggest using larger files to get more representative results 
(see the back-of-envelope below).

2. What's the main conclusion? That getting data from the CS is about 40% 
faster than getting it from the local producer app?
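
On point 1, a rough back-of-envelope (the 100 MB file size is purely 
hypothetical, since the actual size is what I'm asking about; the goodput 
and RTT figures are roughly the per-flow averages from the tables below):

    # Illustrative only: the file size is a guess, the other numbers are
    # approximately the per-flow averages reported below.
    file_bits = 100 * 8e6      # hypothetical 100 MB file
    goodput_bps = 65e6         # ~65 Mbps per flow (goodput table)
    rtt_s = 0.165              # ~165 ms (RTT table)

    duration_s = file_bits / goodput_bps    # ~12.3 s per flow
    rtts = duration_s / rtt_s               # ~75 RTTs per transfer
    print(duration_s, rtts)

With fewer than 2 window decreases over that many RTTs, the reported 
average is dominated by a single congestion epoch; a larger file hits more 
loss events and gives a steadier goodput figure.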


Also, I got higher throughput (490 Mbps) on weaker hardware (an i7-6600U 
laptop CPU), without using the CS and with just 1 simultaneous consumer: 
https://redmine.named-data.net/issues/4362#note-60

Much lower RTTs too. Any explanation for the difference?


See one more comment below.

On 5/20/19 8:16 AM, Junxiao Shi wrote:
> Dear folks
> 
> I have some recent numbers for ndncatchunks from April 2019. These were 
> using NFD 0.6.5 and ndn-tools 0.6.3 on Ubuntu 16.04. Hardware is 2x Xeon 
> E5-2640 and 32GB memory.
> The experiment procedure is:
> 
>  1. Start NFD.
>  2. Start six producers simultaneously, wait for initialization.
>  3. Start six consumers simultaneously, wait for download completion;
>     Data come from producers.
>  4. Start six consumers simultaneously, wait for download completion;
>     Data may come from CS.
>  5. Three trials for each setup. Restart all programs between trials.
> 
> Compared setups:
> 
>   * NFD with small CS: 65536 entries, insufficient to store all chunks
>   * NFD with large CS: 262144 entries, more than enough to store all chunks
> 
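
For anyone trying to reproduce this, here is a minimal sketch of how I read 
the procedure above (prefixes, file name, and timings are my own 
placeholders, not taken from Junxiao's setup; the CS capacity would be set 
beforehand via cs_max_packets in the tables section of nfd.conf):

    # Sketch only; assumes NFD is already running (step 1) with the desired
    # cs_max_packets (65536 or 262144) configured in nfd.conf, and that
    # testfile.bin is the file being served. Names and sleeps are placeholders.
    import subprocess, time

    prefixes = ["/throughput-test/%d" % i for i in range(6)]

    # Step 2: six producers, each serving the test file under its own prefix.
    producers = [
        subprocess.Popen(["ndnputchunks", p], stdin=open("testfile.bin", "rb"))
        for p in prefixes
    ]
    time.sleep(10)   # wait for segmentation and prefix registration

    def run_consumers():
        # Six simultaneous consumers; ndncatchunks reports its statistics
        # (goodput, RTT, retransmissions, window decreases) on stderr.
        consumers = [
            subprocess.Popen(["ndncatchunks", p],
                             stdout=open("/dev/null", "wb"))
            for p in prefixes
        ]
        for c in consumers:
            c.wait()

    run_consumers()   # step 3: Data come from the producers
    run_consumers()   # step 4: Data may come from the CS
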
> Raw numbers are read from ndncatchunks directly. Reported numbers are 
> average and stdev over 18 flows (6 flows per trial, 3 trials).
> 
> *Goodput (Mbps)*
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 63.88    | 0.27       | 61.27  | 2.52
> NFD-large | 65.68    | 1.55       | 85.17  | 1.91
> 
> *Total number of lost/retransmitted segments*
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 361.61   | 75.00      | 164.00 | 114.51
> NFD-large | 226.28   | 164.70     | 0.00   | 0.00
> 
> *Window decreases*
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 1.94     | 0.24       | 1.56   | 0.51
> NFD-large | 1.61     | 0.50       | 0.00   | 0.00
> 
> *RTT avg (ms)*
> SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> ----------|----------|------------|--------|-----------
> NFD-small | 164.87   | 22.42      | 153.04 | 23.14
> NFD-large | 174.48   | 18.36      | 8.19   | 0.11
> 
> Yours, Junxiao
> 
> On Mon, Feb 4, 2019 at 1:35 PM Chengyu Fan <chengy.fan at gmail.com 
> <mailto:chengy.fan at gmail.com>> wrote:
> 
> 
> 
>         the ndncatchunks file transfer tool achieved
>         <https://imsure.github.io/ndn-vegas-report.pdf> a 15~25Mbps
>         goodput, using his TCP Vegas congestion control scheme.
> 
> 
>     @Junxiao Shi <mailto:shijunxiao at email.arizona.edu> , could you tell
>     us more about the test, such as the topology, the machine
>     specification, nfd version, and CS is used or not?
> 
> 
> This is a report I found online. I do not know details. The related 
> GitHub repositories were last updated in Dec 2017.

I wouldn't put too much weight on the report that Junxiao has linked. 
There seems to be something wrong with its TCP Cubic implementation: how 
can it be slower than AIMD?

Here are some more recent measurements from the last Hackathon, which 
achieve higher throughput than 25Mbps with similar RTTs: 
https://redmine.named-data.net/attachments/download/878/catchunks_perf.pdf

As you observed below, catchunks is much faster on localhost than over a 
network link. I don't really know why that is, since in both cases 
(assuming the link bandwidth is high enough) the CPU seems to be the 
limiting factor.

Best regards,
Klaus


> 
>     My recent test, running ndncatchunks on nfd 0.6.4 (CS size was set
>     to 0) on my local laptop, shows around 400Mbps.
> 
> 
> Zero CS capacity doesn't make sense to me. With CS enabled, I got 
> comparable numbers: 383~394Mbps total for six simultaneous flows. If 
> Data come from CS instead of producers, it's even faster.
> 
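(For what it's worth, 383~394 Mbps over six flows is 383/6 ≈ 64 to 
394/6 ≈ 66 Mbps per flow, which lines up with the per-flow PROD averages 
in the goodput table above.)
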
>         NFD's forwarding performance was benchmarked
>         <https://redmine.named-data.net/issues/3564#note-24> in 2016 at
>         6.8K Interest-Data exchanges per second (tested with 10-byte
>         segments; with 1300-byte segments, that rate would be about
>         70Mbps of goodput); no newer benchmark reports are available.
> 
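Just to check the arithmetic: the 70Mbps figure is the quoted rate times 
the segment size, 6800 exchanges/s × 1300 bytes × 8 bits/byte ≈ 70.7 Mbps.
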
> 
>     The NFD benchmark was done for version 0.4.2.
>     I remember there were some improvements after the benchmark, and the
>     results should be better than 6.8K Interest-Data exchanges.
> 
> 
> Maybe, but there are no published benchmark results. I would say the 
> ndn-traffic-generator benchmarks are less relevant than the ndncatchunks 
> benchmarks, so there's no need to redo those.
> 
> 

