[Ndn-interest] Maximum Throughput in NDN

Junxiao Shi shijunxiao at email.arizona.edu
Mon May 20 15:45:08 PDT 2019


Hi Davide & Klaus

> I have some recent numbers for ndncatchunks from April 2019. These were
> using NFD 0.6.5 and ndn-tools 0.6.3 on Ubuntu 16.04. Hardware is 2x Xeon
> E5-2640 and 32GB memory.
>
> The experiments should be repeated with ndn-tools 0.6.4, ndncatchunks uses
> a new CUBIC-like congestion control in that version.
>

Yes, I'll do that soon.
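
For context, the window growth curve that a "CUBIC-like" scheme follows
(per RFC 8312) is sketched below. The constants are the RFC defaults and
the numbers are made up; this illustrates the shape of the curve, not the
actual ndncatchunks implementation:

    // Sketch of CUBIC window growth (RFC 8312 defaults); illustrative only.
    #include <cmath>
    #include <iostream>

    int main()
    {
      const double C = 0.4;          // cubic scaling constant
      const double betaCubic = 0.7;  // multiplicative decrease factor
      double wMax = 100.0;           // hypothetical cwnd (segments) at last loss

      // Time (seconds) at which the window climbs back to wMax.
      double K = std::cbrt(wMax * (1 - betaCubic) / C);

      for (double t = 0; t <= 2 * K; t += K / 4) {
        double w = C * std::pow(t - K, 3) + wMax;  // concave, then convex
        std::cout << "t=" << t << "s cwnd=" << w << " segments\n";
      }
    }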

> Raw numbers are read from ndncatchunks directly. Reported numbers are
> average and stdev over 18 flows (6 flows per trial).
>
> As I already privately told Junxiao some time ago, I believe these numbers
> are rather inconclusive, but they do raise some eyebrows and should
> motivate further investigations into some peculiar (if not pathological)
> behaviors of NFD that should be fixed or improved. On the other hand, some
> results may need to be dismissed due to incorrect testing methodology.
>

Can you elaborate on what the "incorrect testing methodology" is?


> Btw, what do you mean "3 flows per trial"? If you start 6 prod/cons pairs,
> won't you get 6 concurrent flows?
>

It's a typo. It should be "6 flows per trial".

> Goodput (Mbps)
> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> > ----------|----------|------------|--------|-----------
> > NFD-small | 63.88    | 0.27       | 61.27  | 2.52
> > NFD-large | 65.68    | 1.55       | 85.17  | 1.91
>
> So the CS is only beneficial if you get a large percentage of hits...?
> (The NFD-large/CS case has 100% hit ratio)
>

Looking at the PROD avg column, NFD-small is slower than NFD-large because
the forwarding thread needs to spend time evicting CS entries.
The NFD-small CS avg column is possibly a 0% hit ratio, because the packets
at the front of each file have already been evicted by the time the second
retrievals start.
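
To make the eviction effect concrete, here is a toy model (hypothetical
sizes, plain FIFO eviction; NFD's CS uses a priority-FIFO policy, so this
is only an approximation). A cache smaller than one file gets zero hits on
a sequential second retrieval:

    // Toy FIFO cache: sequential re-read of a file larger than the cache.
    #include <cstddef>
    #include <deque>
    #include <iostream>
    #include <unordered_set>

    int main()
    {
      const std::size_t capacity = 1000;   // "NFD-small": CS < one file
      const std::size_t nSegments = 5000;  // segments in one file

      std::deque<std::size_t> fifo;            // eviction order, oldest first
      std::unordered_set<std::size_t> cache;

      auto insert = [&] (std::size_t seg) {
        if (cache.size() == capacity) {        // evict the oldest segment
          cache.erase(fifo.front());
          fifo.pop_front();
        }
        cache.insert(seg);
        fifo.push_back(seg);
      };

      // First retrieval: every segment is a miss and gets cached.
      for (std::size_t seg = 0; seg < nSegments; ++seg)
        insert(seg);

      // Second retrieval: the front of the file was evicted long ago, and
      // each new miss evicts a segment we are about to request again.
      std::size_t hits = 0;
      for (std::size_t seg = 0; seg < nSegments; ++seg) {
        if (cache.count(seg) > 0)
          ++hits;
        else
          insert(seg);
      }
      std::cout << "second-pass hit ratio: "
                << (100.0 * hits / nSegments) << "%\n";  // prints 0%
    }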

> Total number of lost/retransmitted segments
> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> > ----------|----------|------------|--------|-----------
> > NFD-small | 361.61   | 75.00      | 164.00 | 114.51
> > NFD-large | 226.28   | 164.70     | 0.00   | 0.00
>
> The stddev is huge on these numbers, which means either you need
> more trials or something weird is going on.
>

Yeah, they are unstable. I don't have an automated experiment runner yet,
so I could only do three manual trials.

> RTT avg (ms)
> > SETUP     | PROD avg | PROD stdev | CS avg | CS stdev
> > ----------|----------|------------|--------|-----------
> > NFD-small | 164.87   | 22.42      | 153.04 | 23.14
> > NFD-large | 174.48   | 18.36      | 8.19   | 0.11
>
> With the exception of NFD-large/CS (where IIUC no packets should
> be forwarded to the producers), the RTT looks quite bad, considering
> the link delay is effectively zero. My guess is that there is too
> much buffering/queueing somewhere.
>

The kernel has finite buffering, and Boost.Asio itself adds no buffering,
but NFD's StreamTransport has an unbounded send queue. #4499
<https://redmine.named-data.net/issues/4499> strikes again.
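
A quick back-of-the-envelope check supports this. With near-zero link
delay, almost all of the measured RTT must be queueing delay, so the
standing queue is roughly throughput * RTT (my arithmetic, using the
NFD-small/PROD averages from the tables above):

    // Implied standing queue from the measured numbers; illustrative only.
    #include <iostream>

    int main()
    {
      double goodputMbps = 64.0;  // ~ PROD avg goodput (table above)
      double rttMs = 165.0;       // ~ PROD avg RTT for NFD-small
      double queueBytes = (goodputMbps * 1e6 / 8) * (rttMs / 1e3);
      std::cout << "implied queue: " << queueBytes / 1e6 << " MB\n";
      // ~1.3 MB sitting in the unbounded send queue.
    }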

> If Data come from CS instead of producers, it's even faster.
>
> This is not true in general. In fact, it has been proven false more than
> once in the past.
>

Is that because the CS lookup is too slow?



> > i7-6600U has higher frequency (2.60GHz) than E5-2640 (2.50GHz).
>
> But you have 12 CPU cores vs 2 on my laptop.
>

12 cores won't help because NFD forwarding is single-threaded.
Also, the i7-6600U is the newer Skylake architecture, whereas the E5-2640 is
the older Sandy Bridge architecture.

> > Having only one flow does not reflect usual workload on a file server
> > or a web browser client.
>
> Yes, but 6 simultaneous flows should actually be *faster* in
> their combined throughput. At least that's the case when link bandwidth is
> the limiting factor.


Yes.


> Moreover, you can distribute the flows among different CPUs.
>

No. NFD forwarding is single-threaded, so the flows cannot be spread across
CPUs.

Yours, Junxiao