[Nfd-dev] 1-to-Many NDN-RTC test and hub strategy

Gusev, Peter peter at remap.UCLA.edu
Fri May 27 17:55:44 PDT 2016

Hi Junxiao,

NFD CPU usage on the hub never exceeds 20%<http://ec2-52-90-158-238.compute-1.amazonaws.com:3000/dashboard/db/ndn-rtc-test-metrics?from=1464335773279&to=1464336374031&panelId=10&fullscreen> (though it increases somewhat logarithmically). All machines are t2.medium<https://aws.amazon.com/ec2/instance-types/> EC2 instances with 2 vCPUs (Intel Xeon, up to 3.3 GHz).

I’ll post tcpdump results as soon as I have them ready.


Peter Gusev

peter at remap.ucla.edu<mailto:peter at remap.ucla.edu>
+1 213 5872748
peetonn_ (skype)

Software Engineer/Programmer Analyst @ REMAP UCLA

Video streaming/ICN networks/Creative Development

On May 27, 2016, at 5:46 PM, Junxiao Shi <shijunxiao at email.arizona.edu<mailto:shijunxiao at email.arizona.edu>> wrote:

Hi Peter

In your setup, the FIB entry toward the producer has only one nexthop. As answered in this thread, the difference between the multicast strategy and the best-route v4 strategy is whether they permit consumer retransmissions: the multicast strategy does not permit any retransmission while the PIT entry is still pending; the best-route v4 strategy forwards the Interest again after a 10ms suppression period.
This is the primary reason we want to see the tcpdump trace, to examine the timing among Interests from multiple consumers and estimate how many extra Interests are forwarded to the producer.

The reason I'm suspecting "congestion" is that your Interest rate may have exceeded the processing power of NFD. This is not congestion on the network, but congestion caused by a CPU power limitation.
As benchmarked in #1819 note-2, NFD can process 3500 Interest-Data exchanges per second with 6-component names, no Selectors in Interests, and very small Data packets, or 1100 exchanges for 26-component names. Interests in your scenario have approximately 10 components, so I'd estimate a throughput of 2000 exchanges per second.
More importantly, some Interests have ChildSelector=rightmost, and each of those can consume more than 150ms as shown in #2626 note-2. With 9 consumers expressing such Interests once every 3000ms, almost half of the processing power is spent on those Interests. This leaves a throughput of about 1000 Interest-Data exchanges per second.
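As a back-of-the-envelope check of that CPU budget (a sketch; the 150ms cost per rightmost Interest and the ~2000 exchanges/sec baseline are the estimates quoted in this thread, not new measurements):

```python
# Rough CPU-budget check for the ChildSelector=rightmost estimate above.
# Figures come from the thread: 9 consumers, one rightmost Interest every
# 3000 ms each, ~150 ms of NFD CPU per such Interest (#2626 note-2),
# and a ~2000 exchanges/sec baseline for ~10-component names.

rightmost_per_sec = 9 / 3.0            # 3 rightmost Interests/sec in total
cpu_per_rightmost = 0.150              # seconds of CPU per rightmost Interest
busy_fraction = rightmost_per_sec * cpu_per_rightmost   # 0.45 -> "almost half"

baseline = 2000                        # exchanges/sec without rightmost traffic
remaining = baseline * (1 - busy_fraction)              # ~1100 -> "about 1000"

print(f"CPU spent on rightmost Interests: {busy_fraction:.0%}")
print(f"Throughput left for other Interests: ~{remaining:.0f} exchanges/sec")
```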

You have quoted 200Kbps per consumer times ten consumers.
With 8000-octet payload size, that's 32 Interest-Data exchanges per second. It would incur the overhead of NDNLPv2 reassembly, for which we don't have a benchmark result.
With 1200-octet payload size, that's 208 Interest-Data exchanges per second. There's no reassembly overhead.
In either case, this should be within the throughput limit, if each NFD instance is allocated one CPU core that is as good as ONL's CPU cores.
However, if each NFD instance only gets 50% of a CPU core, almost all this CPU time would be eaten by the ChildSelector=rightmost Interests, leaving very little processing power for other Interests.
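The rate arithmetic above can be verified quickly (a sketch; "Kbps" is taken here as 1000 bits/s, so the 8000-octet figure comes out at ~31 rather than the rounded 32 quoted above):

```python
# Interest-rate check: 10 consumers at 200 Kbps each (2 Mbps aggregate),
# one Interest-Data exchange per payload-sized Data packet.

aggregate_bps = 10 * 200_000           # 2 Mbps total; decimal Kbps assumed

for payload_octets in (8000, 1200):
    exchanges_per_sec = aggregate_bps / (payload_octets * 8)
    print(f"{payload_octets}-octet payload: {exchanges_per_sec:.1f} exchanges/sec")
```

Either way, the aggregate rate stays well under the ~1000 exchanges/sec budget estimated above.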

Can you confirm that the experiment environment has enough CPU cores, so that each NFD instance can use up to 100% of a CPU core? If not, congestion is more likely.
The reason for getting better performance with the multicast strategy is that it suppresses any retransmission regardless of timing, so fewer Interests are delivered to the producer, and thus less computation power is needed.
I'll also wait for nfd-status outputs and pcap trace, as requested in my last message.

Yours, Junxiao

On Fri, May 27, 2016 at 4:47 PM, Gusev, Peter <peter at remap.ucla.edu<mailto:peter at remap.ucla.edu>> wrote:
From this face counters snapshot, I can see:
20695 Interests are forwarded to the producer (face 263), and 10558 Data come back.
There are ten consumers. Face 271 receives 15872 Interests, and 9959 Data are returned to this consumer. The other nine consumer faces each receive only around 1000 Interests, and between 50 and 200 Data are returned.
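The counters quoted above work out to the following satisfaction ratios (a quick sketch using only the numbers in this message):

```python
# Interest-satisfaction ratios from the face counters quoted above.
# ~51% on the producer face means roughly half the Interests forwarded
# upstream never bring back a Data packet.

producer_out_interests, producer_in_data = 20695, 10558   # face 263 (producer)
consumer_in_interests, consumer_out_data = 15872, 9959    # face 271 (consumer)

print(f"producer face 263: {producer_in_data / producer_out_interests:.0%} satisfied")
print(f"consumer face 271: {consumer_out_data / consumer_in_interests:.0%} satisfied")
```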

I'm unsure whether there's a problem in forwarding, or it's simply a congestion.

I see two arguments supporting that this is not congestion:
1/ everything works fine when I enable the multicast strategy on the hub
2/ streams are around 200Kbps each, which translates to ~2Mbps for 10 consumers.
I’m not convinced that there could be congestion here.

The difference between the incoming Interest counter of face 271 and those of the other consumer faces is most concerning.

The difference is not striking to me, because the other 9 consumers never leave the bootstrapping phase - i.e. they continue to issue rightmost Interests at approximately 3000ms intervals. Only one of the consumers is in the Fetching phase, with an Interest rate of ~200 Interests/sec.

Can you add the following:
(a) nfd-status -f | grep udp4:// output on the consumers and the producer, along with nfd-status -fr output on the hub, captured at the same time or after traffic stops. This would show whether there's packet loss between the end hosts and the hub.

I can check it now.

(b) a tcpdump trace (in .pcap format) captured on the hub's NIC. This would capture the timing of packets, which allows us to analyze the behavior of NFD.

This will take me more time…
