[Ndn-interest] Regarding RTT for ndncatchunks in raspberry pi
Mosko, Marc
mmosko at parc.com
Wed Jan 8 08:15:18 PST 2020
If you could, put a sniffer on the ethernet segment. Or you could run tcpdump on one of the systems, though this adds load to the systems under test. I've seen BeagleBoards and Pis throttle ethernet using flow control, likely due to system buffer issues. You could also check whether flow control is enabled via mii-tool. If you see ethernet flow-control packets on the segment, it is likely because the reader is not pulling packets out of the kernel fast enough, so the receiver sends pause frames.
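In case it helps, the checks above might look like the following (the interface name eth0 is an assumption; mii-tool and tcpdump generally need root, and the exact output varies by NIC driver):

```shell
# Is ethernet flow control negotiated on this link?
mii-tool -v eth0    # look for flow-control in the advertised / link partner abilities
ethtool -a eth0     # alternative: shows the RX/TX pause settings

# Capture only 802.3x PAUSE frames (MAC control protocol, ethertype 0x8808).
# Ideally run this on a mirror port or a third machine so the capture itself
# doesn't add load to the systems under test.
tcpdump -i eth0 -e 'ether proto 0x8808'
```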
From: Ndn-interest <ndn-interest-bounces at lists.cs.ucla.edu> on behalf of Athreya Nagaraj via Ndn-interest <ndn-interest at lists.cs.ucla.edu>
Reply-To: Athreya Nagaraj <indiathreya92 at gmail.com>
Date: Wednesday, January 8, 2020 at 4:45 AM
To: Davide Pesavento <davidepesa at gmail.com>, "nfd-dev at lists.cs.ucla.edu" <nfd-dev at lists.cs.ucla.edu>, "nfd-lib at lists.cs.ucla.edu" <nfd-lib at lists.cs.ucla.edu>
Cc: "Mohit P. Tahiliani" <tahiliani at nitk.edu.in>, ndn-interest <ndn-interest at lists.cs.ucla.edu>, Edward Lu via Ndn-lib <ndn-lib at lists.cs.ucla.edu>
Subject: Re: [Ndn-interest] Regarding RTT for ndncatchunks in raspberry pi
I have an update and another question on this topic. I have reduced the test bed to two RPis, one acting as the producer and the other as the consumer. I found that by limiting the Interest pipeline size to around 15, the RTT decreases enough to match the performance of TCP (without reducing throughput). However, I'm curious why this happens, and whether there are any configuration changes that would let me keep the Interest pipeline size above 100 with the same performance.
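For reference, the two cases I am comparing are invoked roughly like this (flag names as I recall them from ndn-tools; please check `ndncatchunks --help` on your version, and the /example prefix and file name are placeholders):

```shell
# Fixed pipeline capped at 15 Interests in flight (the case that matched TCP):
ndncatchunks --pipeline-type fixed --pipeline-size 15 /example/100MB.bin > out.bin

# Default adaptive pipeline, whose window can grow well beyond 100 Interests
# in flight (the case with the large RTTs):
ndncatchunks --pipeline-type cubic /example/100MB.bin > out.bin
```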
My wild guess is that this is due to buffer sizes. So regarding this, I have a question: when the ndn-cxx library writes a segment to the NIC buffer, does it copy the contents of the segment, or does it store a pointer to it in the queue? It would be great if someone could point me to where this is implemented in the ndn-cxx library or NFD.
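For what it's worth, here is a back-of-the-envelope model (my own toy arithmetic, not ndn-cxx code) of why a deep pipeline could inflate the measured RTT even without losses: if every segment waits behind the others in flight at a bottleneck buffer, the standing-queue drain time gets added to each RTT sample.

```python
# Hypothetical standing-queue model: with `window` Interests in flight, each
# returning a `seg_bytes`-byte Data packet over a bottleneck of `link_bps`
# bit/s, the measured RTT grows by roughly the time the queued Data takes to
# drain through the link.

def inflated_rtt_ms(base_rtt_ms, window, seg_bytes, link_bps):
    """Approximate RTT once `window` segments queue at the bottleneck."""
    drain_ms = window * seg_bytes * 8 / link_bps * 1000
    return base_rtt_ms + drain_ms

# ~4.4 kB per segment follows from the stats quoted below in this thread
# (104858 kB / 23832 segments); the 90 Mbit/s link rate and 5 ms base RTT
# are assumptions, not measurements.
for w in (15, 100):
    print(f"window={w}: ~{inflated_rtt_ms(5.0, w, 4400, 90e6):.1f} ms")
```

This would at least match the direction of the observation: a window of 15 adds only a few milliseconds of queueing, while a window of 100+ adds tens of milliseconds per sample.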
Thanks in advance
Athreya H N
On Thu, Jan 2, 2020 at 10:17 PM Athreya Nagaraj <indiathreya92 at gmail.com> wrote:
No. I've not increased the interest lifetime. Is there any other possible cause of such a high max rtt value?
On Thu, 2 Jan 2020 at 10:07 PM, Davide Pesavento <davidepesa at gmail.com> wrote:
Actually, ndncatchunks does not take RTT measurements for retransmitted segments. So a max RTT of more than 27 seconds looks very suspicious to me, unless you also increased the Interest lifetime.
On Thu, Jan 2, 2020 at 11:09 AM Lan Wang (lanwang) <lanwang at memphis.edu> wrote:
> I assume the min RTT 4.777ms is closer to the actual RTT. The RTT measurements from catchunks include the timeouts and retransmissions, so you can see the average and max are much larger.
> On Dec 30, 2019, at 11:25 PM, Athreya Nagaraj <indiathreya92 at gmail.com> wrote:
> Hi Lan
> Thank you for your response.
> Please find below the output of ndncatchunks during one of the experiments-
> All segments have been received.
> Time elapsed: 55.1154 seconds
> Segments received: 23832
> Transferred size: 104858 kB
> Goodput: 15.220085 Mbit/s
> Congestion marks: 69 (caused 5 window decreases)
> Timeouts: 414 (caused 5 window decreases)
> Retransmitted segments: 347 (1.43513%), skipped: 67
> RTT min/avg/max = 4.777/144.127/27253.940 ms
> On Mon, Dec 30, 2019 at 11:50 PM Lan Wang (lanwang) <lanwang at memphis.edu> wrote:
>> How did you measure the RTT during the catchunks transfer? Maybe you can send the catchunks output (or at least part of it)?
>> On Dec 29, 2019, at 9:58 PM, Athreya Nagaraj via Ndn-interest <ndn-interest at lists.cs.ucla.edu> wrote:
>> Hi all
>> I used the term 'bus topology', which is wrong. The topology I actually used is 4 Raspberry Pi devices connected by 3 point-to-point links in a linear chain. I've attached a representative topology diagram. I apologize for my mistake.
>> Thanks and Regards
>> Athreya H N
>> On Sun, Dec 29, 2019 at 10:39 PM Athreya Nagaraj <indiathreya92 at gmail.com> wrote:
>>> Hi all
>>> I'm a student working on an NDN testbed. The testbed consists of four Raspberry Pis connected in a bus topology. The two end devices act as the producer and consumer of NDN data, and the middle two act as routers. I use ndncatchunks to send a 100 MB file through the testbed. I observe that the RTT for this is significantly higher (around 10 times) than that of an FTP transfer on the same testbed. The throughput is also lower than FTP's (around 20% lower for NDN). I was wondering what could cause this difference.
>>> Also, another observation I made was that when I was testing the testbed setup with ndnping, the RTT was not so high.
>>> I have also previously worked on a similar topology with NDN, using desktop machines; in that case, NDN performed better than FTP.
>>> Any thoughts on what could be causing this?
>>> Thanks and Regards
>>> Athreya H N
>> <Untitled Diagram.png>
>> Ndn-interest mailing list
>> Ndn-interest at lists.cs.ucla.edu
Athreya H N