[Ndn-interest] Regarding RTT for ndncatchunks in raspberry pi

Athreya Nagaraj indiathreya92 at gmail.com
Sun Jan 19 22:13:29 PST 2020


Hi Davide

Thank you for your response. I would like to share what I learned from this
buffer-size experiment. When I increase the buffer size, the throughput does
improve, but only beyond a certain value (roughly a 40 MB buffer for a
100 MB file). From this, I believe the performance depends on both the file
size and the buffer size.
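For context, my understanding of the setting Davide pointed to is that it is
the libpcap capture buffer size, which has to be chosen before the pcap
handle is activated. Below is a rough standalone sketch based on the
standard libpcap API (the actual code and constant in NFD's pcap-helper.cpp
may differ); this buffer size is essentially the value I have been changing
and recompiling:

    // pcap-buffer-sketch.cpp
    // Rough sketch of setting the libpcap capture buffer size before
    // activating a handle. NFD reportedly hardcodes 4 MiB here; the real
    // pcap-helper.cpp differs in structure and error handling.
    // Build with: g++ pcap-buffer-sketch.cpp -o pcap-buffer-sketch -lpcap
    #include <pcap/pcap.h>
    #include <cstdio>

    int main(int argc, char* argv[])
    {
      const char* ifname = (argc > 1) ? argv[1] : "eth0";
      char errbuf[PCAP_ERRBUF_SIZE] = {0};

      pcap_t* handle = pcap_create(ifname, errbuf);
      if (handle == nullptr) {
        std::fprintf(stderr, "pcap_create: %s\n", errbuf);
        return 1;
      }

      // The knob in question: the kernel-side capture buffer, in bytes.
      // It must be set before pcap_activate(); try e.g. 4, 16, or 40 MiB.
      pcap_set_buffer_size(handle, 4 * 1024 * 1024);

      if (pcap_activate(handle) < 0) {
        std::fprintf(stderr, "pcap_activate: %s\n", pcap_geterr(handle));
        pcap_close(handle);
        return 1;
      }

      std::printf("%s: handle activated with the requested buffer size\n",
                  ifname);
      pcap_close(handle);
      return 0;
    }

If I read the code correctly, each ethernet face gets its own handle, so a
larger value mainly trades memory per face for fewer drops under bursty
traffic, which fits Davide's note about packet loss with small buffers.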

I also tried another experiment in which I disabled tc in the TCP/IP stack
and transferred the same files. TCP's throughput then dropped to roughly the
same level as NFD's. Does NFD use tc? Sorry if the question sounds silly,
but I want to be sure whether the tc layer might be the cause.

Thanks and regards
Athreya H N

On Fri, Jan 17, 2020 at 4:45 AM Davide Pesavento <davidepesa at gmail.com>
wrote:

> The buffer size used by ethernet faces was recently increased to 4
> MiB. You may want to play around with that number and see how it
> affects latency and throughput. The setting is currently hardcoded so
> you need to recompile NFD every time you change it:
>
> https://github.com/named-data/NFD/blob/fb034219ef91ecea6a104bbaaa9310b16c40c977/daemon/face/pcap-helper.cpp#L55
> Note, however, that if the buffer is too small you will incur higher
> packet loss in case of, say, bursty traffic.
>
> Davide
>
> On Wed, Jan 8, 2020 at 11:47 AM Athreya Nagaraj <indiathreya92 at gmail.com>
> wrote:
> >
> > Hi
> >
> > I'm not using the fixed-size pipeline because it doesn't collect RTT
> information. Instead, I changed the cubic class so that once the cwnd value
> reaches 15, it does not increase any further.
> >
> > I'm using an ethernet face type. I pulled the latest code of ndn-cxx,
> NFD, and ndn-tools from git just a few days ago and built them.
> >
> > On Wed, 8 Jan 2020 at 9:50 PM, Davide Pesavento <davidepesa at gmail.com>
> wrote:
> >>
> >> What do you mean by "limiting the value of interest pipeline"? Are you
> >> using the fixed-size pipeline with ndncatchunks?
> >>
> >> Also, what is the type of the face between the Pis? Ethernet or UDP?
> >> And what versions of ndn-tools and NFD are you using?
> >>
> >> Thanks,
> >> Davide
> >>
> >> On Wed, Jan 8, 2020 at 7:45 AM Athreya Nagaraj <indiathreya92 at gmail.com>
> wrote:
> >> >
> >> > Hello all
> >> >
> >> > I have an update and another question on this topic. I have reduced
> the testbed to two RPis, one acting as the producer and the other as the
> consumer. I found that by limiting the interest pipeline to around 15, the
> RTT decreases enough to match the performance of TCP (without reducing
> throughput). However, I'm curious why this happens, and whether there are
> any configuration changes that would let me keep the interest pipeline
> value over 100 and still get the same performance.
> >> >
> >> > My wild guess is that this is due to the buffer size, so I have a
> question about it. When the ndn-cxx library writes a segment to the NIC
> buffer, does it copy the contents of the segment or only store a pointer to
> it in the queue? It would be great if someone could point me to where this
> is implemented in the ndn-cxx library or in NFD.
> >> >
> >> > Thanks in advance
> >> > Athreya H N
> >> >
> >> > On Thu, Jan 2, 2020 at 10:17 PM Athreya Nagaraj <
> indiathreya92 at gmail.com> wrote:
> >> >>
> >> >> No. I've not increased the interest lifetime. Is there any other
> possible cause of such a high max rtt value?
> >> >>
> >> >> On Thu, 2 Jan 2020 at 10:07 PM, Davide Pesavento <
> davidepesa at gmail.com> wrote:
> >> >>>
> >> >>> Actually, ndncatchunks does not take RTT measurements for
> >> >>> retransmitted segments. So a max RTT of more than 27 seconds looks
> >> >>> very suspicious to me, unless you also increased the Interest
> >> >>> lifetime.
> >> >>>
> >> >>> Davide
> >> >>>
> >> >>> On Thu, Jan 2, 2020 at 11:09 AM Lan Wang (lanwang) <
> lanwang at memphis.edu> wrote:
> >> >>> >
> >> >>> > I assume the min RTT 4.777ms is closer to the actual RTT.   The
> RTT measurements from catchunks include the timeouts and retransmissions,
> so you can see the average and max are much larger.
> >> >>> >
> >> >>> > Lan
> >> >>> >
> >> >>> > On Dec 30, 2019, at 11:25 PM, Athreya Nagaraj <
> indiathreya92 at gmail.com> wrote:
> >> >>> >
> >> >>> > Hi Lan
> >> >>> >
> >> >>> > Thank you for your response.
> >> >>> >
> >> >>> > Please find below the output of ndncatchunks from one of the
> experiments:
> >> >>> >
> >> >>> > All segments have been received.
> >> >>> > Time elapsed: 55.1154 seconds
> >> >>> > Segments received: 23832
> >> >>> > Transferred size: 104858 kB
> >> >>> > Goodput: 15.220085 Mbit/s
> >> >>> > Congestion marks: 69 (caused 5 window decreases)
> >> >>> > Timeouts: 414 (caused 5 window decreases)
> >> >>> > Retransmitted segments: 347 (1.43513%), skipped: 67
> >> >>> > RTT min/avg/max = 4.777/144.127/27253.940 ms
> >> >>> >
> >> >>> > On Mon, Dec 30, 2019 at 11:50 PM Lan Wang (lanwang) <
> lanwang at memphis.edu> wrote:
> >> >>> >>
> >> >>> >> How did you measure the RTT during the catchunks transfer?
> Maybe you can send the catchunks output (or at least part of it)?
> >> >>> >>
> >> >>> >> Lan
> >> >>> >>
> >> >>> >> On Dec 29, 2019, at 9:58 PM, Athreya Nagaraj via Ndn-interest <
> ndn-interest at lists.cs.ucla.edu> wrote:
> >> >>> >>
> >> >>> >> Hi all
> >> >>> >>
> >> >>> >> I used the term 'bus topology', which is wrong. The topology I
> actually used is four Raspberry Pi devices connected via three
> point-to-point links in a linear fashion. I've attached a representative
> topology diagram. I apologize for my mistake.
> >> >>> >>
> >> >>> >> Thanks and Regards
> >> >>> >> Athreya H N
> >> >>> >>
> >> >>> >> On Sun, Dec 29, 2019 at 10:39 PM Athreya Nagaraj <
> indiathreya92 at gmail.com> wrote:
> >> >>> >>>
> >> >>> >>> Hi all
> >> >>> >>>
> >> >>> >>> I'm a student working on an NDN testbed. The testbed consists of
> four Raspberry Pis connected in a bus topology. The two end devices act as
> producer and consumer for NDN data, and the two middle devices act as
> routers. I use ndncatchunks to send a 100 MB file through the testbed. I
> observe that the RTT is significantly higher (around 10 times higher) than
> that of an FTP application on the same testbed. The throughput is also
> lower than FTP's (around 20% lower for NDN). I was wondering what could
> cause this difference.
> >> >>> >>>
> >> >>> >>> Also, another observation I made was that when I was testing
> the testbed setup with ndnping, the RTT was not so high.
> >> >>> >>>
> >> >>> >>> I have also previously worked on a similar topology with NDN
> where the machines were desktops; in that case, NDN performed better than
> FTP.
> >> >>> >>>
> >> >>> >>> Any thoughts on what could be causing this?
> >> >>> >>>
> >> >>> >>> Thanks and Regards
> >> >>> >>> Athreya H N
> >> >>> >>
> >> >>> >> <Untitled Diagram.png>
> >> >>> >
> >> >>
> >> >> --
> >> >> Regards,
> >> >> Athreya H N
> >
> > --
> > Regards,
> > Athreya H N
>