[Ndn-interest] Sending files with NDN (putchunks/ndnpoke)

Klaus Schneider klaus at cs.arizona.edu
Thu Apr 26 01:51:29 PDT 2018


I think it might be useful to run the tests with larger files or for
longer durations.

With durations of only 0.5 s to 1.0 s, you are mostly seeing the effects
of the slow-start phase. This means that 1) it takes a while for the AIMD
pipeline to reach the full bandwidth, so the observed throughput is lower
than in longer runs.

And 2) the slow start might overshoot the available bandwidth, causing 
packet losses that happen only once during the whole file transfer.
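
To make point 1) concrete, here is a rough back-of-the-envelope sketch
(not the ndncatchunks implementation; the steady-state window of 64
segments, the 1 ms RTT, and the ~4.4 KB chunk size are assumptions chosen
only for illustration):

    # Sketch only: how much the slow-start phase drags down the average
    # rate of a short transfer. All parameters are illustrative assumptions.
    CHUNK_BITS = 4.4e3 * 8        # ~4.4 KB per segment
    RTT = 1e-3                    # 1 ms round-trip time
    STEADY_WINDOW = 64            # assumed steady-state window (segments)
    TOTAL_SEGMENTS = 954          # as in the 4 MB transfer below

    window, sent, elapsed = 1, 0, 0.0
    while sent < TOTAL_SEGMENTS:
        sent += min(window, TOTAL_SEGMENTS - sent)   # one window per RTT
        elapsed += RTT
        window = min(window * 2, STEADY_WINDOW)      # slow start, capped

    avg_rate = TOTAL_SEGMENTS * CHUNK_BITS / elapsed
    steady_rate = STEADY_WINDOW * CHUNK_BITS / RTT
    print(f"short transfer: {avg_rate / steady_rate:.0%} of steady-state rate")

With these assumptions the short transfer averages only about 75% of the
steady-state rate; for a file ten times larger the same sketch gives about
96%, which is why longer runs report higher goodput.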


For the fixed pipeline, it seems that even a pipeline size of 1 might be
too high for your example, since you have a very small bandwidth-delay
product (the link delay is less than 1 ms).

The rate is roughly the pipeline size (1) times the chunk size (4.4 KB)
divided by the RTT (minRTT = 1 ms), which gives roughly 35 Mbit/s.

If your link doesn't give you more than 35 Mbit/s, I'd recommend
decreasing the chunk size (ndnputchunks --size X).
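
As a quick sanity check of that estimate (plain arithmetic, nothing
specific to ndncatchunks; the chunk size is derived from the results
below):

    # rate ~ pipeline_size * chunk_size / RTT
    pipeline_size = 1                 # --pipeline-size 1
    chunk_bytes = 4194.3e3 / 954      # ~4.4 KB per segment (total size / segments)
    rtt = 1e-3                        # minRTT ~ 1 ms
    rate_mbit = pipeline_size * chunk_bytes * 8 / rtt / 1e6
    print(f"{rate_mbit:.1f} Mbit/s")  # ~35.2 Mbit/s, close to the measured
                                      # fixed-pipeline goodput of ~35.6 Mbit/s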

Best regards,
Klaus


CCing Lan and Nick, since they had a similar problem.


On 26/04/18 01:31, César A. Bernardini wrote:
> I repeated the tests. (Of course, I have thousands of lines like these;
> I just picked a few that seemed representative.)
> 
> These are the results:
> 
> TCP - ndncatchunks -d fixed /cesar --pipeline-type fixed
> All segments have been received.
> Time elapsed: 942.802 milliseconds
> Total # of segments received: 954
> Total size: 4194.3kB
> Goodput: 35.590116 Mbit/s
> 
> TCP - ndncatchunks -d fixed /cesar
> All segments have been received.
> Time elapsed: 590.845 milliseconds
> Total # of segments received: 954
> Total size: 4194.3kB
> Goodput: 56.790550 Mbit/s
> Total # of lost/retransmitted segments: 168 (caused 0 window decreases)
> Packet loss rate: 14.9866%
> Total # of received congestion marks: 2
> RTT min/avg/max = 0.857/90.039/285.858 ms
> 
> UDP - ndncatchunks -d fixed /cesar --pipeline-type fixed
> All segments have been received.
> Time elapsed: 922.561 milliseconds
> Total # of segments received: 954
> Total size: 4194.3kB
> Goodput: 36.370948 Mbit/s
> 
> UDP - ndncatchunks -d fixed /cesar
> All segments have been received.
> Time elapsed: 620.587 milliseconds
> Total # of segments received: 954
> Total size: 4194.3kB
> Goodput: 54.068881 Mbit/s
> Total # of lost/retransmitted segments: 381 (caused 0 window decreases)
> Packet loss rate: 28.5607%
> Total # of received congestion marks: 1
> RTT min/avg/max = 0.922/32.706/50.670 ms
> 
> Cheers,
> 
> 
> 2018-04-25 9:17 GMT+02:00 Klaus Schneider <klaus at cs.arizona.edu>:
> 
> 
> 
>     On 25/04/18 00:09, César A. Bernardini wrote:
> 
>         Hi Junxiao,
> 
>         Thanks for the comments. I have tried running NDN and exchanging
>         contents over UDP this time -- with and without the
>         --pipeline-type fixed option. The results were quite poor when
>         exchanging a 1 MB file:
> 
>         The end-user delay that I previously measured at 0.200 s with
>         NDN/TCP is now 0.770 seconds over UDP (and with --pipeline-type
>         fixed it increased to 0.95 s). I repeated the experiments on
>         different days, but I am still getting these numbers.
> 
>         Any idea?
> 
> 
>     The fixed pipeline is expected to get worse results than
>     pipeline-aimd, since it does not adjust to the available link
>     bandwidth.
> 
>     Either the pipeline size is too high, in which case you'll see high
>     delay and many packet drops, or the pipeline size is too low, in
>     which case you're not using the full bandwidth.
> 
>     For you it seems to be the former case. Maybe reduce it with the
>     "--pipeline-size" option. If --pipeline-size=1 is still too high,
>     you might want to add some link delay.
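> 
>     As a rough rule of thumb (a sketch with assumed link parameters, not
>     output from ndncatchunks): the pipeline size that just fills the link
>     is the bandwidth-delay product divided by the chunk size.
> 
>         # Illustrative only: with an assumed 30 Mbit/s link and a 1 ms RTT,
>         # how many ~4.4 KB segments can be in flight at once?
>         link_bits_per_s = 30e6            # assumed link bandwidth
>         rtt = 1e-3                        # assumed round-trip time (1 ms)
>         chunk_bytes = 4.4e3               # ~4.4 KB segments
>         bdp_bytes = link_bits_per_s / 8 * rtt
>         print(bdp_bytes / chunk_bytes)    # ~0.85: less than one chunk fits, so
>                                           # even --pipeline-size=1 overshoots;
>                                           # smaller chunks or extra link delay help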
> 
>     Best regards,
>     Klaus
> 
> 
> 
> 
>         2018-04-23 16:16 GMT+02:00 Junxiao Shi
>         <shijunxiao at email.arizona.edu>:
> 
>              Hi Cesar
> 
>                  I kept checking on the example and found out that the
>                  speed problem is due to an optimization that happens in
>                  the ESXi host. ESXi converts all the traffic into local
>                  traffic and everything becomes just memory copies. That
>                  explains one of the problems I had.
> 
>              That's a trick in the hypervisor. It won't happen in a real
>              network across devices.
> 
> 
>                  I also went a bit further with the analysis of the NDN
>                  traffic and figured out that there are two congestion
>                  protocols: TCP plus the congestion protocol provided by
>                  the University of Arizona. I understand that it was
>                  developed because TCP was not optimized for ICN. But is
>                  this protocol so necessary that it is included by default
>                  for all ICN users?
> 
>              NDN is not meant to be used over TCP. Use Ethernet or UDP
>              instead.
> 
> 
>                  Shouldn't we create a patch and enable it at compilation
>                  time only when needed?
> 
>              You can disable AIMD congestion control in ndncatchunks at
>              runtime with a command-line option: --pipeline-type fixed.
>              There's no need for re-compiling.
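> 
>              For example (the /example/data prefix is just a placeholder;
>              the retrieved content is written to stdout):
> 
>                  ndncatchunks --pipeline-type fixed /example/data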
> 
>              Yours, Junxiao
> 
> 
> 

