[Ndn-interest] NDN packet size
susmit at cs.colostate.edu
Tue Mar 28 17:52:15 PDT 2017
Not much to add here, but we did some experiments with large packet sizes. I
am currently uploading a tech report to the NDN website; I will send out a
link once it's up.
In theory, you should only need to change NDN_MAX_PACKET_SIZE in
tlv.hpp. I have tested this with packets of several hundred megabytes, so I
can confirm that large packets work with the current implementation. I have
not tested ndn-tools, so there might be a bug there; if so, please open a
bug report in Redmine.
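To illustrate the role that constant plays (the names below are hypothetical sketches, not the actual ndn-cxx API; in ndn-cxx the real constant lives in tlv.hpp as described above), a decoder typically rejects any packet whose encoded length exceeds the compiled-in maximum, so raising the constant is what lifts the cap:

```python
# Illustrative sketch of how a compile-time packet-size cap gates decoding.
# MAX_NDN_PACKET_SIZE and check_packet_size are hypothetical names used for
# illustration only, not the real ndn-cxx API.

MAX_NDN_PACKET_SIZE = 8800  # default practical limit, in octets

def check_packet_size(wire: bytes, limit: int = MAX_NDN_PACKET_SIZE) -> bool:
    """Return True if the encoded packet fits under the limit."""
    return len(wire) <= limit

# With the default limit, an 8 KiB payload plus headers still fits...
assert check_packet_size(b"\x06" + b"\x00" * 8192)
# ...but a 1 MB packet is rejected unless the constant is raised.
assert not check_packet_size(b"\x06" + b"\x00" * (1 << 20))
assert check_packet_size(b"\x06" + b"\x00" * (1 << 20), limit=2 * (1 << 20))
```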
The primary benefit of large packet sizes is lower overall packet signing
and verification cost. Throughput improves up to some large packet size and
then stays roughly constant. But as others mentioned before, memory
consumption needs to be taken into account: it increases linearly with
packet size, and so does packet processing time.
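A back-of-the-envelope model of that tradeoff (all constants below are made-up illustrative numbers, not measurements from our experiments): the fixed per-packet signing cost is amortized over the payload, so per-byte cost falls with packet size until the linear per-byte processing cost dominates.

```python
# Toy model: per-packet costs amortize over payload size, so throughput
# rises with packet size and then flattens. All numbers are hypothetical.

SIGN_COST_US = 500.0   # fixed signing/verification cost per packet (us)
PER_BYTE_US = 0.01     # processing cost per byte (us)

def time_per_byte_us(packet_size: int) -> float:
    """Total per-byte cost: amortized fixed overhead plus the linear part."""
    return SIGN_COST_US / packet_size + PER_BYTE_US

# Fixed overhead dominates small packets; for very large packets the
# per-byte term dominates and throughput stays roughly constant.
small = time_per_byte_us(1_000)      # 0.51 us/byte
large = time_per_byte_us(1_000_000)  # 0.0105 us/byte
assert small > 10 * large
```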
As for the relevant socket limits on Linux, they seem pretty large -
$ sysctl net.unix.max_dgram_qlen
net.unix.max_dgram_qlen = 512
$ sysctl net.core.wmem_max
net.core.wmem_max = 134217728
$ sysctl net.core.rmem_max
net.core.rmem_max = 134217728
On Tue, Mar 28, 2017 at 4:30 PM, Alex Horn <nano at remap.ucla.edu> wrote:
> I will also add that if you're doing UDP and want consistency over the
> existing Internet, I'd suggest 4k, as any UDP packets larger than that
> risk being filtered by existing IP-based systems.
> E.g., an early testbed ndnvideo app box would not route to the UIUC app box
> unless the packet size was 3800 bytes (enough for 4k overhead)
> On Tue, Mar 28, 2017 at 4:56 PM, Junxiao Shi <shijunxiao at email.arizona.edu
> > wrote:
>> Hi Klaus
>> The setting of 8800 octets is indeed based on the reasons given by Nick
>> Briggs, which he posted to the CCNx mailing list years ago. But that doesn't
>> answer why 8800 octets is a limit in the code rather than a recommendation.
>> The reason for having the practical limit is to reduce memory usage.
>> To receive a packet via socket API, NFD needs to allocate a buffer before
>> asking the kernel to copy the packet into this buffer. Since the packet
>> size is unknown at that time, NFD allocates a buffer of 8800 octets.
>> After the packet is received, assuming it is not fragmented by NDNLP, the
>> buffer stays around as long as the packet is needed (in PIT or CS), even if
>> the packet is much smaller than 8800 octets. The alternative would be
>> truncating the buffer to fit the actual packet size, but that involves
>> another copy, so we decided to save a copy at the expense of wasting
>> some memory (the difference between 8800 octets and the actual packet size).
>> If we increased the practical limit to 1 MB, NFD would allocate a 1 MB
>> buffer before receiving each packet. If most packets we are dealing with
>> are much smaller than 1 MB, a lot of memory will be wasted.
>> Yours, Junxiao
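Junxiao's memory argument can be quantified with a toy calculation (the packet counts and sizes here are illustrative assumptions, not measurements of NFD):

```python
# Toy estimate of memory wasted by fixed-size receive buffers, as described
# above: a full-limit buffer is allocated per packet and kept for the
# packet's lifetime, so waste per packet = limit - actual packet size.
# All figures are hypothetical.

def wasted_bytes(buffer_limit: int, packet_sizes: list[int]) -> int:
    """Total memory held beyond what the packets actually need."""
    return sum(buffer_limit - s for s in packet_sizes)

# 10,000 retained packets averaging ~1,500 octets each:
sizes = [1_500] * 10_000
waste_8800 = wasted_bytes(8_800, sizes)   # 73,000,000 bytes (~73 MB)
waste_1mb = wasted_bytes(1 << 20, sizes)  # ~10.5 GB
assert waste_8800 == (8_800 - 1_500) * 10_000
assert waste_1mb > 100 * waste_8800
```

With an 8800-octet limit the waste is tolerable; with a 1 MB limit the same workload would hold gigabytes of mostly-empty buffers, which is the point being made above.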
>> On Tue, Mar 28, 2017 at 2:40 PM, Nick Briggs <nicholas.h.briggs at gmail.com
>> > wrote:
>>> It gets you 8K bytes of data, along with necessary metadata and a name
>>> that isn't too long, in a Content packet.
>>> It can be encapsulated in a UDP packet with about 6 fragments when
>>> you're doing IP encapsulation (max UDP is 64K bytes).
>>> It fits within a 9000 byte jumbo ethernet frame if you're doing direct
>>> ethernet encapsulation.
>>> -- Nick Briggs
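Nick's figures check out with simple arithmetic (the MTU and header-overhead constants below are typical values assumed for illustration, not taken from his message):

```python
# Sanity-check the 8800-octet figure against common encapsulations.
# Overhead constants are typical assumed values.

import math

NDN_MAX = 8800          # practical NDN packet limit (octets)
ETH_JUMBO_MTU = 9000    # jumbo ethernet frame payload
IP_UDP_OVERHEAD = 28    # IPv4 header (20) + UDP header (8)
ETH_MTU = 1500          # standard ethernet MTU

# Fits in one 9000-byte jumbo frame with room for IP/UDP overhead:
assert NDN_MAX + IP_UDP_OVERHEAD <= ETH_JUMBO_MTU

# Over standard-MTU IPv4, an 8800-octet UDP datagram splits into about
# 6 IP fragments (each fragment carries up to MTU minus the 20-byte IP
# header; this rough count ignores the 8-octet fragment alignment rule):
fragments = math.ceil((NDN_MAX + IP_UDP_OVERHEAD) / (ETH_MTU - 20))
assert fragments == 6

# And it is well under the 64K maximum UDP datagram size:
assert NDN_MAX < 65_535
```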
>> On Tue, Mar 28, 2017 at 2:35 PM, Klaus Schneider <klaus at cs.arizona.edu> wrote:
>>> Btw, what was the reason for setting the default max packet size to 8800
>>> in the first place?
>>> Is there any drawback to increasing the default to, let's say, 1 megabyte?
>>> Best regards,
>> Ndn-interest mailing list
>> Ndn-interest at lists.cs.ucla.edu