[Nfd-dev] ndnputchunks memory usage increases linearly with MAX_NDN_PACKET_SIZE

Davide Pesavento davide.pesavento at lip6.fr
Thu Nov 16 21:40:02 PST 2017


Hi Susmit,

Are you saying that the memory usage increases by 1GB for each data
chunk? Or is it fixed at 1GB after ndnputchunks starts?

Keep in mind that the face on the application side has an internal
receive buffer of size MAX_NDN_PACKET_SIZE bytes (see
src/transport/stream-transport-impl.hpp in ndn-cxx), because it must
be prepared to receive a packet that large.
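
To illustrate the effect, here is a minimal sketch (not the actual
ndn-cxx code) of a transport whose receive buffer is allocated up front
and sized by MAX_NDN_PACKET_SIZE; the cost is paid once per face, no
matter how small the individual chunks are:

    #include <array>
    #include <cstddef>
    #include <cstdint>
    #include <memory>

    // Hypothetical value: MAX_NDN_PACKET_SIZE raised to 1 GB for the experiment.
    constexpr std::size_t MAX_NDN_PACKET_SIZE = 1024 * 1024 * 1024;

    class StreamTransportSketch
    {
    private:
      // One fixed-size input buffer per face, kept for the lifetime of the
      // transport, because any single incoming packet may legally be up to
      // MAX_NDN_PACKET_SIZE bytes long.
      std::array<uint8_t, MAX_NDN_PACKET_SIZE> m_inputBuffer;
    };

    int main()
    {
      // The object's footprint is dominated by the buffer (~1 GB here),
      // independent of the size of the data packets actually exchanged.
      auto transport = std::make_unique<StreamTransportSketch>();
      return 0;
    }

So if you only raised the constant, a roughly constant ~1 GB increase
right after ndnputchunks starts would be expected.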

Davide

On Thu, Nov 16, 2017 at 11:50 PM, Susmit <susmit at cs.colostate.edu> wrote:
> Hi All,
>
> I am a bit baffled by this:
>
> I found that the ndnputchunks producer's (in
> ndn-tools/tools/chunks/putchunks/producer.cpp) memory usage increases
> linearly with MAX_NDN_PACKET_SIZE in ndn-cxx. The growth seems to come
> from the block where data packets are assigned the final block ID and
> signed (line 134).
>
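
For context, the block in question typically looks roughly like the
following sketch, put together from the public ndn-cxx API; the names
and structure are illustrative, not the exact producer.cpp code:

    #include <ndn-cxx/data.hpp>
    #include <ndn-cxx/name.hpp>
    #include <ndn-cxx/security/key-chain.hpp>

    #include <cstddef>
    #include <cstdint>
    #include <istream>
    #include <memory>
    #include <vector>

    // Hedged sketch of a chunking producer: split the input stream into
    // segments, then assign the final block ID and sign every Data packet.
    std::vector<std::shared_ptr<ndn::Data>>
    populateStore(std::istream& is, const ndn::Name& versionedPrefix,
                  std::size_t maxSegmentSize, ndn::KeyChain& keyChain)
    {
      std::vector<std::shared_ptr<ndn::Data>> store;
      std::vector<uint8_t> buffer(maxSegmentSize);

      while (is.good()) {
        is.read(reinterpret_cast<char*>(buffer.data()), buffer.size());
        auto nRead = static_cast<std::size_t>(is.gcount());
        if (nRead == 0)
          break;
        auto data = std::make_shared<ndn::Data>(
          ndn::Name(versionedPrefix).appendSegment(store.size()));
        data->setContent(buffer.data(), nRead); // each segment stays small
        store.push_back(data);
      }

      if (!store.empty()) {
        // The kind of block described above: assign the final block ID,
        // then sign each packet; the encoded size stays ~maxSegmentSize.
        auto finalBlockId = ndn::name::Component::fromSegment(store.size() - 1);
        for (const auto& data : store) {
          data->setFinalBlockId(finalBlockId);
          keyChain.sign(*data);
        }
      }
      return store;
    }

Nothing in such a loop allocates anything proportional to
MAX_NDN_PACKET_SIZE, so the growth most likely comes from somewhere
else, e.g. the face's receive buffer mentioned above.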
> Here is the strange part: everything seems to be correct in
> chunk.cpp/chunk.hpp. I printed the encoded size of the data packet
> after it is signed, and it looks correct. I am running master for
> ndn-tools and 0.6.0 for ndn-cxx and NFD.
>
> Here are the steps to reproduce:
>
> 1. Change MAX_NDN_PACKET_SIZE to a large value, e.g., 1 GB (see the
> sketch after this list)
> 2. Create a 10 MB text file: base64 /dev/urandom | head -c 10000000 > 10MB.txt
> 3. Load that 10 MB file using a smaller chunk size:
> ndnputchunks -s 8800 /test < 10MB.txt
> 4. Watch your free memory: watch -n 1 free -m
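
Regarding step 1: the constant is defined in ndn-cxx (in
src/encoding/tlv.hpp in 0.6.0, if I recall correctly), and the change
might look roughly like this; ndn-cxx and everything linked against it
need to be rebuilt afterwards:

    // ndn-cxx, src/encoding/tlv.hpp (exact location/value may differ):
    // const size_t MAX_NDN_PACKET_SIZE = 8800;              // default
    const size_t MAX_NDN_PACKET_SIZE = 1024 * 1024 * 1024;   // raised to 1 GB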
>
> Can someone try to reproduce this?
>
> Thanks for your help!
>
> --
>
> ====================================
> http://www.cs.colostate.edu/~susmit
> ====================================