[ndnSIM] Memory and Lfu (replacement policy of Content Store)
alexander.afanasyev at ucla.edu
Tue Mar 19 07:15:11 PDT 2013
The PIT could also have contributed to large (and growing) memory utilization. Are Interests in your scenario getting satisfied? You can try setting a limit on PIT size, for example ndn::StackHelper::SetPit("ns3::ndn::pit::Persistent", "MaxSize", "5000"), and see how memory behaves.
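A minimal scenario sketch of the suggestion above, using the ndnSIM 1.x StackHelper API (topology setup is elided; the attribute values are only illustrative):

```cpp
// Sketch: cap the PIT so unsatisfied Interests cannot grow memory unboundedly
// (assumes a standard ndnSIM 1.x scenario; topology/apps omitted for brevity)
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/ndnSIM-module.h"

using namespace ns3;

int main(int argc, char* argv[])
{
  NodeContainer nodes;
  nodes.Create(3);
  // ... create point-to-point links, install apps, etc. ...

  ndn::StackHelper ndnHelper;
  // Persistent PIT with at most 5000 entries per node
  ndnHelper.SetPit("ns3::ndn::pit::Persistent", "MaxSize", "5000");
  ndnHelper.InstallAll();

  Simulator::Run();
  Simulator::Destroy();
  return 0;
}
```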
I have fully tested the memory utilization tool only on a Mac, so it may need adjustments to run on other platforms (in the master branch, the non-OSX code is commented out because of compatibility issues that have not yet been resolved).
Thanks for your suggestion. Caching (relatively long-term) / buffering (short-term, to recover from "last-hop" losses and prevent re-expressed Interests from going back to the producer) are essential components of the NDN architecture, and the more options we have in ndnSIM, the more clearly we can see what works (under which assumptions/environments) and what does not. Would you be willing to implement such a random caching policy? There is already an implementation of a random cache replacement policy (ns3::ndn::cs::Random using utils/trie/random-policy.h), but it is rather simplistic and does not allow one to configure probabilities.
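For reference, the existing random replacement policy mentioned above can be selected like any other Content Store type through the ndnSIM 1.x StackHelper (a configuration fragment, not a complete scenario; the 5000-entry size is only an example):

```cpp
// Fragment: select the existing random-replacement Content Store.
// Note this randomizes *eviction*; a probabilistic *admission* policy
// (cache with probability p) would still need to be implemented.
ndn::StackHelper ndnHelper;
ndnHelper.SetContentStore("ns3::ndn::cs::Random", "MaxSize", "5000");
ndnHelper.InstallAll();
```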
On Mar 19, 2013, at 3:52 AM, Saran Tarnoi <sarantarnoi at gmail.com> wrote:
> Hi Alex and everyone,
> Sorry for the late reply.
> In my simulation, there are 100 nodes.
> The capacity of the CS of each node is 5,000.
> There are 1,000,000 different content objects.
> According to your guide, the required memory should be 702 bytes*5000*100 = 351 MB.
> I don't know why it takes more than 8 GB.
> In addition, the required memory increases as a function of the simulation time.
> I have tried your suggested example.
> I had no luck: the MemUsage column consistently shows "-9.53674e-07."
> I will try to see what I can do.
> Oh, I have a suggestion for additional function in ndnSIM.
> In the current version of ndnSIM, I understand that every content object is cached at every node it traverses.
> It may be good if each node could cache a content object with a certain probability.
> Please feel free to let me know what you think.
> Thanks for your help.
> Saran Tarnoi
> 2013/2/20 Alex Afanasyev <alexander.afanasyev at ucla.edu>
> Hi Saran,
> How many nodes are in your simulation? I just checked memory overhead for different replacement policies and they are about the same (with Lfu taking a little bit more). As for numbers, one CS entry in the current implementation corresponds to about 708 bytes memory footprint with Lfu policy (with freshness) and about 650 bytes with Lru policy. (I haven't yet had time to investigate why so much, as I was expecting about 10 times smaller footprint.)
> You can get commit 41684ab625b165, which gives an example of how to evaluate memory footprint (ndn-simple-with-cs-lfu.cc)
> On Feb 19, 2013, at 5:02 AM, Saran Tarnoi <sarantarnoi at gmail.com> wrote:
>> To Alex and everyone,
>> I conducted a simulation to see how Lfu performs.
>> Content number is set as "100,000."
>> It appeared that the simulator consumed more memory than my laptop could provide (4 GB RAM), and then my laptop stopped working.
>> The problem did not appear when I used the other replacement policies (Lru, Random) for the Content Store.
>> Would you kindly give me some ideas?
> Saran Tarnoi
> Graduate Student
> Department of Informatics
> The Graduate University for Advanced Studies (Sokendai)
> Tokyo, Japan
> ndnSIM mailing list
> ndnSIM at lists.cs.ucla.edu