[Mini-NDN] Setting cache size for hosts

Ashlesh Gawande (agawande) agawande at memphis.edu
Tue Nov 3 18:36:08 PST 2020


https://github.com/named-data/mini-ndn/blob/5dbf99dd7851b1bb3c83f5910b0078e44e8ec715/minindn/apps/nfd.py#L30

The cache size for NFD is controlled in the code, not in the topology file (set it to 0 to disable caching):
https://github.com/named-data/mini-ndn/blob/5dbf99dd7851b1bb3c83f5910b0078e44e8ec715/examples/mnndn.py#L43
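
Roughly like this (a minimal sketch along the lines of the linked example; keyword arguments given to AppManager are forwarded to the Nfd constructor, and csSize is the content store size in packets):

    from minindn.minindn import Minindn
    from minindn.apps.app_manager import AppManager
    from minindn.apps.nfd import Nfd

    ndn = Minindn()
    ndn.start()

    # Start NFD on every host with the content store disabled
    # (csSize=0); any positive value sets the maximum number of
    # packets the CS may hold.
    nfds = AppManager(ndn, ndn.net.hosts, Nfd, csSize=0)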

To set it differently on different nodes, pass lists of individual nodes instead of ndn.net.hosts above, IIRC.
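
Something like this (an untested sketch; the host names and the split into two AppManager calls just mirror your topology and are one way to do it):

    # Per-node cache sizes: start NFD separately on subsets of hosts.
    cachingHosts = [h for h in ndn.net.hosts if h.name in ('a1', 'd1')]
    noCacheHosts = [h for h in ndn.net.hosts if h.name in ('b1', 'c1')]

    nfdsCaching = AppManager(ndn, cachingHosts, Nfd, csSize=65536)
    nfdsNoCache = AppManager(ndn, noCacheHosts, Nfd, csSize=0)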

Ashlesh

________________________________
From: Mini-NDN <mini-ndn-bounces at lists.cs.ucla.edu> on behalf of Andre via Mini-NDN <mini-ndn at lists.cs.ucla.edu>
Sent: Tuesday, November 3, 2020 6:19:18 PM
To: mini-ndn at lists.cs.ucla.edu <mini-ndn at lists.cs.ucla.edu>
Subject: [Mini-NDN] Setting cache size for hosts

Hello everyone,


I am running a set of experiments to evaluate the impact of different
cache sizes on nodes by running consumers and producers around a set of
data with different payload sizes. To set the cache on the topology
file, I'm using the cache parameter as follows:

[nodes]
a1: _ radius=0.5 angle=2.64159265359
b1: _ cache=0 radius=0.6 angle=3.64159265359
c1: _ cache=1 radius=1 angle=1.57079632679
d1: _ radius=1 angle=4.71238898038
[links]
a1:b1 delay=10ms
a1:c1 delay=10ms
b1:d1 delay=500ms


The topology seems to be correctly parsed, as the hostnames match and
the first transmission times make sense. However, setting a node with
cache=0 or cache=1 does not seem to have any effect: the node still
keeps data in its local cache, resulting in very low transmission
times once the data has been received for the first time.

[main] reading results for consumer node=b1
[readResultFile] <Transmission> interest=/C2Data/a1/C2Data-1-Type1,
timeDiff=57186.000000, info=DATA, timeSinceEpoch=04/11/2020 02:01:05.653857
[readResultFile] <Transmission> interest=/C2Data/a1/C2Data-1-Type1,
timeDiff=599.000000, info=DATA, timeSinceEpoch=04/11/2020 02:01:09.212475
[readResultFile] <Transmission> interest=/C2Data/a1/C2Data-1-Type1,
timeDiff=683.000000, info=DATA, timeSinceEpoch=04/11/2020 02:01:10.579962
[readResultFile] <Transmission> interest=/C2Data/a1/C2Data-1-Type1,
timeDiff=704.000000, info=DATA, timeSinceEpoch=04/11/2020 02:01:13.073228


So I'm wondering if there is some conceptual fault in my setup, or
perhaps an error in the way I used the topology file (which I based
on the now-deprecated minindnedit) to set the cache.


Thanks in advance,

André D. Carneiro
