[Mini-NDN] Issues installing MiniNDN

Saurab Dulal (sdulal) sdulal at memphis.edu
Mon Sep 21 21:55:18 PDT 2020


________________________________
From: André D. Carneiro <adcarneiro at inf.ufrgs.br>
Sent: Sunday, September 20, 2020 6:30 PM
To: Saurab Dulal (sdulal) <sdulal at memphis.edu>
Subject: Re: [Mini-NDN] Issues installing MiniNDN


Thanks very much for your help; indeed, my installation was fine. I was not advertising the producer's prefix. I was just using the producer and consumer from the examples folder, slightly modified to capture transmission times and set interest filters from the command line, so I could use them from within the Python script (e.g. `host.cmd('producer /a')`).
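That pattern can be sketched as a small timing wrapper (the `timed_cmd` helper is hypothetical; `producer` and `/a` are from the setup described above, and `.cmd()` is the standard Mininet host method that Mini-NDN hosts expose):

```python
import time

def timed_cmd(host, command):
    """Run `command` on a node and return (output, elapsed seconds).

    `host` can be any object with a Mininet-style .cmd() method,
    e.g. a Mini-NDN host inside an experiment script."""
    start = time.time()
    out = host.cmd(command)
    return out, time.time() - start

# inside an experiment script, something like:
#   out, delay = timed_cmd(a, 'consumer /a')
```

Note that wall-clock timing around `.cmd()` includes command startup overhead, so it is only a rough proxy for transmission time.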


Running a few tests now, I noticed that when I advertise the producer's prefix and run a producer and a consumer with that same interest filter, the data is cached, so the next time the same consumer runs, the data is received with a much lower delay. However, if I advertise that same prefix (say `a nlsrc advertise /a`) and run a consumer with an interest filter such as `/a/data0`, the data is transmitted and received but not cached, as can be seen by the delay, which stays at around the same value, about 20 ms.


- I don't know if I understand this fully. AFAIK, the data should be cached even if your interest filter is on "/a/data0". I did some testing and saw the data cached in both cases. I assume you are only setting interest filters at the producer's node and not on the consumer? Regarding caching, the intermediate nodes will cache a copy of the data packet for a certain period, based on the caching policy.
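One way to check whether an intermediate node actually cached the data is to inspect its content store from the Mini-NDN CLI (a sketch; the node name `b` is hypothetical, and `nfdc cs info` is NFD's content-store status command):

```
mini-ndn> b nfdc cs info
```

The reported hit and miss counters should change when a repeated interest is satisfied from that node's cache rather than forwarded to the producer.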



Does this mean that every interest filter used by a consumer must be advertised using `nlsrc advertise` by the producer so it can be cached by other hosts/routers in the topology?



- nlsrc advertise means every other node in the network will create routes towards that prefix. If the prefix (say /a) is not advertised (or the routes are not established by any other means), none of the interests sent under that name (e.g. interest: /a) will reach the producer or be forwarded (the consumer will receive a NoRoute Nack). Regarding caching, if data comes back, by any means, it will be cached in the intermediate nodes regardless of the interest filter, unless it's unsolicited.
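As a concrete sketch of that flow (node names and the `/a` prefix are hypothetical; `nlsrc` is NLSR's control tool and `nfdc route list` shows a node's RIB):

```
mini-ndn> b nlsrc advertise /a
mini-ndn> a nfdc route list
```

After NLSR propagates the advertisement, node a's route list should include an entry towards /a; before it does, interests for /a from a would come back as no-route Nacks.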



- Saurab



Regards,

André Dexheimer Carneiro




On 20/09/2020 15:57, Saurab Dulal (sdulal) wrote:


________________________________
From: Mini-NDN <mini-ndn-bounces at lists.cs.ucla.edu> on behalf of Lan Wang (lanwang) <lanwang at memphis.edu>
Sent: Saturday, September 19, 2020 6:27 PM
To: Ritu Bordoloi via Mini-NDN <mini-ndn at lists.cs.ucla.edu>
Subject: [Mini-NDN] Issues installing MiniNDN


From: André D. Carneiro <adcarneiro at inf.ufrgs.br>
Subject: Issues installing MiniNDN
Date: September 19, 2020 at 4:34:30 PM CDT
To: mini-ndn at lists.cs.ucla.edu


Hi all,

I'm having trouble getting Mini-NDN up and running. I figured the easiest way to install it is by using the Vagrantfile. So I cloned the named-data/mini-ndn repo and ran `vagrant up` inside it, which created the virtual machine in VirtualBox. I then proceeded to install Mini-NDN inside it via `sudo ./install.sh -a`, which downloaded and compiled everything successfully.

The first thing I noticed is that it did not create a `minindn` symbolic link, which is not a real problem for me, because running the Python scripts inside the examples folder still starts Mini-NDN and lets me into the mini-ndn> CLI. I just thought it was curious.
Mini-NDN is not installed in the system anymore, so the minindn command doesn't exist. For more details please see the latest release notes: http://minindn.memphis.edu/release-notes.html#mini-ndn-version-0-5-0-major-changes-since-version-0-4-0

Anyway, the real problem is that when I run `sudo python examples/mnndn.py`, everything starts and I get into the CLI. But then, running pingall results in the following:

mini-ndn> pingall
    *** Ping: testing ping reachability
    a -> b c X
    b -> a X d
    c -> X X X
    d -> X X X
    *** Results: 66% dropped (4/12 received)
This is completely fine, not a problem at all for NDN-related stuff.

Quote from the previous email (Custom experiments in MiniNDN, Sending data, Error: No default identity):
- For every link, Mini-NDN creates a veth pair, meaning a-c is different from c-a. So, if you have an a-c link in the topology, you can do `a ping c` but not the other way around.

More important info:
https://redmine.named-data.net/issues/3054
https://redmine.named-data.net/issues/4069
This means that, for the given default topology, not every node can ping every other node, which is why we observe 66% dropped.
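The arithmetic behind that figure can be sketched in a few lines (the `drop_rate` function is hypothetical, not part of Mini-NDN; Mininet's pingall reports the percentage truncated to an integer):

```python
def drop_rate(received, hosts):
    """Percent of pings dropped in a full pingall among `hosts` nodes.

    Each node pings every other node once, so the total number of
    pings is hosts * (hosts - 1); the result is truncated like
    Mininet's report."""
    total = hosts * (hosts - 1)
    return int(100 * (total - received) / total)

print(drop_rate(4, 4))  # 4 of 12 received -> 66
```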
And after I run `sudo python examples/nlsr/pingall.py`, exit the CLI, and run `grep -c content /tmp/minindn/*/ping-data/*.txt`, as stated in the documentation, I get the following:

    /tmp/minindn/a/ping-data/b.txt:300
    /tmp/minindn/a/ping-data/c.txt:300
    /tmp/minindn/a/ping-data/d.txt:300
    /tmp/minindn/b/ping-data/a.txt:300
    /tmp/minindn/b/ping-data/c.txt:300
    /tmp/minindn/b/ping-data/d.txt:300
    /tmp/minindn/c/ping-data/a.txt:300
    /tmp/minindn/c/ping-data/b.txt:300
    /tmp/minindn/c/ping-data/d.txt:300
    /tmp/minindn/d/ping-data/a.txt:300
    /tmp/minindn/d/ping-data/b.txt:300
    /tmp/minindn/d/ping-data/c.txt:300

As opposed to the count of 50 that should be reported for each file.
I don't understand why you expect 50. The default value for nPings (see nlsr_common.py) is 300, meaning each node will ping (ndnping) every other node 300 times, and that's the number you are seeing there. Unless you have different settings, the above result looks good to me.
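For reference, the same count can be done from within a Python experiment script instead of with grep; a minimal sketch (the `count_successes` helper is hypothetical):

```python
from pathlib import Path

def count_successes(path):
    """Count lines containing 'content' in an ndnping log file,
    mirroring `grep -c content /tmp/minindn/*/ping-data/*.txt`
    (ndnping prints one 'content' line per data packet received)."""
    return sum("content" in line
               for line in Path(path).read_text().splitlines())
```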

Running `grep -c timeout /tmp/minindn/*/ping-data/*.txt` yields the apparently correct result of 0 for each file.

Note that I have already tried this same process on two different Linux machines here, with the same results.

I believe these problems are the cause for another error that I'm having in my experiments, where I get NACK for every consumer when trying to run a simple producer/consumer scenario.
Can you please provide more detail on this? Are you getting a no-route Nack? If so, have you advertised your producer's prefix using the `nlsrc advertise` command, or have you set up routes to the producer's prefix in some other way? For example, if a producer (say "b") is serving data under the '/example' prefix, you need to advertise it from node "b" (`b nlsrc advertise /example`). After the advertisement, with the help of NLSR, every other node will be able to reach this prefix, and you won't observe any no-route Nacks.
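A quick way to test that end to end, before debugging your own producer/consumer, is with the ping tools from ndn-tools (a sketch; node names and the `/example` prefix are illustrative):

```
mini-ndn> b ndnpingserver /example &
mini-ndn> b nlsrc advertise /example
mini-ndn> a ndnping -c 5 /example
```

If the pings succeed here but your own consumer still gets Nacks, the routing is fine and the problem is in the application; if even ndnping gets a no-route Nack, the prefix is not being advertised or propagated.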

Also, the documentation I refer to is this one: http://minindn.memphis.edu/install.html#using-install-sh


Best regards,
André Dexheimer Carneiro

Lastly, your Mini-NDN installation looks good to me.

Let us know if you are facing more issues.

Regards,
Saurab Dulal




_______________________________________________
Mini-NDN mailing list
Mini-NDN at lists.cs.ucla.edu
http://www.lists.cs.ucla.edu/mailman/listinfo/mini-ndn

