[Mini-NDN] [EXT]Re: Issues with NACKs

Andre adcarneiro at inf.ufrgs.br
Sun Feb 7 11:32:25 PST 2021


Hello Junxiao


Your guess was correct!! The producer application is killed after a 
while, though I'm still not sure why. I was able to verify that by 
running the experiment and checking the running processes with htop. I 
tried to catch any exceptions in the application or find any explicit 
exit points, but there aren't any. I also checked the syslog to try to 
find out why the producers are being killed, but it is still a mystery. 
I will write a follow-up email if I find out why, but in the meantime I 
might just create some sort of wrapper to relaunch the producers if 
they die, along the lines of the sketch below.
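
Something like this is what I have in mind. It is only a rough, 
untested sketch that assumes the producer is started as an ordinary 
command on the Mini-NDN host (the script name and the log file name 
are placeholders):

    #!/bin/bash
    # relaunch-producer.sh - restart the producer command whenever it exits
    # Usage: ./relaunch-producer.sh <producer-command> [args...]
    [ $# -ge 1 ] || { echo "usage: $0 <producer-command> [args...]" >&2; exit 1; }
    while true; do
        "$@"                       # run the producer in the foreground
        echo "$(date): producer exited with status $?, restarting" >> producer-restarts.log
        sleep 1                    # small back-off before relaunching
    done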

Also, on my other computer, this issue does not happen. The producer 
applications run all the way through the experiment, resulting in 0 
NACKs. Both setups use a VirtualBox VM created from the Vagrantfile, 
but they run on different hardware.


Thank you so much for the insight!


Best regards,

André Dexheimer Carneiro


On 2/4/21 1:40 PM, Junxiao Shi wrote:
> Hi Andre
>
> Each Nack reason has a different meaning. I have seen too many people 
> bashing Nack in general without understanding the difference between 
> reason codes.
>
> $ grep onIncomingNack $(find -name nfd.log) | awk '{ split($6,a,"~"); print a[length(a)] }' | sort | uniq -c
>     163 Duplicate
>   12648 NoRoute
> $ grep onOutgoingNack $(find -name nfd.log) | awk '{ split($6,a,"~"); print a[length(a)] }' | sort | uniq -c
>     161 Duplicate
>   15577 NoRoute
>
> I can see that you are getting mostly Nack~NoRoute packets. These are 
> generated if a forwarder receives an Interest but cannot find a 
> matching FIB entry.
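>
> As a quick sanity check (this is a general suggestion, not something 
> taken from your logs), you can confirm on the affected node whether 
> the prefix still has a FIB entry and a RIB route:
>
> $ nfdc fib list | grep /ndn/v1-site/v1
> $ nfdc route list | grep /ndn/v1-site/v1
>
> If the producer has gone away, both commands come back empty for that 
> prefix.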
>
> In v1/log/nfd.log, I found the following:
> 1612407736.138128 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/v1-site/v1/C2Data-1891-Type3
> 1612407736.138181 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/v1-site/v1/C2Data-1891-Type3
> 1612407736.138228 DEBUG: [nfd.Forwarder] onOutgoingNack out=269 nack=/ndn/v1-site/v1/C2Data-1891-Type3~NoRoute OK
> 1612407736.138329 DEBUG: [nfd.Forwarder] onInterestFinalize interest=/ndn/v1-site/v1/C2Data-1891-Type3 unsatisfied
>
> I see that face 274 is responsible for this prefix:
> 1612407674.566604  INFO: [nfd.RibManager] Adding route /ndn/v1-site/v1 nexthop=274 origin=app cost=0
>
> However, the face was closed before the above Interest:
> 1612407735.703876  INFO: [nfd.Transport] [id=274,local=unix:///run/v1.sock,remote=fd://164] setState UP -> CLOSING
> 1612407735.703902  INFO: [nfd.Transport] [id=274,local=unix:///run/v1.sock,remote=fd://164] setState CLOSING -> CLOSED
> 1612407735.704135  INFO: [nfd.FaceTable] Removed face id=274 remote=fd://164 local=unix:///run/v1.sock
> 1612407735.704327 DEBUG: [nfd.RibManager] Received notification for destroyed FaceId 274
> 1612407735.704432 DEBUG: [nfd.Readvertise] remove-route /ndn/v1-site/v1(274,app) not-readvertised
>
> My guess is that the producer application has crashed, so that all 
> subsequent Interests do not have a FIB match and thus get Nacks.
>
> Yours, Junxiao
>
> On Wed, Feb 3, 2021 at 10:31 PM Andre via Mini-NDN 
> <mini-ndn at lists.cs.ucla.edu> wrote:
>
>     I forgot to mention the NLSR sleep, though I did try a few
>     different values for it, between 40 and 200 seconds. Going lower
>     than that did make things worse, but within that range it does
>     not make much difference.
>
>
>     Testing a bit further now, it seems that the probability of
>     getting a NACK increased over time: pretty much all hosts got the
>     data in the first few transmissions and then started getting more
>     and more NACKs.
>
>
>     I tested some producer-consumer pairs in the CLI after the script
>     had run, and there does not seem to be an issue with connectivity
>     between the hosts, even in the cases where there were NACKs. Is
>     there another way to test it, or is this method right?
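>
>     To be clear about the method: what I mean by testing a pair from
>     the mini-ndn CLI is roughly the following, using ndnping (the
>     node names here are just placeholders for a producer/consumer
>     pair from my topology):
>
>     mini-ndn> a ndnpingserver /ndn/a-site/a &
>     mini-ndn> b ndnping -c 5 /ndn/a-site/a
>
>     The pings were answered even for pairs that had gotten NACKs
>     during the experiment.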
>
>
>     I attached the .conf topology file I'm using to the email as well
>     as the debug logs for NFD and NLSR for all nodes. There is also
>     this graph that represents the topology.
>
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: kjhlhpgnjpplpfhe.png
Type: image/png
Size: 167122 bytes
Desc: not available
URL: <http://www.lists.cs.ucla.edu/pipermail/mini-ndn/attachments/20210207/15a220c4/attachment-0001.png>

