[Mini-NDN] [EXT]Re: Issues with NACKs

Junxiao Shi shijunxiao at email.arizona.edu
Thu Feb 4 08:40:59 PST 2021


Hi Andre

Each Nack reason has a different meaning. I have seen too many people
bashing Nack in general without understanding the difference between reason
codes.

$ grep onIncomingNack $(find -name nfd.log) | \
    awk '{ split($6,a,"~"); print a[length(a)] }' | sort | uniq -c
    163 Duplicate
  12648 NoRoute
$ grep onOutgoingNack $(find -name nfd.log) | \
    awk '{ split($6,a,"~"); print a[length(a)] }' | sort | uniq -c
    161 Duplicate
  15577 NoRoute

I can see that you are getting mostly Nack~NoRoute packets. These are
generated if a forwarder receives an Interest but cannot find a matching
FIB entry.
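If the emulation is still running, you can check the current state directly
with nfdc (a sketch; run these inside the affected node, e.g. from the
Mini-NDN CLI, so they talk to that node's NFD):

  nfdc fib list | grep /ndn/v1-site/v1
  nfdc route list | grep /ndn/v1-site/v1

If neither command shows the prefix, that matches the NoRoute Nacks above.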

In v1/log/nfd.log, I found the following:
1612407736.138128 DEBUG: [nfd.Forwarder] onIncomingInterest in=(269,0) interest=/ndn/v1-site/v1/C2Data-1891-Type3
1612407736.138181 DEBUG: [nfd.Forwarder] onContentStoreMiss interest=/ndn/v1-site/v1/C2Data-1891-Type3
1612407736.138228 DEBUG: [nfd.Forwarder] onOutgoingNack out=269 nack=/ndn/v1-site/v1/C2Data-1891-Type3~NoRoute OK
1612407736.138329 DEBUG: [nfd.Forwarder] onInterestFinalize interest=/ndn/v1-site/v1/C2Data-1891-Type3 unsatisfied

I see that face 274 is responsible for this prefix:
1612407674.566604  INFO: [nfd.RibManager] Adding route /ndn/v1-site/v1 nexthop=274 origin=app cost=0
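To see the full history of that route in one place, you can grep for it
(assuming the same log layout as above):

$ grep 'route /ndn/v1-site/v1' v1/log/nfd.log

This should show both the RibManager add and the later remove-route event.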

However, that face was closed before the above Interest arrived:
1612407735.703876  INFO: [nfd.Transport] [id=274,local=unix:///run/v1.sock,remote=fd://164] setState UP -> CLOSING
1612407735.703902  INFO: [nfd.Transport] [id=274,local=unix:///run/v1.sock,remote=fd://164] setState CLOSING -> CLOSED
1612407735.704135  INFO: [nfd.FaceTable] Removed face id=274 remote=fd://164 local=unix:///run/v1.sock
1612407735.704327 DEBUG: [nfd.RibManager] Received notification for destroyed FaceId 274
1612407735.704432 DEBUG: [nfd.Readvertise] remove-route /ndn/v1-site/v1(274,app) not-readvertised
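If you want to see the entire lifetime of that face (creation, state
changes, removal), filtering on its id works, e.g.:

$ grep -E 'id=274|FaceId 274' v1/log/nfd.log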

My guess is that the producer application has crashed, so all subsequent
Interests have no matching FIB entry and are answered with Nack~NoRoute.
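A quick way to narrow down which node is generating these (just a sketch
over the per-node nfd.log files you already have) is to count outgoing
NoRoute Nacks per log:

$ for f in $(find -name nfd.log); do
>   printf '%6d %s\n' "$(grep -c 'onOutgoingNack.*NoRoute' "$f")" "$f"
> done | sort -rn

The logs at the top of that list belong to the nodes whose NFD has no route
for the requested prefix; on those nodes, check whether the producer process
is still running and whether it re-registers its prefix.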

Yours, Junxiao

On Wed, Feb 3, 2021 at 10:31 PM Andre via Mini-NDN <
mini-ndn at lists.cs.ucla.edu> wrote:

> I forgot to mention the NLSR sleep, though I did try a few different values
> for it, between 40 and 200 seconds. Going lower than that made things worse,
> but within that range it does not make much of a difference.
>
>
> Testing a bit further now, it seems that the probability of getting a NACK
> increased over time: pretty much all hosts got the data in the first few
> transmissions and then started getting more and more NACKs.
>
>
> I tested some producer-consumer pairs in the CLI after the script had run,
> and there does not seem to be an issue with connectivity between the hosts,
> even in the cases where there were NACKs. Is there another way to test it,
> or is this method right?
>
>
> I attached to the email the .conf topology file I'm using, as well as the
> debug logs for NFD and NLSR for all nodes. There is also an attached graph
> that represents the topology.