[Nfd-dev] Re-presenting the case against nonces

Ignacio.Solis at parc.com
Sun Mar 1 17:25:55 PST 2015


It is evident from these past email threads that nonces are still causing
problems for NDN.  Isn't it time to re-evaluate their use?

I would be one of the people who have argued that this is an
architectural problem.

From the emails (and links) it has been stated that nonces have issues
detecting loops when the loop takes longer than the lifetime of the PIT
entry.  This is considering the simple case of interests that do not fork.
The only way to make nonce-based loop detection reliable is to keep really
long-lived PIT entries.  I would argue this won't be feasible, and it
becomes an attack vector.

In the non-simple case of an interest being forked with the same nonce,
you may have a loop where the PIT entry has been satisfied and no longer
exists by the time the looped interest arrives.  The current proposal to
solve this is to keep nonces around for a long time.  Again, this
increases the memory usage at a router, and you have to guess how long a
loop can take.
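
As a concrete illustration, here is a minimal sketch of nonce-based
duplicate detection with a finite retention window (illustrative Python,
not NFD code; the node structure and the 4-second window are assumptions).
A loop shorter than the window is caught; a loop longer than the window
escapes detection:

RETENTION = 4.0  # seconds a nonce is remembered (assumed window)

class Node:
    def __init__(self, name):
        self.name = name
        self.seen = {}  # nonce -> time the nonce was last recorded

    def receive(self, nonce, now):
        last = self.seen.get(nonce)
        if last is not None and now - last < RETENTION:
            return "DROP (duplicate nonce)"
        self.seen[nonce] = now
        return "FORWARD"

node = Node("A")
print(node.receive(0x2A, now=0.0))  # FORWARD: first time the nonce is seen
print(node.receive(0x2A, now=2.0))  # DROP: loop shorter than the window
print(node.receive(0x2A, now=9.0))  # FORWARD: longer loop goes undetected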

There is a second issue, which arises from nodes dropping an interest if
its nonce has already been seen.  Whether the second nonce comes from a
looped interest or from an interest that was forked in the past, there is
a problem with interest aggregation.  Any interest that has been
aggregated into a PIT entry created by the original interest (the forked
one with the original nonce) is at risk of being black-holed when the
repeated nonce is dropped.
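
Here is a small sketch of that black-hole scenario (illustrative Python,
not NFD code; the face names and nonce value are made up): the forked
copy that arrives second is dropped instead of being aggregated, so its
downstream never receives the Data.

pit = {}  # name -> {"nonces": seen nonces, "in_faces": downstream faces}

def on_interest(name, nonce, in_face):
    entry = pit.setdefault(name, {"nonces": set(), "in_faces": set()})
    if nonce in entry["nonces"]:
        # A forked copy arriving via another path is dropped, so in_face
        # is never recorded as a downstream of this PIT entry.
        return "drop duplicate nonce from " + in_face
    entry["nonces"].add(nonce)
    entry["in_faces"].add(in_face)
    return "forward (downstreams=%s)" % sorted(entry["in_faces"])

def on_data(name):
    entry = pit.pop(name)
    return "send data to %s" % sorted(entry["in_faces"])

print(on_interest("/ndn/example", 0x2A, "faceA"))  # forwarded, faceA recorded
print(on_interest("/ndn/example", 0x2A, "faceB"))  # dropped, faceB not recorded
print(on_data("/ndn/example"))  # data goes only to faceA; faceB is black-holed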

There are multiple ways to solve this.  One is to change how aggregation
is done (potentially the right choice). Another is to never fork an
interest (also a potentially valid choice). Yet another is to not drop the
interest with the repeated nonce.

I admit that I don't know what the NDN solution for this is.  I do know,
from both Alex's and Lixia's emails, that there is a
proposal/technique/suggestion that any router that forks interests should
pick a new nonce for each forked interest.

This seems like an interesting choice.  If you do pick a new nonce every
time you fork an interest, then the only remaining use for nonces is loop
detection.  Since loop detection is more easily done with a hop limit,
it's not clear why NDN still keeps the nonce as a core feature.

Nonces:
A- Can't ensure that loops halt
B- Break aggregation (if nodes discard repeated nonces and nodes fork
interests with the same nonce).



So, again, if we're not doing B (because nodes use different nonces when
they fork) and nonces can't ensure that loops halt, then why not look at
other alternatives, like hop limits?
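
For comparison, a minimal sketch of hop-limit-based loop mitigation
(illustrative Python; the initial value of 32 is an assumption): each hop
decrements the counter and drops the interest at zero, so a looping
interest dies after a bounded number of forwards, with no per-packet
state kept at the routers.

HOP_LIMIT = 32  # assumed initial hop limit set by the consumer

def next_hop_limit(hop_limit):
    """Return the decremented hop limit, or None to drop the interest."""
    if hop_limit <= 0:
        return None
    return hop_limit - 1

hops, forwards = HOP_LIMIT, 0
while hops is not None:
    hops = next_hop_limit(hops)
    if hops is not None:
        forwards += 1
print("looping interest dropped after %d forwards" % forwards)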


There are situations where nonces might be useful for exploration.  There
are also situations where case B can be solved by having a separate
pruning protocol.  So my argument is not to get rid of nonces completely;
it's to make nonces optional.  After all, Lixia just stated that nodes
can change nonces, so effectively they're not required for end-to-end
operation.  Only some intermediate nodes need to understand what they are
about.

After all, given all the arguments NDN makes about leaving room for
exploration (like having a flexible TLV), wouldn't it make sense to make
nonces optional, so that other methods can be explored?  Again, NDN
argues for a TLV that is small enough to carry 1+1, yet you're forcing
every packet to waste space carrying a nonce that is only useful in edge
cases.

Nacho




--
Nacho (Ignacio) Solis
Protocol Architect
Principal Scientist
Palo Alto Research Center (PARC)
+1(650)812-4458
Ignacio.Solis at parc.com





On 3/1/15, 8:49 AM, "Dave Oran (oran)" <oran at cisco.com> wrote:

>Given that any Interest arriving after a PIT entry has been satisfied (or
>timed out) has the hazard of re-creating the PIT entry and getting
>processed when it shouldn't, the "DeadNonceList" entries have to persist
>for at least the MPL (maximum packet lifetime) of the network.  In the IP
>world, various pathologies can cause this period to be very long, on the
>order of minutes.  What's the proposed DeadNonce timer for NFD?  What's
>the worst-case memory pressure?  It seems like it could be gigantic for a
>high-speed router.  Are there replay attacks that can cause the
>DeadNonceList to grow without bound?
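>
>As a rough back-of-envelope sketch (every number below is an assumption
>for illustration, not an NFD measurement or default), the state needed
>to remember every nonce for an MPL-scale period adds up quickly:
>
>interest_rate = 1_000_000  # interests/second at a high-speed router (assumed)
>retention_s = 120          # dead-nonce lifetime on the order of minutes (assumed)
>bytes_per_key = 8          # e.g. a 64-bit hash of name+nonce (assumed)
>
>entries = interest_rate * retention_s
>print(f"{entries:,} entries, ~{entries * bytes_per_key / 2**30:.1f} GiB")
># -> 120,000,000 entries, ~0.9 GiB for the keys alone, before any
># container overhead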
>
>Some folks have argued this isn't an implementation question so much as
>an architecture question around the use of nonces for loop detection in
>the first place.
>
>DaveO.
>
>> On Mar 1, 2015, at 11:21 AM, Lan Wang (lanwang) <lanwang at memphis.edu>
>>wrote:
>> 
>> Junxiao,
>> 
>> Thank you very much for the explanation and for finding the root cause
>>of the problem.  It is causing wild oscillations in the NDNPing delays
>>(switching back and forth between the best path and worse paths) because
>>the measurements that the strategy is based on were wrong.  I hope this
>>can be fixed as soon as possible since we need the correct results for
>>our paper.
>> 
>> 
>> Conceptually the solution requires duplicate suppression (at least for
>>some time) after a PIT entry has been satisfied.  I thought the
>>DeadNonceList was designed for this purpose.  If you check the
>>DeadNonceList before the PIT entry, then you'll detect the duplicate
>>easily.   
>> 
>> However, if you do not want to check the DeadNonceList first (for
>>whatever reason I don't understand), the current Interest processing
>>flow has a couple of issues, if I understand correctly:
>> 
>> 1. The current logic assumes that there's no need to record a dead
>>nonce if the PIT entry is satisfied.  The example shows that although
>>there's no danger of interest looping, there can be other dangers,
>>e.g., the wrong measurements above.
>> 
>> 2. The in-record and out-record are deleted when the PIT entry is
>>satisfied.  This causes problems since the current logic checks these
>>records before checking the DeadNonceList.
>> 
>> So to fix the above, the code needs to (a) record the dead nonce
>>regardless of whether the PIT entry is satisfied and also (b) check the
>>DeadNonceList even after a PIT entry is satisfied.
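>> 
>> As a sketch of what (a) and (b) would imply for the processing order
>>(illustrative Python, not the actual NFD pipeline; the data structures
>>are simplified):
>> 
>> dead_nonce_list = set()  # (name, nonce) pairs kept for a fixed lifetime
>> pit = {}                 # name -> set of (in_face, nonce)
>> 
>> def on_incoming_interest(name, nonce, in_face):
>>     if (name, nonce) in dead_nonce_list:   # (b) checked even after satisfaction
>>         return "drop duplicate"
>>     pit.setdefault(name, set()).add((in_face, nonce))
>>     return "forward"
>> 
>> def on_incoming_data(name):
>>     for _face, nonce in pit.pop(name, set()):
>>         dead_nonce_list.add((name, nonce))  # (a) recorded unconditionally
>>     return "return data to downstreams"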
>> 
>> Thanks a lot.
>> 
>> Lan
>> 
>> On Mar 1, 2015, at 12:53 AM, Junxiao Shi <shijunxiao at email.arizona.edu>
>>wrote:
>> 
>>> Hi Lan
>>> 
>>> Trace of CSU node has:
>>>   • 5159.119 interestFrom UCLA
>>>   • 5159.140 interestTo MEMPHIS
>>>   • 5159.179 interestFrom UMICH
>>>   • 5159.245887 dataFrom MEMPHIS
>>>   • 5159.262 dataTo UCLA
>>>   • 5159.345243 interestFrom UIUC
>>>   • 5159.359 dataTo UIUC
>>> 
>>> Here's what happens:
>>> 
>>> 5159.119 upon receiving packet 1, new PIT entry is created with
>>>in-record of UCLA face
>>> 5159.140 Interest is forwarded to MEMPHIS as packet 2; out-record of
>>>MEMPHIS face is created
>>> 5159.179 packet 3 arrives, but it's suppressed due to duplicate Nonce
>>>in PIT entry
>>> 5159.245887 packet 4 arrives and satisfies the PIT entry; straggler
>>>timer is set to erase the PIT entry at 5159.345887
>>> 5159.262 Data is returned to UCLA as packet 5, but not to UMICH
>>>because packet 3 was suppressed; in-records and out-record of MEMPHIS
>>>face are deleted; Nonce is not inserted to DeadNonceList because
>>>there's no risk of looping (see spec for exact condition)
>>> 5159.345243 packet 6 arrives and cancels the straggler timer; it hits
>>>ContentStore and is not forwarded
>>> 5159.345887 PIT entry won't be deleted now because straggler timer was
>>>cancelled by packet 6
>>> 5159.359 Data from ContentStore is returned to UIUC as packet 7
>>> 
>>> 
>>> The problematic step is: "in-records and out-record of MEMPHIS face
>>>are deleted".
>>> Originally, the PIT entry had a NonceList data structure to detect
>>>duplicate Nonces, so deleting in-records did not affect duplicate Nonce
>>>detection.
>>> When the DeadNonceList was introduced, the PIT NonceList was dropped,
>>>and a PIT entry now reports a duplicate Nonce only if an in-record or
>>>out-record contains that Nonce.
>>> This assumes that in-records are retained until the straggler timer
>>>expires, but the forwarding pipelines cannot fulfill this assumption.
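>>> 
>>> As an illustrative sketch (Python, not NFD's data structures) of why
>>>the check fails: once the records are cleared on satisfaction, a copy
>>>arriving before the straggler timer fires is no longer recognised as a
>>>duplicate.
>>> 
>>> entry = {"in_records": {("UCLA", 251436634)},
>>>          "out_records": {("MEMPHIS", 251436634)}}
>>> 
>>> def is_duplicate(entry, nonce):
>>>     records = entry["in_records"] | entry["out_records"]
>>>     return any(n == nonce for _face, n in records)
>>> 
>>> print(is_duplicate(entry, 251436634))  # True: packet 3 is suppressed
>>> 
>>> # Data from MEMPHIS satisfies the entry: the records are cleared, but
>>> # the entry lingers until the straggler timer fires.
>>> entry["in_records"].clear()
>>> entry["out_records"].clear()
>>> 
>>> print(is_duplicate(entry, 251436634))  # False: packet 6 slips through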
>>> 
>>> This issue is tracked as Bug 2592.
>>> Root cause is found, but I need to think about the design.
>>> 
>>> Yours, Junxiao
>>> 
>>> On Sat, Feb 28, 2015 at 11:02 PM, Lan Wang (lanwang)
>>><lanwang at memphis.edu> wrote:
>>> csu/dump.0csu:1425075159.119112 From: 1.0.0.42, To: 1.0.0.41, Tunnel
>>>Type: UDP, INTEREST:
>>>/ndn/edu/caida/ping/251436634?ndn.MustBeFresh=1&ndn.Nonce=251436634,
>>>size: 48
>>> csu/dump.4csu:1425075159.140391 From: 1.0.0.29, To: 1.0.0.30, Tunnel
>>>Type: UDP, INTEREST:
>>>/ndn/edu/caida/ping/251436634?ndn.MustBeFresh=1&ndn.Nonce=251436634,
>>>size: 48
>>> csu/dump.6csu:1425075159.179620 From: 1.0.0.50, To: 1.0.0.49, Tunnel
>>>Type: UDP, INTEREST:
>>>/ndn/edu/caida/ping/251436634?ndn.MustBeFresh=1&ndn.Nonce=251436634,
>>>size: 48
>>> csu/dump.4csu:1425075159.245887 From: 1.0.0.30, To: 1.0.0.29, Tunnel
>>>Type: UDP, DATA: /ndn/edu/caida/ping/251436634, size: 392
>>> csu/dump.0csu:1425075159.262268 From: 1.0.0.41, To: 1.0.0.42, Tunnel
>>>Type: UDP, DATA: /ndn/edu/caida/ping/251436634, size: 392
>>> csu/dump.5csu:1425075159.345243 From: 1.0.0.46, To: 1.0.0.45, Tunnel
>>>Type: UDP, INTEREST:
>>>/ndn/edu/caida/ping/251436634?ndn.MustBeFresh=1&ndn.Nonce=251436634,
>>>size: 48
>>> csu/dump.5csu:1425075159.359534 From: 1.0.0.45, To: 1.0.0.46, Tunnel
>>>Type: UDP, DATA: /ndn/edu/caida/ping/251436634, size: 392
>>> 
>>> Seems that CSU was not doing duplicate Interest suppression correctly.
>>>  The Interest from UIUC to CSU received at time 1425075159.345243
>>>should have been dropped because it's a duplicate of the earlier
>>>Interest from UCLA to CSU received at time 1425075159.119112, but it
>>>was not and instead brought data back.  The time difference between the
>>>two Interests is 226ms.  I'm not sure how the duplicate suppression is
>>>implemented.  Someone more familiar with the forwarding pipeline please
>>>help explain the above trace.  Thanks.
>> 
>> _______________________________________________
>> Nfd-dev mailing list
>> Nfd-dev at lists.cs.ucla.edu
>> http://www.lists.cs.ucla.edu/mailman/listinfo/nfd-dev
>




