[Nfd-dev] NFD call comments: autoconfiguration

Junxiao Shi shijunxiao at email.arizona.edu
Fri May 8 12:10:54 PDT 2020


Dear folks

I have the following comments on the 2020-05-08 autoconfiguration
presentation.
To prevent complaints, I'm writing my comments instead of trying to speak
during conference calls.

I totally agree with Lixia's definition of NDN configuration:
The goal is to be able to publish and receive Data, including signing and
verification.
The steps include obtaining certificates and trust schemas (note that the
trust anchor is part of the trust schema, according to the schematized
trust paper).


Once again, the question of "whether a face is an interface or an adjacency
<https://www.lists.cs.ucla.edu/pipermail/ndn-interest/2015-June/000716.html>"
came up.
My well-known opinion has been: a face is an interface
<https://hdl.handle.net/10150/625652>, and you should broadcast everything.
Hardware-based packet filtering can avoid the performance degradation
caused by broadcasting.
In today's call, Lixia also said that a face should be an interface, and
that broadcast should be used by default.
Note that there has been some confusion over the terminology "multicast vs
broadcast". In NFD, these transports are called "multicast" because they
use a well-known multicast group address. The effect is broadcasting to
every node that understands NDN. There is really no need to use the
FF:FF:FF:FF:FF:FF broadcast address and bother other nodes that do not
even understand NDN.
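
For reference, the Ethernet multicast transport is configured in the
face_system section of nfd.conf. This excerpt is a sketch based on the
sample configuration shipped with recent NFD versions:

    face_system
    {
      ether
      {
        mcast yes                        ; enable Ethernet multicast faces
        mcast_group 01:00:5E:00:17:AA    ; well-known NDN group address,
                                         ; not FF:FF:FF:FF:FF:FF
      }
    }

A NIC that has not joined this group discards such frames in hardware,
which is exactly the hardware-based packet filtering mentioned above.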

One problem with "broadcast everything" is that today's WiFi access points
perform poorly with broadcast frames.
In today's call, Lixia attributed this to a "layer 9, human issue": that
IEEE 802.11 specifies multicast and broadcast frames to be transmitted at
1 Mbps. This is a misunderstanding. The real reason is explained in
draft-ietf-mboned-ieee802-mcast-problems
<https://tools.ietf.org/html/draft-ietf-mboned-ieee802-mcast-problems-11>
section 3.1: WiFi multicast frames are not acknowledged, so the AP has to
transmit them at the basic rate to maximize the probability of successful
reception.

To work around the WiFi broadcast problem, Teng wants to use unicast after
self-learning has learned a route.
The design I came up with was: every incoming packet carries an EndpointId
<https://redmine.named-data.net/issues/4283> that indicates its source
unicast address; the self-learning strategy can then send an outgoing
packet via unicast by tagging it with the same EndpointId.

The 2019-10-30 NFD call had a long discussion on this. During that call,
Lixia was unhappy that NDN nodes have to deal with unicast addressing, but
acknowledged the broadcast problem. Lixia agreed that unicast could be
used, but it must be "hidden under the rug".
In Lixia's mind, the "rug" would be "in the face" (as recorded in the
duplicate transmission suppression issue
<https://redmine.named-data.net/issues/1282#note-18>). However, these
algorithms need to know packet names, while the face does not know names,
so they cannot be implemented in the face.

Eventually, the EndpointId design was scrapped
<https://redmine.named-data.net/journals/26056/diff?detail_id=22926>.
Instead, whenever self-learning wants to send a packet via unicast, it
must create a unicast face if one does not already exist. Effectively, the
face becomes an adjacency; the sketch below illustrates the result.
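
To make the current behavior concrete, here is roughly what self-learning
now does internally, expressed as equivalent nfdc commands (an
illustration only: the strategy performs these steps programmatically, and
the MAC address, network interface, and name prefix are hypothetical):

    # create a unicast Ethernet face toward the learned MAC address,
    # if one does not already exist
    nfdc face create remote ether://[08:00:27:01:02:03] local dev://eth0

    # install the learned route toward that unicast face
    nfdc route add prefix /my-name nexthop ether://[08:00:27:01:02:03]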

I observe that "whether a face is an interface or an adjacency" has been a
major issue in NFD design, one that impedes design progress on
self-learning, duplicate transmission suppression
<https://redmine.named-data.net/issues/1282>, and other features that may
need to use both Ethernet multicast and unicast.
We need a final and irrevocable agreement on this choice, because going
back and forth wastes development work.


The next question was: if a user has installed one or more NFD nodes, how
do they know that "NDN is working"?
This divides into two scenarios:

   - Scenario L: The user has two or more nodes, and does not need to
   connect to the global NDN testbed.
   - Scenario G: The user has only one node, and wants to connect to the
   global NDN testbed.

Lixia believes that only Scenario L is worth considering. However, I think
both should be supported, and they could be supported with the same
configuration.

In Scenario L, the user experience would be:

   1. Start NFD on both nodes.
   2. On the producer node, run ndnpingserver /my-name , where "/my-name"
   is any name the user can make up, except that it cannot start with
   "/localhost" or "/ndn".
   3. On the consumer node, run ndnping /my-name . This should get
   responses from the producer.

Note that there are no manual face creation or route insertion steps; the
commands below show what autoconfiguration eliminates.
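
For contrast, without autoconfiguration the consumer node would today need
something like the following before ndnping can succeed (the producer's
address is hypothetical):

    # manually create a face toward the producer node ...
    nfdc face create udp4://192.0.2.1:6363

    # ... and manually insert a route toward that face
    nfdc route add prefix /my-name nexthop udp4://192.0.2.1:6363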

In Scenario G, the user experience would be:

   1. Start NFD.
   2. Run ndnping /ndn/multicast . This should get responses from a testbed
   router.

Again, note that there are no manual face creation or route insertion steps.

To achieve both scenarios, the codebase changes include:

   - Set self-learning as the default strategy.
   - Execute ndn-autoconfig during NFD initialization, to connect to the
   testbed if it is reachable.
   - Set best-route as the strategy for the /ndn prefix, but only after a
   testbed connection has been established.
   - Run ndnpingserver /ndn/multicast on three or more testbed routers.

The user needs to install both the NFD and ndn-tools packages. I think
this is a reasonable requirement.
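
For readers who want to approximate this behavior on an unmodified NFD
today, the first three changes correspond roughly to these commands (a
sketch; the strategy names are NFD's well-known strategy name prefixes):

    # make self-learning the default strategy
    nfdc strategy set prefix / strategy /localhost/nfd/strategy/self-learning

    # connect to the global NDN testbed if a hub is reachable
    ndn-autoconfig

    # after the testbed connection is established, use best-route under /ndn
    nfdc strategy set prefix /ndn strategy /localhost/nfd/strategy/best-route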


Alex commented that the configuration could sometimes be completely
different depending on the network environment.
In that case, the Ubuntu package could present a configuration menu that
allows the user to select a preferred configuration.
For example, there could be a menu with options:
[X] create a personal NDN network
[ ] join a personal NDN network
[ ] connect to the global NDN testbed


In today's call, Lixia mentioned that auto prefix propagation can only
propagate to one connected router. This is partially true.
The truth is: auto prefix propagation can only *reliably* propagate to one
connected router.
The reason is: every router listens on the same prefix /localhop/nfd/rib,
so the end host cannot distinguish among them when sending prefix
registration commands (see #3142
<https://redmine.named-data.net/issues/3142>).

One could configure the end host to multicast the prefix registration
commands to several connected routers, but then the auto prefix propagation
component would be unable to receive all Data replies: an Interest can be
satisfied by at most one Data, so if the command succeeds on one connected
router and fails on another, the component might receive only the
successful response and thus would not retry the command.
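
For reference, a prefix registration command is a signed command Interest
whose name looks roughly like this (per the NFD management protocol;
signing components elided):

    /localhop/nfd/rib/register/<ControlParameters>/<signed-command-components>

Because every connected router answers under this identical prefix, the
Data replies from different routers all match the same pending Interest,
and at most one of them is delivered to the end host.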
There are two ways to resolve this problem:

   - #3142 <https://redmine.named-data.net/issues/3142> and #3143
   <https://redmine.named-data.net/issues/3143>: each connected router
   should listen on a different name prefix.
   - #3162 <https://redmine.named-data.net/issues/3162>: do not aggregate
   Interests with different forwarding hints, treat NextHopFaceId as a
   forwarding hint, and use NextHopFaceId in the prefix registration
   commands sent by the auto prefix propagation component.


Yours, Junxiao